About Python 3 Virtual Environments (venv)

Python is a popular high-level programming language widely used in web development, data science, artificial intelligence, and other fields. To keep projects stable and reliable, Python 3 introduced the concept of venv (virtual environments), which lets developers create a separate environment for each Python application; these environments are isolated from one another and do not interfere with each other.

venv is a built-in module in Python 3 that lets you create and manage virtual environments from the command line. Below is a step-by-step walkthrough of creating a virtual environment with venv.

First, open a terminal and change to the directory where you want to create the virtual environment. Then run the following command:

```
python3 -m venv myenv
```

This command creates a virtual environment named "myenv" in the current directory. You can replace "myenv" with any name you like.

Next, activate the virtual environment by running the following command:

– On Windows:

```
myenv\Scripts\activate.bat
```

– On Mac or Linux:

```
source myenv/bin/activate
```

This activates the virtual environment, and you will see its name prepended to your command prompt.

Now you can install the Python libraries or modules you need. Inside the virtual environment, run:

```
pip install <package_name>
```

When you have finished your development work, you can leave the virtual environment. Run the following command:

```
deactivate
```

This exits the virtual environment and returns you to your original Python environment.
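
If you want to double-check which environment the interpreter is using at any point, a quick check from inside Python (a minimal sketch using only the standard library) looks like this:

```
import sys

# Inside a venv, sys.prefix points at the virtual environment's directory,
# while sys.base_prefix still points at the original Python installation.
if sys.prefix != sys.base_prefix:
    print(f"Running inside a virtual environment: {sys.prefix}")
else:
    print("Running with the base Python installation")
```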

In short, venv is an extremely useful tool that helps developers manage an application's dependencies and avoid version conflicts. If you are not using venv yet, now is the time to start!

One Program Written in Python, Go, and Rust

[Image: Python, Go, and Rust mascots]

Update (2019-07-04): Some kind folks have suggested changes to the implementations to make them more idiomatic, so the code here may differ from what's currently in the repos.


This is a subjective, primarily developer-ergonomics-based comparison of the three languages from the perspective of a Python developer, but you can skip the prose and go to the code samples, the performance comparison if you want some hard numbers, the takeaway for the tl;dr, or the Python, Go, and Rust diffimg implementations.

A few years ago, I was tasked with rewriting an image processing service. To tell whether my new service was creating the same output as the old given an image and one or more transforms (resize, make a circular crop, change formats, etc.), I had to inspect the images myself. Clearly I needed to automate this, but I could find no existing Python library that simply told me how different two images were on a per-pixel basis. Hence diffimg, which can give you a difference ratio/percentage, or generate a diff image (check out the readme to see an example).

The initial implementation was in Python (the language I'm most comfortable in), with the heavy lifting done by Pillow. It's usable as a library or a command line tool. The actual meat of the program is very small, only a few dozen lines, thanks to Pillow. Not a lot of effort went into building this tool (xkcd was right, there's a Python module for nearly everything), but it's at least been useful for a few dozen people other than myself.

A few months ago, I joined a company that had several services written in Go, and I needed to get up to speed quickly on the language. Writing diffimg-go seemed like a fun and possibly even useful way to do this. Here are a few points of interest that came out of the experience, along with some that came up while using it at work:

Comparing Python and Go

(Again, the code: diffimg (python) and diffimg-go)

  • Standard Library: Go comes with a decent image standard library module, as well as a command line flag parsing library. I didn’t need to look for any external dependencies; the diffimg-go implementation has none, where the Python implementation uses the fairly heavy third party module (ironically) named Pillow. Go’s standard library in general is more structured and well thought out, while Python’s is organically evolved, created by many authors over years, with many differing conventions. The Go standard library’s consistency makes it easier to predict how any given module will function, and the source code is extremely well documented.
    • One downside of using the standard image library is that it does not automatically detect if the image has an alpha channel; pixel values have four channels (RGBA) for all image types. The diffimg-go implementation therefore requires the user to indicate whether or not they want to use the alpha channel. This small inconvenience wasn’t worth finding a third party library to fix.
    • One big upside is that there’s enough in the standard library that you don’t need a web framework like Django. It’s possible to build a real, usable web service in Go without any dependencies. Python’s claim is that it’s batteries-included, but Go does it better, in my opinion.
  • Static Type System: I've used statically typed languages in the past, but my programming for the past few years has mostly been in Python. The experience was somewhat annoying at first; it felt as though it was simply slowing me down and forcing me to be excessively explicit, whereas Python would just let me do what I wanted, even if I got it wrong occasionally. Somewhat like giving instructions to someone who always stops you to ask you to clarify what you mean, versus someone who always nods along and seems to understand you, though you're not always sure they're absorbing everything. It will decrease the number of type-related bugs for free, but I've found that I still need to spend nearly the same amount of time writing tests.
    • One of the common complaints of Go is that it does not have user-implementable generic types. While this is not a must-have feature for building a large, extensible application, it certainly slows development speed. Alternative patterns have been suggested, but none of them are as effective as having real generic types.
    • One upside of the static type system is that reading through an unfamiliar codebase is easier and faster. Good use of types imbues a lot of extra information that is lost with a dynamic type system.
  • Interfaces and Structs: Go uses interfaces and structs where Python would use classes. This was probably the most interesting difference to me, as it forced me to differentiate the concept of a type that defines behavior versus a type that holds information. Python and other “traditionally object-oriented” languages would encourage you to mash these together, but there are pros and cons to both paradigms:
    • Go heavily encourages composition over inheritance. While it has inheritance via embedding, without classes, it’s not as easy to forward both data and methods. I generally agree that composition is the better default pattern to reach for, but I’m not an absolutist and some situations are a better fit for inheritance, so I’d prefer not to have the language make this decision for me.
    • Divorcing implementations for interfaces means you need to write similar code several times if you have many types that are similar to each other. Because of the lack of generic types, there are situations in Go where I wouldn’t be able to reuse code, though I would in Python.
    • However, because Go is statically typed, the compiler/linter will tell you when you’re writing code that would have caused a runtime error in Python when you try to access a method or attribute that may not exist. Python linters can get a bit of this functionality, but because of the language’s dynamicity, the linter can’t know exactly what methods/attributes will exist until runtime. Statically defined interfaces and structs are the only way to know what’s available at compile time and during development, making Go that compiles more trustworthy than Python that runs.
  • No Optional Arguments: Go only has variadic functions which are similar to Python's keyword arguments, but less useful, since the arguments need to be of the same type. I found keyword arguments to be something I really missed, mainly for how much easier refactoring is if you can just throw a kwarg of any type onto whatever function needs it without having to rewrite every one of its calls (see the short sketch after this list). I use this feature quite often at work; it's saved me a lot of time over the years. Not having the feature made my implementation of how to handle whether or not the diff image should be created based on the command line flags somewhat clumsy.
  • Verbosity: Go is a bit more verbose (though not Java verbose). Part of that is because type system does not have generics, but mainly the fact that the language itself is very small and not heavily loaded with features (you only get one looping construct!). I missed having Python’s list comprehensions and other functional programming features. If you’re comfortable with Python, you can go through the Tour of Go in a day or two, and you’ll have been exposed to the entirety of the language.
  • Error Handling: Python has exceptions, whereas Go propagates errors by returning value, error pairs from functions wherever something may go wrong. Python lets you catch errors at any point in the call stack as opposed to requiring you to manually pass them back up over and over again. This again results in brevity and code that isn't littered with Go's infamous if err != nil pattern, though you do need to be aware of what possible exceptions can be thrown by a function and all(!) of its internal calls (using except Exception: is a usually-bad-practice workaround for this). Good docstrings and tests can help here, which you should be writing in either language. Go's system is definitely safer. You're still allowed to shoot yourself in the foot by ignoring the err value, but the system makes it obvious that this is a bad idea.
  • Third Party Modules: Prior to Go modules, Go's package manager would just throw all downloaded packages into $GOPATH/src instead of the project's directory (like most other languages). The path for these modules inside $GOPATH would also be built from the URL where the package is hosted, so your import would look something like import "github.com/someuser/somepackage". Embedding github.com inside the source code of almost all Go codebases seems like a strange choice. In any case, Go now allows the conventional way of doing things, but Go modules are still new, so this quirk will remain common in wild Go code for some time.
  • Asynchronicity: Goroutines are a very convenient way to fire off asynchronous tasks. Before async/await, Python’s asynchronous solutions were somewhat hairy. Unfortunately I haven’t written much real-world async code in Python or Go, and the simplicity of diffimg didn’t seem to lend itself to the added overhead of asynchronicity, so I don’t have too much to say here, though I do like Go’s channels as a way to handle multiple async tasks. My understanding is that for performance, Go still has the upper hand here as goroutines can make use of full multiprocessor parallelism, where Python’s basic async/await is still stuck on one processor, so mainly useful for I/O bound tasks.
  • Debugging: Python wins. pdb (and more sophisticated options like ipdb) is extremely flexible; once you've entered the REPL, you're able to write whatever code you want. Delve is a good debugger, but it's not the same as dropping straight into an interpreter with the full power of the language at your fingertips.
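
To make the keyword-argument point above concrete, here is a small Python sketch (hypothetical function and variable names, not code from diffimg): a new option can be added as a keyword argument with a default, so none of the existing call sites need to change.

```
def diff_ratio(pixels1, pixels2, ignore_alpha=False):
    # Hypothetical helper, not diffimg's actual API: compare two flat RGBA byte lists.
    # ignore_alpha was added later as a keyword argument with a default value,
    # so older calls like diff_ratio(a, b) keep working unchanged.
    diff = total = 0
    for i in range(0, min(len(pixels1), len(pixels2)), 4):
        channels = 3 if ignore_alpha else 4
        for c in range(channels):
            diff += abs(pixels1[i + c] - pixels2[i + c])
            total += 255
    return diff / total if total else 0.0

a = [10, 20, 30, 255, 0, 0, 0, 255]
b = [12, 18, 30, 255, 0, 0, 10, 255]
print(diff_ratio(a, b))                     # existing call sites stay the same
print(diff_ratio(a, b, ignore_alpha=True))  # new behaviour is opted into by keyword
```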

Go Summary

My initial impression of Go is that because its ability to abstract is (purposely) limited, it’s not as fun a language as Python is. Python has more features and thus more ways of doing something, and it can be a lot of fun to find the fastest, most readable, or “cleverest” solution. Go actively tries to stop you from being “clever.” I would go as far as saying that Go’s strength is that it’s not clever.

Its minimalism and lack of freedom are constraining for a single developer just trying to materialize an idea. However, this weakness becomes its strength when the project scales to dozens or hundreds of developers – because everyone's working with the same small toolset of language features, it's more likely to be uniform and thus understandable by others. It's still very possible to write bad Go, but it's more difficult to create the monstrosities that more "powerful" languages will let you produce.

After using it for a while, it makes sense to me why a company like Google would want a language like this. New engineers are being introduced to enormous codebases constantly, and in a messier/more powerful language and under the pressure of deadlines, complexity could be introduced faster than it can be removed. The best way to prevent that is with a language that has less capacity for it.

With that said, I’m happy to work on a Go codebase in the context of a large application with a diverse and ever-growing team. In fact, I think I’d prefer it. I just have no desire to use it for my own personal projects.

Enter Rust

A few weeks ago, I decided to give an honest go at learning Rust. I had attempted to do so before, but without enough context for why all these constraints were being forced on me, I found the type system and borrow checker confusing and cumbersome for the tasks I was trying to do. However, since then, I've learned a bit more about what happens with memory during the execution of a program. I also started with the book instead of just attempting to dive in headfirst. This was massively helpful, and probably the best introduction to any programming language I've ever experienced.

After I had gone through the first dozen or so chapters of the book, I felt confident enough to try another implementation of diffimg (at this point, I had about as much experience with Rust as I’d had with Go when I wrote diffimg-go). It took me a bit longer to write than the Go implementation, which itself took longer than Python. I think this would be true even taking into account my greater comfort with Python – there’s just more to write in both languages.

Some of the things that I took notice of when writing diffimg-rs:

  • Type System: I was comfortable with the more basic static type system of Go by now, but Rust's is significantly more powerful (and complicated). Generic types, enumerated types, traits, reference types, and lifetimes are all additional concepts that I had to learn on top of Go's much simpler interfaces and structs. Additionally, Rust uses its type system to implement features that other languages don't use the type system for (example: the Result type, which I'll talk about soon). Luckily, the compiler/linter is extremely helpful in telling you what you're doing wrong, and often even tells you exactly how to fix it. Despite this, I've spent significantly more time on it than I did learning Go's type system, and I'm still not comfortable with all the features yet.
    • There was one place where, because of the type system, the implementation of the imaging library I was using would have led to an uncomfortable amount of code repetition. I only ended up matching the two most important enum types, but matching the others would lead to another half dozen or so lines of nearly identical code. At this scale it's not an issue, but it rubs me the wrong way. Maybe it's a good candidate for using macros, which I still need to experiment with.
      let mut diff = match image1.color() {
          image::ColorType::RGB(_) => image::DynamicImage::new_rgb8(w, h),
          image::ColorType::RGBA(_) => image::DynamicImage::new_rgba8(w, h),
          // keep going for all 7 types?
          _ => return Err(
              format!("color mode {:?} not yet supported", image1.color())
          ),
      };
      
  • Manual Memory Management: Python and Go pick up your trash for you. C lets you litter everywhere, but throws a fit when it steps on your banana peel. Rust slaps you and demands that you clean up after yourself. This stung at first, since I'm spoiled and usually have my languages pick up after me, more so even than moving from a dynamic to a statically typed language did. Again, the compiler tries to help you as much as is possible, but there's still a good amount of studying you'll need to do to understand what's really going on.
    • One nice part about having such direct access to the memory (and the functional programming features of Rust) is that it simplified the difference ratio calculation, because I could simply map over the raw byte arrays instead of having to index each pixel by coordinate.
  • Functional Features: Rust strongly encourages a functional approach: it has a FP-friendly type system like Haskell, immutable types, closures, iterators, pattern matching, and more, but also allows imperative code. It’s similar to writing OCaml (interestingly, the original Rust compiler was written in OCaml). Because of this, code is more concise than you’d expect for a language that competes with C.
  • Error Handling: Instead of the exception model that Python uses or the paired return values that Go uses for error handling, Rust makes use of its enumerated types: Result returns either Ok(value) or Err(error). This is closer to Go's way if you squint, but is a bit more explicit and leverages the type system. There's also syntactic sugar for checking a statement for an Err and returning early: the ? operator (Go could use something like this, IMO).
  • Asynchronicity: Async/await hasn’t quite landed for Rust yet, but the final syntax has recently been agreed upon. Rust also has some basic threading features in the standard library that seem a bit easier to use than Python’s, but I haven’t spent much time with it. Go still seems to have the best offerings here.
  • Tooling: rustup and cargo are extremely polished implementations of a language version manager and package/module manager, respectively. Everything "just works." I especially love the autogenerated docs. The Python options for these are somewhat organic and finicky, and as I mentioned before, Go has a strange way of managing modules, though aside from that, its tooling is in a much better state than Python's.
  • Editor Plugins: My .vimrc is embarrassingly large, with at least three dozen plugins. I have some plugins for linting, autocompleting, and formatting both Python and Go, but the Rust plugins were easier to set up, more helpful, and more consistent compared to the other two languages. The rust.vim and vim-lsp plugins (along with the Rust Language Server) were all I needed to get an extremely powerful configuration. I haven’t tested out other editors with Rust but with the excellent editor-agnostic tooling that Rust comes with, I’d expect them to be just as helpful. The setup provides the best go-to-definition I’ve ever used. It works perfectly on local, standard library, and third-party code out of the box.
  • Debugging: I haven't tried out a debugger with Rust yet (since the type system and println! take you pretty far), but you can use rust-gdb and rust-lldb, wrappers around the gdb and lldb debuggers that are installed with the initial rustup. The experience should be predictable if you've used those debuggers before with C. As mentioned previously, the compiler error messages are extremely helpful.

Rust Summary

I definitely wouldn’t recommend attempting to write Rust without at least going through the first few chapters of the book, even if you’re already familiar with C and memory management. With Go and Python, as long as you have some experience with another modern imperative programming language, they’re not difficult to just start writing, referring to the docs when necessary. Rust is a large language. Python also has a lot of features, but they’re mostly opt-in. You can get a lot done just by understanding a few primitive data structures and some builtin functions. With Rust, you really need to understand the complexity inherent to the type system and borrow checker, or you’re going to be getting tangled up a lot.

As far as how I feel when I write Rust, it’s a lot of fun, like Python. Its breadth of features makes it very expressive. While the compiler stops you a lot, it’s also very helpful, and its suggestions on how to solve your borrowing/typing problems usually work. The tooling as I’ve mentioned is the best I’ve encountered for any language and doesn’t bring me a lot of headaches like some other languages I’ve used. I really like using the language and will continue to look for opportunities to do so, where the performance of Python isn’t good enough.

Code Samples

I've extracted the chunks of each diffimg which calculate the difference ratio. To summarize how it works for Python: this takes the diff image generated by Pillow, sums the values of all channels of all pixels, and returns the ratio of that sum to the maximum possible sum (that of a pure white image of the same size).

Python:


diff_img = ImageChops.difference(im1, im2)
stat = ImageStat.Stat(diff_img)
sum_channel_values = sum(stat.mean)
max_all_channels = len(stat.mean) * 255
diff_ratio = sum_channel_values / max_all_channels

For Go and Rust, the method is a little different: Instead of creating a diff image, we just loop over both input images and keep a running sum of the differences of each pixel. In Go, we’re indexing into each image by coordinate…

Go:


func GetRatio(im1, im2 image.Image, ignoreAlpha bool) float64 {
  var sum uint64
  width, height := getWidthAndHeight(im1)
  for y := 0; y < height; y++ {
    for x := 0; x < width; x++ {
      sum += uint64(sumPixelDiff(im1, im2, x, y, ignoreAlpha))
    }
  }
  var numChannels = 4
  if ignoreAlpha {
    numChannels = 3
  }
  totalPixVals := (height * width) * (maxChannelVal * numChannels)
  return float64(sum) / float64(totalPixVals)
}

… but in Rust, we’re treating the images as what they really are in memory, a series of bytes that we can just zip together and consume.

Rust:


pub fn calculate_diff(
    image1: DynamicImage,
    image2: DynamicImage
  ) -> f64 {
  let max_val = u64::pow(2, 8) - 1;
  let mut diffsum: u64 = 0;
  for (&p1, &p2) in image1
      .raw_pixels()
      .iter()
      .zip(image2.raw_pixels().iter()) {
    diffsum += u64::from(abs_diff(p1, p2));
  }
  let total_possible = max_val * image1.raw_pixels().len() as u64;
  let ratio = diffsum as f64 / total_possible as f64;

  ratio
}

Some things to take note of in these examples:

  • Python has the least code by far. Obviously, it’s leaning heavily on features of the image library it’s using, but this is indicative of the general experience of using Python. In many cases, a lot of the work has been done for you because the ecosystem is so developed that there are mature pre-existing solutions for everything.
  • There's type conversion in the Go and Rust examples. In each block there are three numerical types being used: uint8/u8 for the pixel channel values (the type is inferred in both Go and Rust, so you don't see any explicit mention of either type), uint64/u64 for the sum, and float64/f64 for the final ratio. For Go and Rust, there was time spent getting the types to line up, whereas Python converts everything implicitly.
  • The Go implementation’s style is very imperative, but also explicit and understandable (minus the ignoreAlpha part I mentioned earlier), even to those unaccustomed to the language. The Python example is fairly clear as well, once you understand what ImageStat is doing. Rust is definitely murkier to those unfamiliar with the language:
    • .raw_pixels() gets the image as a vector of unsigned 8-bit integers.
    • .iter() creates an iterator for that vector. Vectors by default are not iterable.
    • .zip() you may be familiar with, it takes two iterators and produces one, with each element being a tuple: (element from first vector, element from second vector).
    • We need a mut in our diffsum declaration because by default, variables are immutable.
    • If you’re familiar with C you can probably figure out why we have the &s in for (&p1, &p2): The iterator produces references to the pixel values, but abs_diff() takes the values themselves. Go supports pointers (which are not quite the same as references), but they’re not as commonly used as references are in Rust.
    • The last statement in a function is used as the return value if there isn’t a line-ending ;. A few other functional languages do this as well.

    This snippet gives you some insight into how much language-specific knowledge you’ll need to pick up to be effective in Rust.

Performance

Now for something resembling a scientific comparison. I first generated three random images of different sizes: 1×1, 2000×2000, and 10,000×10,000. Then I measured each (language, image size) combination's performance by running each diffimg ratio calculation 10 times and averaging the "real" values reported by the time command. diffimg-rs was built using --release, diffimg-go with just go build, and the Python diffimg was invoked with python3 -m diffimg. The results, on a 2015 Macbook Pro:

Image size   1×1            2000×2000        10,000×10,000
Rust         0.001s         0.490s           5.871s
Go           0.002s (2x)    0.756s (1.54x)   14.060s (2.39x)
Python       0.095s (95x)   1.419s (2.90x)   28.751s (4.89x)
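
For context, here is a rough Python sketch of this kind of wall-clock measurement (not the author's actual harness; the commands and image paths below are placeholders):

```
import statistics
import subprocess
import time

def wall_time(cmd, runs=10):
    """Average wall-clock time of a shell command over several runs."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, shell=True, check=True,
                       stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        samples.append(time.perf_counter() - start)
    return statistics.mean(samples)

# Placeholder commands; the real invocations depend on where each binary lives.
for cmd in ["./diffimg-rs a.png b.png",
            "./diffimg-go a.png b.png",
            "python3 -m diffimg a.png b.png"]:
    print(f"{cmd}: {wall_time(cmd):.3f}s")
```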

I’m losing a lot of precision because time only goes down to 10ms resolution (one more digit is shown here because of the averaging). The task only requires a very specific type of calculation as well, so a different or more complex one could have very different numbers. Despite these caveats, we can still learn something from the data.

With the 1×1 image, virtually all the time is spent in setup, not ratio calculation. Rust wins, despite using two third-party libraries (clap and image) and Go only using the standard library. I’m not surprised Python’s startup is as slow as it is, since importing a large library (Pillow) is one of its steps, and even just time python -c '' takes 0.030s.

At 2000×2000, the gap narrows for both Go and Python compared to Rust, presumably because less of the overall time is spent in setup compared to calculation. However, at 10,000×10,000, Rust is more performant in comparison, which I would guess is due to its compiler’s optimizations producing the smallest block of machine code that is looped through 100,000,000 times, dwarfing the setup time. Never needing to pause for garbage collection could also be a factor.

The Python implementation definitely has room for improvement, because as efficient as Pillow is, we're still creating a diff image in memory (traversing both input images) and then adding up each of its pixel's channel values. A more direct approach like the Go and Rust implementations would probably be marginally faster. However, a pure Python implementation would be wildly slower, since Pillow does its main work in C. Because the other two are pure language implementations, this isn't really a fair comparison, though in some ways it is, because Python has an absurd number of libraries available to you that are performant thanks to C extensions (and Python and C have a very tight relationship in general).

I should also mention the binary sizes: Rust's is 2.1mb with the --release build, and Go's is comparable at 2.5mb. Python doesn't create binaries, but .pyc files are sort of comparable, and diffimg's .pyc files are about 3kb in total. Its source code is also only about 3kb, but including the Pillow dependency, it weighs in at 24mb(!). Again, not a fair comparison because I'm using a third party imaging library, but it should be mentioned.

The Takeaway

Obviously, these are three very different languages fulfilling different niches. I’ve heard Go and Rust often mentioned together, but I think Go and Python are the two more similar/competing languages. They’re both good for writing server-side application logic (what I spend most of my time doing at work). Comparing just native code performance, Go blows Python away, but many of Python’s libraries that require speed are wrappers around fast C implementations – in practice, it’s more complicated than a naive comparison. Writing a C extension for Python doesn’t really count as Python anymore (and then you’ll need to know C), but the option is open to you.

For your backend server needs, Python has proven itself to be “fast enough” for most applications, though if you need more performance, Go has it. Rust even more so, but you pay for it with development time. Go is not far off from Python in this regard, though it certainly is slower to develop, primarily due to its small feature set. Rust is very fully featured, but managing memory will always take more time than having the language do it, and this outweighs having to deal with Go’s minimality.

It should also be mentioned that there are many, many Python developers in the world, some with literally decades of experience. It will likely never be hard to find more people with language experience to add to your backend team if you choose Python. However, Go developers are not particularly rare, and can easily be created because the language is so easy to learn. Rust developers are both rarer and harder to make since the language takes longer to internalize.

With respect to the type systems: static type systems make it easier to write more correct code, but they're not a panacea. You still need to write comprehensive tests no matter the language you use. It requires a bit more discipline, but I've found that the code I write in Python is not necessarily more error prone than Go as long as I'm able to write a good suite of tests. That said, I much prefer Rust's type system to Go's: it supports generics and pattern matching, handles errors through the type system itself, and just does more for you in general.

In the end, this comparison is a bit silly, because though the use cases of these languages overlap, they occupy very different niches. Python is high on the development-speed, low on the performance scale, while Rust is the opposite, and Go is in the middle. I enjoy writing Python and Rust more than Go (this may be unsurprising), though I'll continue to use Go at work happily (along with Python) since it really is a great language for building stable and maintainable applications with many contributors from many backgrounds. Its inflexibility and minimalism, which make it less enjoyable to use (for me), become its strength here. If I had to choose the language for the backend of a new web application, it would be Go.

I’m pretty satisfied with the range of programming tasks that are covered by these three languages – there’s virtually no project that one of them wouldn’t be a great choice for.

Top 10 Python Libraries You Must Know in 2019

In this article, we will discuss some of the top libraries in Python that developers can use to parse, clean, and represent data and implement machine learning in their existing applications.

We will be considering the following 10 libraries:

  • TensorFlow
  • Scikit-Learn
  • Numpy
  • Keras
  • PyTorch
  • LightGBM
  • Eli5
  • SciPy
  • Theano
  • Pandas


Introduction

Python is one of the most popular and widely used programming languages and has replaced many programming languages in the industry.

There are many reasons why Python is popular among developers. However, one of the most significant is its large collection of libraries that users can work with.

The simplicity of Python has attracted many developers to create new libraries for machine learning. Because of the huge collection of libraries, Python is becoming hugely popular among machine learning experts.

So, the first library is TensorFlow.

TensorFlow


What Is TensorFlow?

If you are currently working on a machine learning project in Python, then you may have heard about this popular open-source library known as TensorFlow.

This library was developed by Google in collaboration with the Brain Team. TensorFlow is used in almost every Google application for machine learning.

TensorFlow works like a computational library for writing new algorithms that involve a large number of tensor operations. Since neural networks can be easily expressed as computational graphs, they can be implemented using TensorFlow as a series of operations on Tensors. Plus, tensors are N-dimensional matrices that represent your data.
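
As a tiny illustration of the "operations on tensors" idea, here is a minimal sketch assuming TensorFlow 2.x is installed:

```
import tensorflow as tf

# Two constant tensors (N-dimensional arrays)...
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[1.0, 0.0], [0.0, 1.0]])

# ...combined with tensor operations, executed eagerly in TensorFlow 2.x.
c = tf.matmul(a, b) + tf.reduce_sum(a)
print(c.numpy())
```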

Features of TensorFlow

TensorFlow is optimized for speed, and it makes use of techniques like XLA for quick linear algebra operations.

1. Responsive Construct

With TensorFlow, we can easily visualize each and every part of the graph, which is not an option while using Numpy or SciKit.

2. Flexible

One of the most important TensorFlow features is that it is flexible in its operability: it is modular, and for the parts that you want to use as standalone components, it offers you that option.

3. Easily Trainable

It is easily trainable on CPU as well as GPU for distributed computing.

4. Parallel Neural Network Training

TensorFlow offers pipelining, in the sense that you can train multiple neural networks on multiple GPUs, which makes the models very efficient on large-scale systems.

5. Large Community

Needless to say, if it has been developed by Google, there is already a large team of software engineers who work on stability improvements continuously.

6. Open Source

The best thing about this machine learning library is that it is open source, so anyone can use it as long as they have internet connectivity.

Where Is TensorFlow Used?

You are using TensorFlow daily, but indirectly, through applications like Google Voice Search and Google Photos. These applications were developed using this library.

All the libraries created in TensorFlow are written in C and C++. However, it has a complicated frontend for Python. Your Python code gets compiled and then executed on the TensorFlow distributed execution engine, which is built using C and C++.

The number of applications of TensorFlow is literally unlimited, and that is the beauty of TensorFlow.

Scikit-Learn


What Is Scikit-learn?

It is a Python library associated with NumPy and SciPy. It is considered one of the best libraries for working with complex data.

There are a lot of changes being made in this library. One modification is the cross-validation feature, providing the ability to use more than one metric. Lots of training methods, like logistic regression and nearest neighbors, have received small improvements.

Features Of Scikit-Learn

1. Cross-validation: There are various methods to check the accuracy of supervised models on unseen data.

2. Unsupervised learning algorithms: Again, there is a large spread of algorithms on offer, from clustering, factor analysis, and principal component analysis to unsupervised neural networks.

3. Feature extraction: Useful for extracting features from images and text (e.g. bag of words).

Where Is Scikit-Learn Used?

It contains numerous algorithms for implementing standard machine learning and data mining tasks, such as dimensionality reduction, classification, regression, clustering, and model selection.
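
A minimal sketch of the classification-plus-cross-validation workflow described above (assuming scikit-learn is installed):

```
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

# A logistic regression classifier evaluated with 5-fold cross-validation.
model = LogisticRegression(max_iter=1000)
scores = cross_val_score(model, X, y, cv=5)
print(scores.mean())
```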

Numpy


What Is Numpy?

Numpy is considered one of the most popular machine learning libraries in Python.

TensorFlow and other libraries use Numpy internally for performing multiple operations on tensors. The array interface is the best and most important feature of Numpy.

Features Of Numpy

  1. Interactive: Numpy is very interactive and easy to use
  2. Mathematics: Makes complex mathematical implementations very simple
  3. Intuitive: Makes coding really easy and concepts easy to grasp
  4. Lots of Interaction: Widely used, hence a lot of open source contribution

Where Is Numpy Used?

This interface can be used to express images, sound waves, and other raw binary streams as arrays of real numbers in N dimensions.
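
For example, treating a tiny grayscale "image" as an N-dimensional array (a minimal sketch):

```
import numpy as np

# A 2x3 "image" of 8-bit pixel values.
img = np.array([[0, 64, 128], [192, 255, 32]], dtype=np.uint8)

# Vectorized operations apply to every element at once, with no Python loops.
normalized = img / 255.0
print(img.shape, normalized.mean())
```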

To apply this library to machine learning, it is important for full-stack developers to have knowledge of Numpy.

Keras


What Is Keras?

Keras is considered one of the coolest machine learning libraries in Python. It provides an easier mechanism to express neural networks. Keras also provides some of the best utilities for compiling models, processing data-sets, visualization of graphs, and much more.

In the backend, Keras uses either Theano or TensorFlow internally. Some other popular backends, like CNTK, can also be used. Keras is comparatively slow when we compare it with other machine learning libraries because it creates a computational graph using the backend infrastructure and then makes use of it to perform operations. All the models in Keras are portable.

Features Of Keras

  • It runs smoothly on both CPU and GPU.
  • Keras supports almost all the models of a neural network — fully connected, convolutional, pooling, recurrent, embedding, etc. Furthermore, these models can be combined to build more complex models.
  • Keras, being modular in nature, is incredibly expressive, flexible, and apt for innovative research.
  • Keras is a completely Python-based framework, which makes it easy to debug and explore.

Where Is Keras Used?

You are already constantly interacting with features built with Keras — it is in use at Netflix, Uber, Yelp, Instacart, Zocdoc, Square, and many others. It is especially popular among startups that place deep learning at the core of their products.

Keras contains numerous implementations of commonly used neural network building blocks such as layers, objectives, activation functions, optimizers and a host of tools to make working with image and text data easier.
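
Those building blocks combine into a model in a few lines; a minimal sketch using the Keras API bundled with TensorFlow 2.x:

```
from tensorflow import keras

# A small fully connected network: layers, activations, the optimizer, and the
# loss function are all named building blocks.
model = keras.Sequential([
    keras.Input(shape=(784,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```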

Plus, it provides many pre-processed data-sets and pre-trained models like MNIST, VGG, Inception, SqueezeNet, ResNet, etc.

Keras is also a favorite among deep learning researchers, coming in at #2. Keras has also been adopted by researchers at large scientific organizations, in particular, CERN and NASA.

PyTorch


What Is PyTorch?

PyTorch is one of the largest machine learning libraries; it allows developers to perform tensor computations with GPU acceleration, create dynamic computational graphs, and calculate gradients automatically. Beyond this, PyTorch offers rich APIs for solving application issues related to neural networks.
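
A minimal sketch of those three points, tensors, dynamic graphs, and automatic gradients, assuming PyTorch is installed:

```
import torch

# A tensor that tracks operations for automatic differentiation.
x = torch.randn(3, requires_grad=True)

# The computational graph is built dynamically as the Python code runs.
y = (x ** 2).sum()
y.backward()           # gradients computed automatically

print(x.grad)          # dy/dx = 2x

# Moving work to a GPU (if one is available) is a one-liner.
if torch.cuda.is_available():
    x = x.detach().to("cuda")
```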

This machine learning library is based on Torch, which is an open-source machine library implemented in C with a wrapper in Lua.

This machine learning library for Python was introduced in 2017, and since its inception it has been gaining popularity and attracting an increasing number of machine learning developers.

Features Of PyTorch

Hybrid Front-End

A new hybrid frontend provides ease-of-use and flexibility in eager mode, while seamlessly transitioning to graph mode for speed, optimization, and functionality in C++ runtime environments.

Distributed Training

Optimize performance in both research and production by taking advantage of native support for asynchronous execution of collective operations and peer-to-peer communication that is accessible from Python and C++.

Python First

PyTorch is not a Python binding into a monolithic C++ framework. It’s built to be deeply integrated into Python so it can be used with popular libraries and packages such as Cython and Numba.

Libraries and Tools

An active community of researchers and developers have built a rich ecosystem of tools and libraries for extending PyTorch and supporting development in areas from computer vision to reinforcement learning.

Where Is PyTorch Used?

PyTorch is primarily used for applications such as natural language processing.

It is primarily developed by Facebook's artificial-intelligence research group, and Uber's "Pyro" software for probabilistic programming is built on it.

PyTorch is outperforming TensorFlow in multiple ways, and it has been gaining a lot of attention recently.

LightGBM


What Is LightGBM?

Gradient boosting is one of the best and most popular machine learning (ML) techniques; it helps developers build new models by combining simple elementary models, namely decision trees. Because of this, there are special libraries designed for fast and efficient implementations of this method.

These libraries are LightGBM, XGBoost, and CatBoost. All these libraries are competitors that help in solving a common problem and can be utilized in almost a similar manner.

Features of LightGBM

  • Very fast computation ensures high production efficiency.

  • Intuitive, which makes it user-friendly.

  • Faster training than many other deep learning libraries.

  • Does not produce errors when you use NaN values and other categorical values.

Where Is LightGBM Used?

This library provides highly scalable, optimized, and fast implementations of gradient boosting, which makes it popular among machine learning developers; many machine learning competition winners have relied on these algorithms.
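
A minimal sketch using LightGBM's scikit-learn-style interface (assuming lightgbm and scikit-learn are installed):

```
import lightgbm as lgb
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Gradient-boosted decision trees with the familiar fit/predict API.
clf = lgb.LGBMClassifier(n_estimators=100)
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))
```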

Eli5


What Is Eli5?

Most often, the results of machine learning model predictions are not accurate, and Eli5, a machine learning library built in Python, helps in overcoming this challenge. It combines visualization with debugging of machine learning models and tracks all the working steps of an algorithm.

Features of Eli5

Moreover, Eli5 supports other libraries such as XGBoost, lightning, scikit-learn, and sklearn-crfsuite. Each of these libraries can be used with Eli5 to perform different tasks.
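
A hedged sketch of how Eli5 is typically used with a scikit-learn model (assuming eli5 and scikit-learn are installed; the exact output depends on the estimator):

```
import eli5
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=1000).fit(X, y)

# explain_weights inspects the fitted model's coefficients/feature importances;
# format_as_text renders the explanation outside of a notebook.
explanation = eli5.explain_weights(clf)
print(eli5.format_as_text(explanation))
```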

Where Is Eli5 Used?

  • Mathematical applications that require a lot of computation in a short time.
  • Eli5 plays a vital role where there are dependencies with other Python packages.
  • Legacy applications and implementing newer methodologies in various fields.

SciPy


What Is SciPy?

SciPy is a machine learning library for application developers and engineers. However, you still need to know the difference between the SciPy library and the SciPy stack. The SciPy library contains modules for optimization, linear algebra, integration, and statistics.

Features Of SciPy

The main feature of the SciPy library is that it is developed using NumPy and makes the most use of NumPy arrays.

In addition, SciPy provides all the efficient numerical routines like optimization, numerical integration, and many others using its specific submodules.

All the functions in all submodules of SciPy are well documented.

Where Is SciPy Used?

SciPy is a library that uses NumPy for the purpose of solving mathematical functions. SciPy uses NumPy arrays as the basic data structure and comes with modules for various commonly used tasks in scientific programming.

Tasks including linear algebra, integration (calculus), ordinary differential equation solving and signal processing are handled easily by SciPy.
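
For instance, numerical integration and optimization each take a couple of lines (a minimal sketch):

```
from scipy import integrate, optimize

# Integrate x^2 from 0 to 1 (exact answer: 1/3).
area, _error = integrate.quad(lambda x: x ** 2, 0, 1)

# Find the minimum of (x - 2)^2.
result = optimize.minimize_scalar(lambda x: (x - 2) ** 2)

print(area, result.x)
```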

Theano


What Is Theano?

Theano is a computational-framework machine learning library in Python for computing with multidimensional arrays. Theano works similarly to TensorFlow, but it is not as efficient as TensorFlow, largely because of its inability to fit into production environments.

Moreover, Theano can also be used in distributed or parallel environments, much like TensorFlow.

Features Of Theano

  • Tight integration with NumPy – the ability to use NumPy arrays in Theano-compiled functions.
  • Transparent use of a GPU – Perform data-intensive computations much faster than on a CPU.
  • Efficient symbolic differentiation – Theano does your derivatives for functions with one or many inputs.
  • Speed and stability optimizations – Get the right answer for log(1+x) even when x is very tiny. This is just one of the examples to show the stability of Theano.
  • Dynamic C code generation – Evaluate expressions faster than ever before, thereby increasing efficiency by a lot.
  • Extensive unit-testing and self-verification – Detect and diagnose multiple types of errors and ambiguities in the model.

Where Is Theano Used?

The actual syntax of Theano expressions is symbolic, which can be off-putting to beginners used to normal software development. Specifically, an expression is defined in the abstract sense, compiled, and later actually used to make calculations.
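
The define-compile-run cycle described above looks like this in practice (a minimal sketch assuming Theano is installed):

```
import theano
import theano.tensor as T

# 1. Define a symbolic expression.
x = T.dscalar("x")
y = x ** 2

# 2. Compile it into a callable function (optionally with symbolic gradients).
f = theano.function([x], y)
grad_f = theano.function([x], T.grad(y, x))

# 3. Use it for actual calculations.
print(f(3.0), grad_f(3.0))   # 9.0 and 6.0
```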

It was specifically designed to handle the types of computation required for large neural network algorithms used in Deep Learning. It was one of the first libraries of its kind (development started in 2007) and is considered an industry standard for Deep Learning research and development.

Theano is being used in multiple neural network projects today, and the popularity of Theano is only growing with time.

Pandas


What Is Pandas?

Pandas is a machine learning library in Python that provides high-level data structures and a wide variety of tools for analysis. One of the great features of this library is the ability to translate complex operations on data into one or two commands. Pandas has many inbuilt methods for grouping, combining, and filtering data, as well as time-series functionality.

All of this comes with excellent performance.

Features Of Pandas

Pandas makes the entire process of manipulating data easier. Support for operations such as re-indexing, iteration, sorting, aggregation, concatenation, and visualization is among the feature highlights of Pandas.
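
A minimal sketch of the grouping and aggregation style this describes:

```
import pandas as pd

df = pd.DataFrame({
    "city": ["Oslo", "Oslo", "Bergen", "Bergen"],
    "temp": [2.0, 4.0, 6.0, 8.0],
})

# Filter, group, and aggregate in a couple of chained commands.
summary = df[df["temp"] > 2.0].groupby("city")["temp"].agg(["mean", "max"])
print(summary)
```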

Where Is Pandas Used?

Recent releases of the Pandas library include hundreds of new features, bug fixes, enhancements, and API changes. The improvements in Pandas lie in its ability to group and sort data, select the best-suited output for the applied method, and provide support for performing custom types of operations.

Data Analysis, among everything else, takes the highlight when it comes to using Pandas. But when used with other libraries and tools, Pandas ensures high functionality and a good amount of flexibility.

That's it, folks! I hope this article helped you kickstart your learning of the libraries available in Python.