Multiplication is an expensive operation. If your code has to perform multiplications thousands, or even hundreds of thousands, of times, it will be slow. This is especially true in a dynamically typed language like Python, if you rely on the vanilla language alone to perform the computation.

After some trial and error while trying to speed up my code, I stumbled upon certain methods available in numpy that more than doubled my execution speed. I won't go into the actual use case, but let me illustrate with an example. Both my use case and this example are only about multiplying multi-dimensional vectors efficiently.

Assume that you have two 100-dimensional vectors that you want to multiply component-wise. How does the execution time scale with the number of times you perform the computation? How does traditional Python compare against numpy's *dot* method? The code for this experiment is available here.
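A minimal sketch of such a benchmark is below. It is not the original experiment's code: the function names, vector contents, and repetition count are my own choices for illustration. It times a pure-Python component-wise multiply-and-sum against *numpy.dot* on two 100-dimensional vectors, using the standard-library `timeit` module.

```python
import timeit
import numpy as np

def python_dot(a, b):
    # Pure-Python version: multiply component-wise, then sum.
    return sum(x * y for x, y in zip(a, b))

# Two 100-dimensional vectors (contents chosen arbitrarily for the sketch).
n = 100
a = [float(i) for i in range(n)]
b = [float(n - i) for i in range(n)]
a_np = np.array(a)
b_np = np.array(b)

# Sanity check: both approaches compute the same value.
assert abs(python_dot(a, b) - float(np.dot(a_np, b_np))) < 1e-9

# Repeat the computation many times, as in the experiment described above.
reps = 10_000
t_py = timeit.timeit(lambda: python_dot(a, b), number=reps)
t_np = timeit.timeit(lambda: np.dot(a_np, b_np), number=reps)
print(f"pure Python: {t_py:.4f}s  numpy.dot: {t_np:.4f}s")
```

Note that the numpy arrays are built once, outside the timed loop; converting a Python list to an array on every iteration would eat into the speed-up.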

*Figure: Python vs numpy for multiplying vectors*

As you can see in the attached image, the speed-up from using *numpy.array* and *numpy.dot* is up to 22 times. The y-axis of the graph represents time taken in seconds, and the x-axis depicts (in thousands) the number of times the two vectors were multiplied component-wise. The red line represents traditional Python, and the green line represents *numpy.dot*. Of course, numpy is using C to achieve this, but kudos to the numpy developers for making life easier for those who only want to use Python (or do not want to reinvent the wheel, or do not want to make terrible mistakes in something that is otherwise taken for granted).

