Testing Different Fibonacci Generator Techniques in Python

Hey all! So, yesterday I posted a practice solution for finding the 200,000th number in the Fibonacci sequence as quickly and efficiently as possible. That original post can be found here. I asked if anybody could offer some tips for improvement, and was greeted by a very good, very thorough answer from @rrampage detailing the use of generators rather than the plain iteration I had used. His suggestion was as follows:

Another interesting approach using iteration is making the function into a generator which can be used as part of for loops, list comprehensions, etc

def gen_fib():
    a,b = 1,1
    yield a
    yield b
    while True:
        a,b = b,a+b
        yield b
g = gen_fib()
# Generate the first 200,000 Fibonacci numbers
fibs = [next(g) for _ in range(200000)]

Since Python does not handle deep recursion very well, this seemed like a good fit for situations where you need to lean on the CPU in a low-RAM environment, something generators are typically quite good at. But after testing this solution, which was simply an example and not meant for performance, and then custom-rigging a version that I thought would be ideally easy on memory, I got some interesting results.
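
To see why recursion is a non-starter here, consider the textbook recursive definition. Below is a minimal sketch of my own (not from the original discussion): the call depth grows with n, so it hits Python's recursion limit long before n = 200,000, never mind the exponential running time.

import sys

def fib_recursive(n):
    # Call depth grows linearly with n, and the double branching
    # also makes this exponential-time
    if n <= 2:
        return 1
    return fib_recursive(n - 1) + fib_recursive(n - 2)

print(sys.getrecursionlimit())   # 1000 by default on CPython
# fib_recursive(200000) would raise RecursionError long before finishing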

Here is the code I originally used that leverages storage in variables in the while-loop iteration:

def fibonacci(n):
    # Plain iteration: only the last two numbers are kept around
    iterator = 1
    first_fib_num = 0
    second_fib_num = 1
    added_to = second_fib_num  # covers the n <= 1 case
    while iterator < n:
        added_to = first_fib_num + second_fib_num
        first_fib_num = second_fib_num
        second_fib_num = added_to
        iterator += 1
    return added_to

print(fibonacci(200000))

I then ran this script under mprof for memory profiling over time.
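
(mprof ships with the memory_profiler package. The commands below are my assumption of the usual workflow, with fib_iterative.py as a hypothetical filename for the script above; the original post doesn't show the exact invocation.)

pip install memory_profiler
mprof run fib_iterative.py   # samples the process's memory while the script runs
mprof plot                   # renders the recorded usage as a graph over time

Below is the plot of its memory usage over the course of the run: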

Next, I ran the suggested generator version of the same solution which, again, was not really intended for performance. Its code is above, but I'll put it here as well:

def gen_fib():
    a,b = 1,1
    yield a
    yield b
    while True:
        a,b = b,a+b
        yield b
g = gen_fib()
# Generate the first 200,000 Fibonacci numbers
fibs = [next(g) for _ in range(200000)]

and below is the corresponding graph of usage over time:

WHAT?!? I was expecting something completely different! At the peak of this chart, right around 14.5 seconds, the generator function is using almost 2GB of memory! 2 GIGABYTES. As in more than 1000x more memory than the iterator function. AND IT TOOK LONGER TO DO IT by almost two seconds.
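
There is a back-of-the-envelope explanation for that number, though (my own math, not from the original post). The 200,000th Fibonacci number has about 41,800 decimal digits, which CPython stores as a single arbitrary-precision int of roughly 18 KB. The list comprehension keeps all 200,000 of these ever-growing integers alive at once, and since their sizes average out to about half the maximum, the list's contents alone come to somewhere around 2 GB. You can sanity-check the per-number size like this:

import sys

# Compute the 200,000th Fibonacci number iteratively
a, b = 1, 1
for _ in range(199999):
    a, b = b, a + b

print(len(str(a)))       # about 41,800 digits
print(sys.getsizeof(a))  # roughly 18-19 KB for this one int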

Please know, also, that I am not criticising the solution provided by @rrampage, because it was just an example of using a generator to accomplish this task. But a generator is supposed to rely less on memory. Don't believe me? Take a look at this sentence from an article on freecodecamp.org:

The main advantage of a generator over a list is that it takes much less memory.

Sure, maybe to store. But apparently not to use, which is kinda the whole point in my eyes really.
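
To be fair to that claim, the generator object itself really is tiny; what ate the memory in my test was materializing it into a list. Here is a quick sketch of my own illustrating the difference, redefining gen_fib from above so the snippet stands alone:

import sys

def gen_fib():
    a, b = 1, 1
    yield a
    yield b
    while True:
        a, b = b, a + b
        yield b

g = gen_fib()
print(sys.getsizeof(g))  # the generator object itself: a few hundred bytes at most

# Materializing stores all 200,000 ever-growing ints at once (the ~2 GB case):
#   fibs = [next(g) for _ in range(200000)]

# Consuming lazily keeps only a handful of ints alive at any moment:
for _ in range(200000):
    x = next(g)
print(x)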

My final solution was to write my own generator, based on one I found in the Python documentation, and to make it as efficient as possible for this purpose. The code is as follows:

def fibon(n):
    a = b = 1
    for _ in range(n):
        yield a
        a, b = b, a + b

# Consume the generator without ever building a list; only a, b, and x
# are alive at any moment, so memory stays flat.
for x in fibon(200000):
    pass
print(x)

And the results of that are represented as the graph below:

So, in conclusion, I used a language-specific feature that is pretty buzzy in Python because it is supposed to accomplish this kind of work better than an off-the-top-of-your-head solution. Building this generator, thinking it through from a memory and performance standpoint, writing it, and profiling it took me about 30 minutes. My initial solution using a while loop and some variables took me about 5 minutes to come up with. Luckily, the buzzy solution was almost as fast and almost as light on memory, meaning it would have been almost worth using if it hadn't taken 6x longer to implement.

So, the moral of the story is that Occam's Razor certainly applies in programming. As developers, we are charged with making things better in a timely and efficient fashion. Our job is to create solutions: not just to think about solutions, but to discover, quantify, invent, build, deploy, and support solutions to problems. So much of our field is either doing what has yet to be done, or taking something proven possible in an academic environment and making it usable by a non-technical majority. This ultimately means that we need to trust our instincts, trust our training, and do the best we can to go with the obvious solution unless that solution proves inadequate in practice.

Thank you so much to the folks who responded to my original post on this topic: @ben, @jess, @rrampage, and everybody else whose handles I can't recall right now. Without your discussion and comments, this kind of experiment would never happen.

And for those of you who think this post was about me trashing somebody else's solution, please know that it was not. My own generator solution was not really any better, despite my spending far more time on it than an example submission would get, and testing and re-testing it just to get it down to only a little bit worse than the 5-minute solution. My original generator solution used over 2GB of RAM. So there is no judgment from me. Generators have a lot of use cases where they shine. But this, despite my initial belief to the contrary, is just not one of them.
