Is there any real issue with setting the recursion limit very high?

I was previously getting errors where, if I used Python's multiprocessing and Beautiful Soup to parse an HTML page, I would sometimes get a maximum recursion depth exceeded error and sometimes an EOFError: Ran out of input error, depending on how high I set the recursion limit with sys.setrecursionlimit(). I have already asked a question about it on this thread.
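For context, here is a stripped-down sketch of the kind of setup I mean; the URL list, the parse_page function, the urllib download, and the html.parser choice are placeholders for illustration, not my actual script:

import sys
from multiprocessing import Pool
from urllib.request import urlopen

from bs4 import BeautifulSoup

URLS = ["https://example.com/a", "https://example.com/b"]  # placeholder URLs

def parse_page(url):
    # The recursion limit is per process, so with the "spawn" start method it
    # has to be raised inside each worker, not only in the parent.
    sys.setrecursionlimit(100_000)
    html = urlopen(url, timeout=30).read()
    soup = BeautifulSoup(html, "html.parser")
    # Return plain strings rather than the soup object itself; pickling a deep
    # parse tree to send it back to the parent process is also recursive.
    return [a.get("href") for a in soup.find_all("a")]

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        results = pool.map(parse_page, URLS)
        print(results)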

I have currently set the recursion limit to 100,000 instead of the default of 1,000, and my script seems to be quite stable. I should point out that I had previously had issues with the script and set the limit to 25,000, which worked fine for a few weeks; recently I started getting issues again, so I set the limit to 100,000 and all seems fine.

However, the top answer on this thread had the following to say about setting the maximum recursion depth to a high number:

So, one possibility is to just do sys.setrecursionlimit(25000). That will solve the problem for this exact page, but a slightly different page might need even more than that. (Plus, it’s usually not a great idea to set the recursion limit that high—not so much because of the wasted memory, but because it means actual infinite recursion takes 25x as long, and 25x as much wasted resources, to detect.)
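To make that concrete for myself, here is a toy sketch (not my real script) of how I read that trade-off: a genuinely runaway recursion is only caught once it has burned through the whole allowed depth, so a higher limit means more frames and more time before RecursionError is raised.

import sys
import time

def runaway(n=0):
    return runaway(n + 1)  # never terminates on its own

# The limits are kept modest here on purpose: very high values can overflow
# the C stack on some Python builds instead of raising RecursionError.
for limit in (1_000, 2_000, 4_000):
    sys.setrecursionlimit(limit)
    start = time.perf_counter()
    try:
        runaway()
    except RecursionError:
        elapsed = time.perf_counter() - start
    print(f"limit={limit}: RecursionError raised after ~{elapsed:.4f}s")

sys.setrecursionlimit(1_000)  # restore the default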

I want to understand whether this is a real concern when parsing a tree with Beautiful Soup, or whether "infinite" recursion is not something that typically comes up with Beautiful Soup trees.

