Should data feed into Universal Sentence Encoder be normalized?

  artificial-intelligence, nlp, python, tensorflow

I am currently working with TensorFlow’s Universal Sentence Encoder (USE) for my B.Sc. thesis, where I study extractive summarisation techniques.
In the vast majority of techniques for this task, the sentences are first normalized (lowercasing, stop word removal, lemmatisation), but I couldn’t find any hint as to whether sentences fed into the USE should first be normalized. Is that the case? Does it matter?
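For concreteness, here is a minimal sketch of the kind of normalization step I mean (lowercasing and stop word removal only; the tiny stop word list is illustrative, and a real pipeline would use NLTK or spaCy, including a proper lemmatiser):

```python
# Illustrative normalization, as commonly applied before extractive
# summarisation. The stop word list below is a toy stand-in, not a
# complete one.

STOP_WORDS = {"the", "a", "an", "is", "are", "of", "to", "in"}

def normalize(sentence: str) -> str:
    tokens = sentence.lower().split()
    tokens = [t.strip(".,;:!?") for t in tokens]        # strip punctuation
    tokens = [t for t in tokens if t not in STOP_WORDS]  # drop stop words
    return " ".join(tokens)

raw = "The cats are sitting in the garden."
print(normalize(raw))  # -> "cats sitting garden"
```

The question, in other words, is whether one should embed `raw` or `normalize(raw)` with the USE, and whether the resulting embeddings (and downstream summaries) differ meaningfully.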

Source: Python Questions