79452399

Date: 2025-02-19 18:27:03
Score: 1
Natty:
Report link

I don't know if this helps or not, but as far as I know, RNNs and LSTMs need inputs of a uniform shape within a batch to process and move forward. We use ragged tensors because they let us keep variable-length data as-is. Most of the time we use embeddings along with the RNN or LSTM. I don't know exactly where, but in one of these layers some implicit padding is happening (i.e., even though the shapes are different at the start, the RNN/LSTM needs same-shape input to process, so the tensor gets padded by itself, via some internal mechanism of sorts). What I am trying to say is: if you do not wish to use embeddings, you can try padding the sequences yourself; you wouldn't need a ragged tensor then and could simply use a normal (dense) tensor. Or you can use embeddings. I hope this was helpful to someone, and I really hope my explanation is correct; if it's wrong then please let me know and I will take my answer down. Thanks.
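As a minimal sketch of the "pad it yourself" route (assuming a TensorFlow/Keras setup; the toy sequences and layer sizes here are made up for illustration):

```python
import tensorflow as tf

# Variable-length integer sequences (e.g., tokenized sentences).
sequences = [[3, 7, 2], [5, 1], [9, 4, 6, 8]]

# Pad to a common length so a plain dense tensor can be used
# instead of a ragged one. 0 is reserved as the padding value.
padded = tf.keras.utils.pad_sequences(sequences, padding="post")
print(padded.shape)  # (3, 4)

# mask_zero=True makes the Embedding layer emit a mask so the LSTM
# skips the padded timesteps instead of treating them as real data.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=10, output_dim=8, mask_zero=True),
    tf.keras.layers.LSTM(16),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

out = model(padded)
print(out.shape)  # (3, 1)
```

If you pad manually like this, the masking is the important design choice: without `mask_zero=True` (or an explicit masking layer), the LSTM would process the padding zeros as if they were actual input steps.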

Reasons:
  • Blacklisted phrase (0.5): thanks
  • Blacklisted phrase (1): i am trying to
  • Whitelisted phrase (-1.5): you can use
  • Long answer (-0.5):
  • No code block (0.5):
  • Low reputation (1):
Posted by: Shubham khatter