Has anyone been able to get consistently good results using Llama 3.1 for multi-hop question generation?
I have been stuck on it for the past 5 days.
Every time I write a prompt and think I've got something stable, something weird happens and it just generates nonsense.
Have any of you faced this issue, or were you able to use the model for this specific problem? What other ways of generating such questions would you recommend, since this is the final block in my architecture?
Very basic example of what I want to achieve (they get a bit more complex, but this is just a few-shot example for everyone):
Input: Who won the NBA finals back in 2016, 2017 and 2018?
Output: ["Who won the NBA finals in 2016?", "Who won the NBA finals in 2017?", "Who won the NBA finals in 2018?"]
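
In case it helps, here's a minimal sketch of the kind of few-shot decomposition prompt plus JSON parsing I'd try for this. It assumes an OpenAI-compatible local server (e.g. llama.cpp server, vLLM, or Ollama) at localhost:8000; the base URL and model name are placeholders, not anything specific to my setup:

```python
import json
from openai import OpenAI

# Assumes a local OpenAI-compatible endpoint; adjust base_url/model for your stack.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

SYSTEM = (
    "You decompose multi-hop questions into single-hop sub-questions. "
    "Reply with a JSON array of strings and nothing else."
)

# One few-shot example keeps the output format anchored.
FEW_SHOT = [
    {"role": "user", "content": "Who won the NBA finals back in 2016, 2017 and 2018?"},
    {"role": "assistant", "content": json.dumps([
        "Who won the NBA finals in 2016?",
        "Who won the NBA finals in 2017?",
        "Who won the NBA finals in 2018?",
    ])},
]

def decompose(question: str) -> list[str]:
    resp = client.chat.completions.create(
        model="llama-3.1-8b-instruct",   # placeholder model name
        temperature=0.0,                 # low temperature reduces format drift
        messages=[{"role": "system", "content": SYSTEM},
                  *FEW_SHOT,
                  {"role": "user", "content": question}],
    )
    text = resp.choices[0].message.content.strip()
    try:
        subs = json.loads(text)
        return [s for s in subs if isinstance(s, str)]
    except json.JSONDecodeError:
        # Fall back to the original question if the model drifts off-format.
        return [question]

if __name__ == "__main__":
    print(decompose("Who directed the movies that won Best Picture in 2019 and 2020?"))
```

Asking for a bare JSON array (and validating it) has been more stable for me than free-form lists, but curious what has worked for others.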
submitted by /u/cedar_mountain_sea28