Suggested by: Omid Reza Abbasi, Johannes Scholz and Dominik Kowald (KnowCenter)
Keywords: Location Recommendation, LLMs, Fine-Tuning
Objective:
1. To investigate the application of LLM reasoning to enhance location-based recommender systems.
2. To explore the effectiveness of zero-shot chain-of-thought (CoT) prompting for generating relevant location-based recommendations (see the prompt sketch after this list).
3. To compare the performance of the proposed LLM-based approaches against traditional location recommendation methods.
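To make the second objective concrete, below is a minimal sketch of a zero-shot CoT prompt for next-POI recommendation. The helper name, the example POIs, and the exact trigger wording are illustrative assumptions, not a fixed design of this project.

```python
# Minimal sketch of a zero-shot chain-of-thought (CoT) prompt for location
# recommendation. Helper name, POIs, and wording are hypothetical placeholders.

def build_zero_shot_cot_prompt(visited_pois: list[str], city: str) -> str:
    """Assemble a prompt that asks the LLM to reason step by step
    before recommending the next point of interest (POI)."""
    history = ", ".join(visited_pois)
    return (
        f"A user in {city} has visited the following places in order: {history}. "
        "Which place should they visit next? "
        "Let's think step by step about the user's likely preferences, "
        "then give a single recommendation."
    )

print(build_zero_shot_cot_prompt(
    ["Schlossberg", "Kunsthaus Graz", "Murinsel"], "Graz"))
```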
Short Description: In this research, we will select one or more pre-trained Large Language Models (LLMs) as the base for fine-tuning, considering factors such as model size, architecture, availability, and prior performance on text-based tasks. Potential candidates include models from the GPT family (e.g., GPT-2, GPT-Neo), open-source alternatives such as Llama, or task-specific models. The chosen LLM(s) will be initialized with their pre-trained weights and fine-tuned on the preprocessed datasets using appropriate training and validation splits. For models with a fixed input sequence length, we will ensure that the constructed input prompts do not exceed this limit; where necessary, truncation or other techniques for handling long sequences will be explored (see the first sketch below). The fine-tuning process will train the selected LLM(s) on the prepared training data with a training objective that matches the recommendation task being addressed, e.g., next-POI prediction, rating prediction, or recommendation generation (see the second sketch below).
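As a first sketch, here is one way the length limit could be enforced, assuming the Hugging Face transformers tokenizer API and GPT-2 as an illustrative base model. Left-side truncation, which keeps the most recent check-ins, is an assumption of this sketch, not a requirement of the project.

```python
# Sketch: enforcing a fixed input sequence length with a Hugging Face tokenizer.
# GPT-2 is used only as an example; its context window is 1024 tokens.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.truncation_side = "left"  # drop the oldest history first

# Hypothetical over-long prompt built from a user's check-in history.
prompt = (
    "User check-in history: " + ", ".join(f"POI_{i}" for i in range(500)) +
    ". Next POI:"
)

# Prompts longer than the model's context window are truncated.
encoded = tokenizer(prompt, truncation=True, max_length=1024)
print(len(encoded["input_ids"]))  # <= 1024
```

As a second sketch, the fine-tuning step could frame next-POI prediction as causal language modelling, for instance with the transformers Trainer. The dataset entries, hyperparameters, and base model below are placeholder assumptions; the actual objective and data format will depend on the chosen recommendation task.

```python
# Sketch: fine-tuning a causal LM on hypothetical "history -> next POI" pairs.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Hypothetical training pairs: check-in history -> next POI.
examples = [
    {"text": "History: Schlossberg, Kunsthaus Graz. Next POI: Murinsel"},
    {"text": "History: Murinsel, Hauptplatz. Next POI: Schlossberg"},
]
dataset = Dataset.from_list(examples).map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=1024),
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="poi-llm", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=dataset,
    # mlm=False yields the standard next-token (causal LM) objective.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```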
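Rating prediction or free-form recommendation generation would follow the same pattern with different target texts (e.g., "Rating: 4" instead of a POI name), so the choice of objective mainly changes how the training pairs are serialized.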
Start: Anytime
Relevant Studies:
Lin, Xinyu, et al. "Data-efficient Fine-tuning for LLM-based Recommendation." Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval. 2024.
Bai, Zhuoxi, et al. "Finetuning Large Language Model for Personalized Ranking." arXiv preprint arXiv:2405.16127 (2024).