TikTok
Researcher, Large Language Models, TikTok Trust and Safety
Job Description
We are looking for researchers in the Large Language Model (LLM) domain to conduct research on single-modality and multi-modality LLM pretraining and applications, including in-context learning (ICL), supervised fine-tuning (SFT), and reinforcement-learning-based alignment. We look forward to applying LLMs in trust and safety business scenarios so that we can protect our users and creators with the best moderation quality and cost efficiency. There is no doubt that the LLM domain holds many unsolved problems which could have a huge impact on industry and academia. In the Trust & Safety team, we have real applications, resources, and patience for technology incubation.
Your main responsibilities will be:
– Lead the incubation of next-generation, high-capacity LLM solutions for the Trust & Safety business
– Identify research problems and dive deep to develop innovative solutions
– Work closely with cross-functional teams to plan and implement projects harnessing LLMs for diverse purposes and vertical domains
– Extend the insights and impact from industry to academia