TikTok will be prioritizing applicants who have a current right to work in Singapore and do not require visa sponsorship from TikTok.
We are seeking a highly skilled and experienced Risk Analyst to join our dynamic Trust & Safety team. The successful candidate will play a crucial role in identifying, assessing, and mitigating risks related to harmful content, user abuse, fraud, and other threats to platform integrity. This role requires a strong analytical mindset, a deep understanding of online safety best practices, regulatory landscapes, and emerging technologies, with a focus on data-driven insights and proactive risk management.
About the Team
The T&S Risk Management & Integrity (RM&I) function is focused on addressing enterprise, operational and emerging risks across Trust & Safety, encompassing a broad continuum of risks across policy, process, systems, platform, people, and regulatory domains.
RM&I works constantly to identify strategic priorities, opportunities, and pain points in the organization's risk management landscape, providing data-based insights and business-centric solutions. Together with cross-functional stakeholders and partners, we build greater efficacy and consistency in T&S's risk management capabilities, and promote global synergy whilst enabling localized decision-making.
Responsibilities:
- Process Risk Assessments and Controls
You will support internal teams in developing and maintaining a robust system of internal controls to mitigate risks related to harmful content, user abuse, fraud, and other threats to platform integrity. This primarily entails conducting regular risk assessments, identifying and evaluating potential threats and vulnerabilities using qualitative and quantitative techniques, and analyzing underlying data to identify trends and patterns in risk events (i.e., performing risk analytics). You are also expected to develop and help deliver risk-control assessment training programs for identified front-line employees (e.g., Risk Champions), alongside other risk-awareness-building activities.
- Regulatory Compliance and Optimization
You will be responsible for analyzing and interpreting relevant laws, regulations, and industry best practices related to online safety, data privacy, and content moderation. You will also monitor regulatory changes and recommend adaptations to risk management strategies, including identifying areas for optimization and efficiency improvement through innovative and/or automated tools and mechanisms that strengthen our risk monitoring, intake, and mitigation capabilities.
- Risk Frameworks and Taxonomies
You will contribute to the development, implementation, and maintenance of a comprehensive risk framework and taxonomy tailored to the Trust & Safety domain. This includes supporting the identification of key risk indicators (KRIs) and the development of reporting mechanisms, analyzing data using AI and data analytics to identify emerging threats and trends, and updating the risk register accordingly for risk prevention and detection purposes.
- AI Governance in Trust & Safety
You will support the development and implementation of a framework for the ethical and responsible use of AI in Trust & Safety. This includes assessing AI-related risks, contributing to the development of guidelines for data privacy and security, as well as monitoring the performance of AI-powered moderation tools to identify areas for improvement within the internal control environment.
- Collaboration and Partnerships
You will collaborate with cross-functional teams (e.g., legal, engineering, product, data science) and external partners to share best practices, identify emerging threats, and develop mitigation strategies. You may also be assigned to joint projects and research initiatives with various functional experts to support the development and improvement of internal controls and processes.
Qualifications
Minimum Qualifications:
- Bachelor's degree in a relevant field (e.g., computer science, information security, risk management). A relevant professional certification (e.g., CRISC, CISM, CISSP) is highly desirable.
- Minimum of 5 years of experience in risk management, ideally within a Trust & Safety or online platform environment. Experience with content moderation or online safety is a plus, though not mandatory.
- Strong understanding of risk assessment methodologies, including qualitative and quantitative techniques.
- Deep knowledge of relevant regulations and compliance requirements related to online safety and data privacy (e.g., GDPR, CCPA, AI Risk Governance).
- Experience in developing and implementing risk frameworks and taxonomies within a Trust & Safety context, with experience in AI governance and ethical considerations related to AI in content moderation being highly desirable.
- Excellent communication, presentation, and interpersonal skills. Ability to clearly communicate complex technical and risk issues to a non-technical audience.
- Proficiency in data extraction and analysis tools (e.g., SQL, Python, R).
Preferred Qualifications:
- Direct exposure to key industry regulations or legislation such as the Digital Services Act, Digital Markets Act, GDPR, AI Act, etc.
- Experience with data analytics, machine learning, conversational and/or generative AI technologies would be a plus.