Data Scientist - TITANSOFT (Titansoft Pte. Ltd., Singapore) | Meet.jobs
Salary
50k - 150k TWD Monthly
Required skills
Data Science, Machine Learning, Python, SQL, Google Analytics, Google AdWords
Job description
Work with product owners to identify opportunities for leveraging company data to drive business solutions.
Mine and analyze data from company databases to optimize and improve product development, marketing techniques, and business strategies.
Assess the effectiveness and accuracy of new data sources and data gathering techniques.
Develop data models and algorithms to apply to data sets.
Use predictive modeling to optimize customer experiences, ad targeting and other business outcomes.
Run A/B tests and evaluate model quality.
Coordinate with different functional teams to implement models.
Develop processes and tools to monitor and analyze model performance and data accuracy.
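As an illustration of the A/B-testing responsibility above, a minimal significance check on two variants might look like the following sketch (hypothetical conversion numbers, assuming SciPy is available; this is not the company's actual workflow):

```python
from scipy.stats import chi2_contingency

# Hypothetical A/B test results: conversions out of total visitors per variant.
control = {"conversions": 120, "visitors": 2400}
variant = {"conversions": 150, "visitors": 2380}

# 2x2 contingency table: [converted, did not convert] for each variant.
table = [
    [control["conversions"], control["visitors"] - control["conversions"]],
    [variant["conversions"], variant["visitors"] - variant["conversions"]],
]

chi2, p_value, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.3f}, p={p_value:.4f}")
```

A p-value below the chosen significance threshold (commonly 0.05) would suggest the difference in conversion rates is unlikely to be due to chance alone.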
Requirement
5+ years of experience manipulating data sets and building statistical models, or a Master's or PhD degree in Statistics, Mathematics, Computer Science, or another quantitative field.
Strong problem-solving skills.
Excellent written and verbal communication skills for coordinating across teams.
Willing to learn new technologies and techniques.
Knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural networks, etc.) and their advantages/drawbacks.
Knowledge of advanced statistical techniques and concepts (regression, properties of distributions, statistical tests and their proper usage, etc.).
Experienced in querying databases and using statistical computing languages: R, Python, SQL, etc.
Experienced with at least one third-party analytics tool: Google Analytics, Site Catalyst, Coremetrics, Google AdWords, Crimson Hexagon, Facebook Insights, etc.
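To illustrate the kind of machine-learning techniques named above (clustering, in this case), here is a minimal sketch on synthetic data, assuming scikit-learn and NumPy are installed; the data and parameters are purely illustrative:

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic data with two well-separated groups (illustrative only).
rng = np.random.default_rng(42)
X = np.vstack([
    rng.normal(loc=0.0, scale=1.0, size=(50, 2)),  # points around (0, 0)
    rng.normal(loc=5.0, scale=1.0, size=(50, 2)),  # points around (5, 5)
])

# Fit k-means and assign each point to one of two clusters.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels[:10])
```

In practice the choice of technique (k-means vs. decision trees vs. neural networks, etc.) depends on the data and the business question, which is why the posting asks for familiarity with the trade-offs of each.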
Plus
Coding knowledge and experience with several languages: C#, .NET, JavaScript, etc.
Experienced in creating and working with data architectures.
Experienced in visualizing/presenting data, e.g. with D3, ggplot2, etc.
Experienced in using distributed data/computing tools, e.g. MapReduce, Hadoop, Hive, Spark, etc.