library(RMariaDB)
library(DBI)
library(bcputility)
library(tm)
library(dplyr)
library(stringr)
library(tidytext)
#DB Set up
con <- dbConnect(RMariaDB::MariaDB(), username="root", password="TestCase123.", dbname ="basic", host="localhost")
dbListTables(con)
## [1] "Rectified_Glassdoor" "Rectified_Cleaned_DS_Jobs"
## [3] "data_science_job_posting" "Cleaned_DS_Jobs"
## [5] "Glassdoor USA Dataset" "Rectified_Total"
## [7] "Rectified_data_science_job_postings" "listings2019_2022"
## [9] "Rectified_Listings"
df <- dbReadTable(con, "Rectified_Total")
sub_length = 100
new_df <- data.frame( doc_id = 1:sub_length, text =df$Job_Description[1:sub_length] )
(ds <- DataframeSource(new_df))
## $encoding
## [1] ""
##
## $length
## [1] 100
##
## $position
## [1] 0
##
## $reader
## function (elem, language, id)
## {
## PlainTextDocument(elem$content[, "text"], id = elem$content[,
## "doc_id"], language = language)
## }
## <bytecode: 0x00000000274f6cf0>
## <environment: namespace:tm>
##
## $content
## doc_id
## 1 1
## 2 2
## 3 3
## 4 4
## 5 5
## 6 6
## 7 7
## 8 8
## 9 9
## 10 10
## 11 11
## 12 12
## 13 13
## 14 14
## 15 15
## 16 16
## 17 17
## 18 18
## 19 19
## 20 20
## 21 21
## 22 22
## 23 23
## 24 24
## 25 25
## 26 26
## 27 27
## 28 28
## 29 29
## 30 30
## 31 31
## 32 32
## 33 33
## 34 34
## 35 35
## 36 36
## 37 37
## 38 38
## 39 39
## 40 40
## 41 41
## 42 42
## 43 43
## 44 44
## 45 45
## 46 46
## 47 47
## 48 48
## 49 49
## 50 50
## 51 51
## 52 52
## 53 53
## 54 54
## 55 55
## 56 56
## 57 57
## 58 58
## 59 59
## 60 60
## 61 61
## 62 62
## 63 63
## 64 64
## 65 65
## 66 66
## 67 67
## 68 68
## 69 69
## 70 70
## 71 71
## 72 72
## 73 73
## 74 74
## 75 75
## 76 76
## 77 77
## 78 78
## 79 79
## 80 80
## 81 81
## 82 82
## 83 83
## 84 84
## 85 85
## 86 86
## 87 87
## 88 88
## 89 89
## 90 90
## 91 91
## 92 92
## 93 93
## 94 94
## 95 95
## 96 96
## 97 97
## 98 98
## 99 99
## 100 100
## text
## 1 Description\n\nThe Senior Data Scientist is responsible for defining, building, and improving statistical models to improve business processes and outcomes in one or more healthcare domains such as Clinical, Enrollment, Claims, and Finance. As part of the broader analytics team, Data Scientist will gather and analyze data to solve and address complex business problems and evaluate scenarios to make predictions on future outcomes and work with the business to communicate and support decision-making. This position requires strong analytical skills and experience in analytic methods including multivariate regressions, hierarchical linear models, regression trees, clustering methods and other complex statistical techniques.\n\nDuties & Responsibilities:\n\nâ\200¢ Develops advanced statistical models to predict, quantify or forecast various operational and performance metrics in multiple healthcare domains\nâ\200¢ Investigates, recommends, and initiates acquisition of new data resources from internal and external sources\nâ\200¢ Works with multiple teams to support data collection, integration, and retention requirements based on business needs\nâ\200¢ Identifies critical and emerging technologies that will support and extend quantitative analytic capabilities\nâ\200¢ Collaborates with business subject matter experts to select relevant sources of information\nâ\200¢ Develops expertise with multiple machine learning algorithms and data science techniques, such as exploratory data analysis and predictive modeling, graph theory, recommender systems, text analytics and validation\nâ\200¢ Develops expertise with Healthfirst datasets, data repositories, and data movement processes\nâ\200¢ Assists on projects/requests and may lead specific tasks within the project scope\nâ\200¢ Prepares and manipulates data for use in development of statistical models\nâ\200¢ Other duties as assigned\n\nMinimum Qualifications:\n\n-Bachelor's Degree\n\nPreferred Qualifications:\n\n- Masterâ\200\231s degree in Computer Science or Statistics\nFamiliarity with major cloud platforms such as AWS and Azure\nHealthcare Industry Experience\n\nMinimum Qualifications:\n\n-Bachelor's Degree\n\nPreferred Qualifications:\n\n- Masterâ\200\231s degree in Computer Science or Statistics\nFamiliarity with major cloud platforms such as AWS and Azure\nHealthcare Industry Experience\n\nWE ARE AN EQUAL OPPORTUNITY EMPLOYER. Applicants and employees are considered for positions and are evaluated without regard to mental or physical disability, race, color, religion, gender, national origin, age, genetic information, military or veteran status, sexual orientation, marital status or any other protected Federal, State/Province or Local status unrelated to the performance of the work involved.\n\nIf you have a disability under the Americans with Disability Act or a similar law, and want a reasonable accommodation to assist with your job search or application for employment, please contact us by sending an email to careers@Healthfirst.org or calling 212-519-1798 . In your email please include a description of the accommodation you are requesting and a description of the position for which you are applying. Only reasonable accommodation requests related to applying for a position within Healthfirst Management Services will be reviewed at the e-mail address and phone number supplied. Thank you for considering a career with Healthfirst Management Services.\nEEO Law Poster and Supplement\n\n]]>
## 2 Overview\n\n\nAnalysis Group is one of the largest international economics consulting firms, with more than 1,000 professionals across 14 offices in North America, Europe, and Asia. Since 1981, we have provided expertise in economics, finance, health care analytics, and strategy to top law firms, Fortune Global 500 companies, and government agencies worldwide. Our internal experts, together with our network of affiliated experts from academia, industry, and government, offer our clients exceptional breadth and depth of expertise.\n\nWe are currently seeking a Data Scientist to join our team. The ideal candidate should be passionate about working on cutting edge research and analytical services for Fortune 500 companies, global pharma/biotech firms and leaders in industries such as finance, energy and life sciences. The Data Scientist will be a contributing member to client engagements and have the opportunity to work with our network of world-class experts and thought leaders.\n\nJob Functions and Responsibilities\n\nThe candidate Data Scientist will help develop, maintain and teach new tools and methodologies related to data science and high performance computing. This position will also help Analysis Group in maintaining our leadership position in terms of advancing methodology and data analytics. The Data Scientist will be responsible for staying abreast of new developments in technology relating to data science, to share more broadly with Analysis Group.\n\nKey responsibilities for this position will include:\nWorking with project teams to address data science/computing challenges\nIdentifying opportunities for technology to enhance service offerings\nActing as a resource and participating in client engagements and research as part of the project team\nMaintaining up-to-date knowledge of computing tools, providing technical training and helping to grow the in-house knowledge base, specifically in a Linux environment\nPresenting research at selected conferences\nExamples of activities for the Data Scientist will include:\nDeveloping data engineering and machine learning production systems for full stack data science projects\nUsing natural language processing methodologies to work with EMR data, social media data and other unstructured data\nOptimizing procedures for managing and accessing large databases (e.g., insurance claims, electronic health records, financial transactions)\nCreating interactive analytics portals and data visualizations (e.g., using R/Shiny, Python/Flask, D3)\nBuilding and maintaining high performance computing (HPC) tools on grid and cloud computing environments\nDeveloping and reviewing software and packages in R, Python and other Object Oriented Languages\nEstablishing optimized procedures for repetitive or computationally intensive tasks (C, C++, Cuda-C)\nQualifications\nStrong credentials and experience in database management and data visualization\nSignificant experience working within a Linux environment required\nBackground in Statistics/Econometrics or Biostatistics\nIdeally PhD in Computer Science, Mathematics, Statistics, Economics or other relevant scientific degree with relevant experience. Other candidates with at least one year of experience in the field may also be considered\nExcellent written and verbal communication skills\nProject experience with R and/or Python\nFamiliar with online/cloud computing/storage (e.g., AWS)\nDemonstrated experience working on project teams and collaborating with others\nSCIENTIFIQUE DES DONNÉES\n\n*Lâ\200\231utilisation du genre masculin sert uniquement à alléger le texte et est utilisé ici en tant que genre neutre\n\nSurvol\n\nGroupe dâ\200\231analyse ltée est lâ\200\231une des plus grandes firmes de services-conseils en économie, comptant plus de 950 professionnels répartis dans 14 bureaux en Amérique du Nord, en Europe et en Asie. Depuis 1981, nous offrons notre expertise en matière de stratégie, dâ\200\231économie, de finance et dâ\200\231analyse dans le domaine des soins de santé aux grands cabinets dâ\200\231avocats, aux sociétés Fortune Global 500 et aux agences gouvernementales du monde entier. Nos professionnels en poste conjugués à notre réseau de spécialistes affiliés issus dâ\200\231universités, dâ\200\231industries spécifiques et dâ\200\231organismes gouvernementaux procurent à notre clientèle un savoir-faire dâ\200\231une portée et dâ\200\231une profondeur exceptionnelles.\n\nNous sommes présentement à la recherche d'un Scientifique des données (« Data Scientist ») pour se joindre à notre équipe. Le candidat idéal devrait être passionné par la recherche de pointe et les services analytiques pour les entreprises Fortune 500, les entreprises pharmaceutiques et biotechnologiques mondiales et les chefs de file dans des secteurs de la finance, l'énergie et les sciences de la vie. Le Scientifique des données sera un membre contributeur aux mandats des clients et aura l'occasion de travailler avec notre réseau d'experts et de leaders d'opinion de classe mondiale.\n\nDescription du poste et des responsabilités\n\nLe scientifique des données aidera à développer, maintenir et enseigner de nouveaux outils et méthodologies liés à la science des données (« Data Science ») et au HPC. Ce poste aidera également le Groupe d'analyse à maintenir sa position de chef de file en ce qui a trait à l'avancement de la méthodologie et de l'analyse des données. Le scientifique des données sera chargé de se tenir au courant des nouveaux développements technologiques liés à la science des données, afin de les partager plus largement avec le Groupe d'analyse.\n\nLes principales responsabilités de ce poste comprendront:\n\n- Collaborer avec les consultants pour relever les défis de la science des données et de sciences informatiques\n\n- Agir à titre de ressource et participer aux mandats et à la recherche en tant que membre de l'équipe de projet\n\n- Maintenir à jour les connaissances sur les outils informatiques, fournir une formation technique et aider à développer la base de connaissances interne, notamment dans un environnement Linux\n\n- Présenter la recherche à des conférences choisies\n\nExemples de tâches du scientifique des données :\n\n- Développement de systèmes de production en ingénierie des données ainsi quâ\200\231en apprentissage machine pour des projets de science des données full stack\n\n- Utiliser des méthodologies NLP pour travailler avec les données médicales électroniques, les données des médias sociaux et d'autres données non structures\n\n- Optimiser les procédures de gestion et d'accès aux grandes bases de données (ex. réclamations d'assurance, dossiers de santé électroniques, transactions financières)\n\n- Création de portails d'analyse interactifs et de visualisations de données (par exemple, en utilisant R/Shiny, Python/Flask, D3)\n\n- Construire et maintenir des outils de calcul de haute performance (HPC).\n\n- Développement et révision de codes en R, Python et autres langages\n\n- Mise en place de procédures optimisées pour les tâches répétitives ou intensives en calcul (C, C++, Cuda-C)\n\nQualifications requises\n\n- Solides références et expérience dans la gestion de bases de données et de la visualisation de données\n\n- Expérience de travail significative dans un environnement Linux requise\n\n- Expérience antérieure en statistique/économétrie ou bio-statistique\n\n- Idéalement, être titulaire d'un doctorat en sciences informatiques, en mathématiques, en statistique, en économie ou d'un autre diplôme scientifique pertinent et posséder une expérience pertinente. Les candidats ayant au moins un an d'expérience dans le domaine peuvent également être considérés.\n\n- Excellentes aptitudes de communication écrite et verbale\n\n- Expérience de projet avec R et/ou Python\n\n- Familiarité avec l'informatique en ligne/info nuagique et le stockage (AWS)\n\n- Expérience de travail démontrée au sein d'équipes de projet et de collaboration avec d'autres personnes\n\nÂ\nEqual Opportunity Employer/Protected Veterans/Individuals with Disabilities.\nPlease view Equal Employment Opportunity Posters provided by OFCCP here.\nThe contractor will not discharge or in any other manner discriminate against employees or applicants because they have inquired about, discussed, or disclosed their own pay or the pay of another employee or applicant. However, employees who have access to the compensation information of other employees or applicants as a part of their essential job functions cannot disclose the pay of other employees or applicants to individuals who do not otherwise have access to compensation information, unless the disclosure is (a) in response to a formal complaint or charge, (b) in furtherance of an investigation, proceeding, hearing, or action, including an investigation conducted by the employer, or (c) consistent with the contractor's legal duty to furnish information. 41 CFR 60-1.35(c)
## 3 Data Scientist\nAffinity Solutions / Marketing Cloud seeks smart, curious, technically savvy candidates to join our cutting-edge data science team. We hire the best and brightest and give them the opportunity to work on industry-leading technologies.\nThe data sciences team at AFS/Marketing Cloud build models, machine learning algorithms that power all our ad-tech/mar-tech products at scale, develop methodology and tools to precisely and effectively measure market campaign effects, and research in-house and public data sources for consumer spend behavior insights. In this role, you'll have the opportunity to come up with new ideas and solutions that will lead to improvement of our ability to target the right audience, derive insights and provide better measurement methodology for marketing campaigns. You'll access our core data asset and machine learning infrastructure to power your ideas.\nDuties and Responsibilities\n· Support all clients model building needs, including maintaining and improving current modeling/scoring methodology and processes,\n· Provide innovative solutions to customized modeling/scoring/targeting with appropriate ML/statistical tools,\n· Provide analytical/statistical support such as marketing test design, projection, campaign measurement, market insights to clients and stakeholders.\n· Mine large consumer datasets in the cloud environment to support ad hoc business and statistical analysis,\n· Develop and Improve automation capabilities to enable customized delivery of the analytical products to clients,\n· Communicate the methodologies and the results to the management, clients and none technical stakeholders.\nBasic Qualifications\n· Advanced degree in Statistics/Mathematics/Computer Science/Economics or other fields that requires advanced training in data analytics.\n· Being able to apply basic statistical/ML concepts and reasoning to address and solve business problems such as targeting, test design, KPI projection and performance measurement.\n· Entrepreneurial, highly self-motivated, collaborative, keen attention to detail, willingness and capable learn quickly, and ability to effectively prioritize and execute tasks in a high pressure environment.\n· Being flexible to accept different task assignments and able to work on a tight time schedule.\n· Excellent command of one or more programming languages; preferably Python, SAS or R\n· Familiar with one of the database technologies such as PostgreSQL, MySQL, can write basic SQL queries\n· Great communication skills (verbal, written and presentation)\nPreferred Qualifications\n· Experience or exposure to large consumer and/or demographic data sets.\n· Familiarity with data manipulation and cleaning routines and techniques.
## 4 Intuit is seeking a Staff Data Scientist to cover Intuits Consumer Group (TurboTax, Mint, Turbo). We have an exciting opportunity to help shape how we use data to generate hypotheses, surface insights, and build models in order to personalize customer experiences and provide awesome outcomes for the business and our customers. This role will partner closely with data engineering, data analytics, data science, marketing managers, and product management.
## 5 Responsibilities\n\n\nThe Medical Laboratory Scientist is responsible for obtaining or receiving specimens, performing clinical laboratory tests by standardized procedures, understanding method principles, performing quality control, performing preventive maintenance, and interpreting results accurately, acceptably and within critical limits. The MLS works independently, organizing work to meet established deadlines and validates all data obtained.\n\nQualifications\nBachelor's Degree: Baccalaureate Degree in Medical Technology, Clinical Laboratory Science, Medical Laboratory Science or related science (Required)\nMinimum of 1 year of experience\nCertification and Licensure Requirements. MLS (ASCP) (Preferred)\n3p-1130p\n2-Evening
## 6 Secure our Nation, Ignite your Future\n\nJoin the top Information Technology and Analytic professionals in the industry to make invaluable contributions to our national security on a daily basis. In this innovative, self-contained, Big Data environment, the ManTech team is responsible for everything from infrastructure, to application development, to data science, to advanced analytics and beyond. The team is diverse, the questions are thought-provoking, and the opportunities for growth and advancement are numerous\n\nThe successful candidate will possess a diverse range of data-focused skills and experience, both technical and analytical. They will have a strong desire and capability for problem solving, data analysis and troubleshooting, analytical thinking, and experimentation.\n\nDuties, Tasks & Responsibilities\nWorking with large, complex, and disparate data sets\nDesigning and implementing innovative ways to analyze and exploit the Sponsors data holdings\nResearching and reporting on a wide variety of Sponsor inquiries\nRaising proactive inquiries to the Sponsor based on observations and proposed data analysis/exploitation\nSolving difficult, non-routine problems by applying advanced analytical methodologies, and improving analytic methodologies\nDeveloping custom searches\nCommunicating and coordinating with internal and external partners as needed\nRequired Experience, Skills, & Technologies\n\nThorough knowledge of appropriate analytic tools and methodologies in one or more of the following:\nApplied mathematics (e.g. probability and statistics, formal modeling, computational social sciences)\nComputer programming (e.g. programming languages, math/statistics packages, computer science, machine learning, scientific computing)\nAbility to code or script in one or more general programming language\nExperience with and theoretical understanding of algorithms for classification, regression, clustering, and anomaly detection\nKnowledge of relational databases, including SQL and large-scale distributed systems (e.g. Hadoop)\nExpertise with statistical data analysis (e.g. linear models, multivariate analysis, stochastic models, sampling methods)\nDemonstrated effectiveness in collecting information and accurately representing/visualizing it to non-technical third parties\nTS/SCI with Polygraph\nBachelor of Science or equivalent and 12-15 years related experience, but will consider all levels of experience.\nDesired Experience, Skills & Technologies\nPrevious investigative experience using a combination of technical and analytic skills\n#LI-DU1\n\nManTech International Corporation, as well as its subsidiaries proactively fulfills its role as an equal opportunity employer. We do not discriminate against any employee or applicant for employment because of race, color, sex, religion, age, sexual orientation, gender identity and expression, national origin, marital status, physical or mental disability, status as a Disabled Veteran, Recently Separated Veteran, Active Duty Wartime or Campaign Badge Veteran, Armed Forces Services Medal, or any other characteristic protected by law.\n\nIf you require a reasonable accommodation to apply for a position with ManTech through its online applicant system, please contact ManTech's Corporate EEO Department at (703) 218-6000. ManTech is an affirmative action/equal opportunity employer - minorities, females, disabled and protected veterans are urged to apply. ManTech's utilization of any external recruitment or job placement agency is predicated upon its full compliance with our equal opportunity/affirmative action policies. ManTech does not accept resumes from unsolicited recruiting firms. We pay no fees for unsolicited services.\n\nIf you are a qualified individual with a disability or a disabled veteran, you have the right to request an accommodation if you are unable or limited in your ability to use or access http://www.mantech.com/careers/Pages/careers.aspx as a result of your disability. To request an accommodation please click careers@mantech.com and provide your name and contact information.
## 7 Introduction\n\nHave you always wanted to run experiments on a global fleet of consumer robots? iRobot is looking for a data scientist to help manage a/b tests related to robot software, track performance KPIs, and share results with software teams. A strong candidate has experience in experimental design, hypothesis testing, and statistics, in addition to strong SQL and Python skills.\n\nWhat You Will Do:\nGuide software and product management teams in designing rigorous experiments to test the impact of new software versions.\nDashboarding to monitor results live from the fleet.\nTest hypotheses about software performance using appropriate statistical methods.\nDevelop metrics to track software performance.\nUphold scientific best practices in experimental design and cohort selection.\nContribute to a management system for cohort selection.\nTo Be Successful You Will Have:\nAt least 1 year of relevant job, internship, or co-op experience. M.S. in computer science, statistics, mathematics, physics, chemistry, biology, or social sciences is a plus.\nVery detail-oriented and eager to get your hands dirty in the data.\nProficient in Python, SQL, and data visualization, with experience using pandas, numpy, sklearn, scipy, matplotlib, or plotly.\nExperienced in designing experiments, making and testing hypotheses, and identifying the appropriate metrics to answer a question.\nSavvy in statistical methods and able to select the right test for each situation.\nCan balance business considerations with the need for scientific rigor.\nSelf-motivated and comfortable working in a fast-paced, delivery-focused environment.\nSkilled communicator who can make the complicated seem simple.\nPassionate about using data to drive decision-making!
## 8 Overview\n\n\nAnalysis Group is one of the largest international economics consulting firms, with more than 1,000 professionals across 14 offices in North America, Europe, and Asia. Since 1981, we have provided expertise in economics, finance, health care analytics, and strategy to top law firms, Fortune Global 500 companies, and government agencies worldwide. Our internal experts, together with our network of affiliated experts from academia, industry, and government, offer our clients exceptional breadth and depth of expertise.\n\nWe are currently seeking a Data Scientist to join our team. The ideal candidate should be passionate about working on cutting edge research and analytical services for Fortune 500 companies, global pharma/biotech firms and leaders in industries such as finance, energy and life sciences. The Data Scientist will be a contributing member to client engagements and have the opportunity to work with our network of world-class experts and thought leaders.\n\nJob Functions and Responsibilities\n\nThe candidate Data Scientist will help develop, maintain and teach new tools and methodologies related to data science and high performance computing. This position will also help Analysis Group in maintaining our leadership position in terms of advancing methodology and data analytics. The Data Scientist will be responsible for staying abreast of new developments in technology relating to data science, to share more broadly with Analysis Group.\n\nKey responsibilities for this position will include:\nWorking with project teams to address data science/computing challenges\nIdentifying opportunities for technology to enhance service offerings\nActing as a resource and participating in client engagements and research as part of the project team\nMaintaining up-to-date knowledge of computing tools, providing technical training and helping to grow the in-house knowledge base, specifically in a Linux environment\nPresenting research at selected conferences\nExamples of activities for the Data Scientist will include:\nDeveloping data engineering and machine learning production systems for full stack data science projects\nUsing natural language processing methodologies to work with EMR data, social media data and other unstructured data\nOptimizing procedures for managing and accessing large databases (e.g., insurance claims, electronic health records, financial transactions)\nCreating interactive analytics portals and data visualizations (e.g., using R/Shiny, Python/Flask, D3)\nBuilding and maintaining high performance computing (HPC) tools on grid and cloud computing environments\nDeveloping and reviewing software and packages in R, Python and other Object Oriented Languages\nEstablishing optimized procedures for repetitive or computationally intensive tasks (C, C++, Cuda-C)\nQualifications\nStrong credentials and experience in database management and data visualization\nSignificant experience working within a Linux environment required\nBackground in Statistics/Econometrics or Biostatistics\nIdeally PhD in Computer Science, Mathematics, Statistics, Economics or other relevant scientific degree with relevant experience. Other candidates with at least one year of experience in the field may also be considered\nExcellent written and verbal communication skills\nProject experience with R and/or Python\nFamiliar with online/cloud computing/storage (e.g., AWS)\nDemonstrated experience working on project teams and collaborating with others\nSCIENTIFIQUE DES DONNÉES\n\n*Lâ\200\231utilisation du genre masculin sert uniquement à alléger le texte et est utilisé ici en tant que genre neutre\n\nSurvol\n\nGroupe dâ\200\231analyse ltée est lâ\200\231une des plus grandes firmes de services-conseils en économie, comptant plus de 950 professionnels répartis dans 14 bureaux en Amérique du Nord, en Europe et en Asie. Depuis 1981, nous offrons notre expertise en matière de stratégie, dâ\200\231économie, de finance et dâ\200\231analyse dans le domaine des soins de santé aux grands cabinets dâ\200\231avocats, aux sociétés Fortune Global 500 et aux agences gouvernementales du monde entier. Nos professionnels en poste conjugués à notre réseau de spécialistes affiliés issus dâ\200\231universités, dâ\200\231industries spécifiques et dâ\200\231organismes gouvernementaux procurent à notre clientèle un savoir-faire dâ\200\231une portée et dâ\200\231une profondeur exceptionnelles.\n\nNous sommes présentement à la recherche d'un Scientifique des données (« Data Scientist ») pour se joindre à notre équipe. Le candidat idéal devrait être passionné par la recherche de pointe et les services analytiques pour les entreprises Fortune 500, les entreprises pharmaceutiques et biotechnologiques mondiales et les chefs de file dans des secteurs de la finance, l'énergie et les sciences de la vie. Le Scientifique des données sera un membre contributeur aux mandats des clients et aura l'occasion de travailler avec notre réseau d'experts et de leaders d'opinion de classe mondiale.\n\nDescription du poste et des responsabilités\n\nLe scientifique des données aidera à développer, maintenir et enseigner de nouveaux outils et méthodologies liés à la science des données (« Data Science ») et au HPC. Ce poste aidera également le Groupe d'analyse à maintenir sa position de chef de file en ce qui a trait à l'avancement de la méthodologie et de l'analyse des données. Le scientifique des données sera chargé de se tenir au courant des nouveaux développements technologiques liés à la science des données, afin de les partager plus largement avec le Groupe d'analyse.\n\nLes principales responsabilités de ce poste comprendront:\n\n- Collaborer avec les consultants pour relever les défis de la science des données et de sciences informatiques\n\n- Agir à titre de ressource et participer aux mandats et à la recherche en tant que membre de l'équipe de projet\n\n- Maintenir à jour les connaissances sur les outils informatiques, fournir une formation technique et aider à développer la base de connaissances interne, notamment dans un environnement Linux\n\n- Présenter la recherche à des conférences choisies\n\nExemples de tâches du scientifique des données :\n\n- Développement de systèmes de production en ingénierie des données ainsi quâ\200\231en apprentissage machine pour des projets de science des données full stack\n\n- Utiliser des méthodologies NLP pour travailler avec les données médicales électroniques, les données des médias sociaux et d'autres données non structures\n\n- Optimiser les procédures de gestion et d'accès aux grandes bases de données (ex. réclamations d'assurance, dossiers de santé électroniques, transactions financières)\n\n- Création de portails d'analyse interactifs et de visualisations de données (par exemple, en utilisant R/Shiny, Python/Flask, D3)\n\n- Construire et maintenir des outils de calcul de haute performance (HPC).\n\n- Développement et révision de codes en R, Python et autres langages\n\n- Mise en place de procédures optimisées pour les tâches répétitives ou intensives en calcul (C, C++, Cuda-C)\n\nQualifications requises\n\n- Solides références et expérience dans la gestion de bases de données et de la visualisation de données\n\n- Expérience de travail significative dans un environnement Linux requise\n\n- Expérience antérieure en statistique/économétrie ou bio-statistique\n\n- Idéalement, être titulaire d'un doctorat en sciences informatiques, en mathématiques, en statistique, en économie ou d'un autre diplôme scientifique pertinent et posséder une expérience pertinente. Les candidats ayant au moins un an d'expérience dans le domaine peuvent également être considérés.\n\n- Excellentes aptitudes de communication écrite et verbale\n\n- Expérience de projet avec R et/ou Python\n\n- Familiarité avec l'informatique en ligne/info nuagique et le stockage (AWS)\n\n- Expérience de travail démontrée au sein d'équipes de projet et de collaboration avec d'autres personnes\n\nÂ\nEqual Opportunity Employer/Protected Veterans/Individuals with Disabilities.\nPlease view Equal Employment Opportunity Posters provided by OFCCP here.\nThe contractor will not discharge or in any other manner discriminate against employees or applicants because they have inquired about, discussed, or disclosed their own pay or the pay of another employee or applicant. However, employees who have access to the compensation information of other employees or applicants as a part of their essential job functions cannot disclose the pay of other employees or applicants to individuals who do not otherwise have access to compensation information, unless the disclosure is (a) in response to a formal complaint or charge, (b) in furtherance of an investigation, proceeding, hearing, or action, including an investigation conducted by the employer, or (c) consistent with the contractor's legal duty to furnish information. 41 CFR 60-1.35(c)
## 9 IZEA was built to connect the worldâ\200\231s top brands with influential content creators and publishers to enable influencer marketing and content production at scale. With over 500,000 Creators reaching over 3 billion fans and followers around the globe, IZEA is unmatched in its industry experience, network diversity, and technology ecosystem. A career at IZEA offers countless ways to make an impact in a fast-growing organization!\n\nIZEA is looking for a Data Engineer to join our Core Technology group.\n\nThis team member may be located in the Orlando area or remote.\n\nYou will use the latest tools and technology to build and support IZEA's industry-leading data initiatives. Whether contributing to our data pipelines that ingest millions of records per hour, distilling vast amounts of data into meaningful insights, or operating the services that surface those insights, you will help define what the future of Influencer and Content Marketing looks like.\n\nYou will have direct access to end-users and stakeholders, and you are encouraged to build and leverage these relationships in your work. You will write and test your code, and work with our QA team to get it deployed to production. With the help of a homegrown, bot-driven CI/CD pipeline, your code will be delivered to users daily. This cross-functional team leverages Amazon Web Services for everything from ECS for containerized virtualization and hosting, to S3 for durable object storage, RDS and DynamoDB for persistence, EMR for the batch processing of billions of records, and Lambda for distributed workloads and stream consumption.\n\nAside from the day to day, we can offer you incredible benefits including an annual continuing education budget, a trust-focused development process, a flexible and collaborative work environment where balance matters, stock ownership, and an annual company retreat.\n\nThe team strives to be ego-free and motivated only by building amazing software for our users. We seek to understand the "why" behind the "what". We regularly break out into small teams to tackle problems, learn new technologies, or just share what we know with others. We test our code and invest in the health of our systems. We push each other, learn from each other, and strive to continually grow.\n\nPrimary Responsibilities\n\nYou will...\nWork with stakeholders to define the solutions to development problems and business requirements\nDevelop and maintain the features and capabilities of our data ingestion pipelines\nExtract actionable and impactful insights from vast amounts of data\nDevelop and maintain the services that surface those insights and make them available for consumption in a performant manner\nCreate unit and integration tests for your code\nReproduce and fix bugs reported by internal and external users\nSet goals and communicate often about your progress toward them\nContribute to the ongoing improvement of the engineering organization and our software\n\nWhat Weâ\200\231re Looking For\n\nA problem solver at heart\nMuch of our work revolves around problems that have no existing off-the-shelf solution or consensus on best practices. You'll often need to break down large problems into smaller more manageable tasks and utilize critical thinking to come up with novel ideas.\n\n3+ years of engineering experience\nThings move quickly in the data group. Youâ\200\231ll need to be comfortable and familiar with delivering highly scalable cloud based applications.\n\nProfessional experience with frameworks like Hadoop, Spark, or similar\nThis will be a significant component of your responsibilities. You will be working with large data sets in a distributed environment on a daily basis. Familiarity w/ the Hadoop/Spark ecosystem is a must.\n\nProfessional experience with Python data tools\nOur data team uses PySpark and Jupyter notebooks extensively. Familiarity with these technologies will also serve you well.\n\nDirect experience with relational and NoSQL databases technologies\nWe use the best tool for the job around here. When it comes to storing and accessing data, we recognize that the technology decisions we make directly impact our ability to provide a performant customer experience, and our own costs.\n\nExperience designing and building JSON based RESTful APIs\nBecause we are building our application with a front end framework, we carefully design and document the APIs to power it. To help us, we follow the JSON API spec, but any experience in building a RESTful API will be useful. Remember, the API is your contract with the consumers of your data!\n\nUnderstanding of monolithic and micro-service based architectures\nIZEAx still has some legacy monolithic characteristics. We move more and more of our technology to a distributed set of services, there are new challenges to overcome. Understanding the differences between these two models will help you take those challenges head on.\n\nBasic Linux skills\nIn order to develop our data pipelines and services, you need to run it on your laptop. This means opening up some terminal windows, running some commands, and keeping the log output open. Additionally, some of our technology stack is better accessed through CLIs. Examples include the Ember CLI, the Rails CLI/console, Docker commands, Gradle, and our own CLIs. We'll walk you through it, but you should be comfy in a terminal.\n\nAbility to multitask and prioritize multiple incoming requests\nIZEA's Engineering team strives to provide a great experience and great service to our users. In order to do that, you may need to context switch into a support issue or drop what you are doing to start work on something else. This is part of what Agile means to us.\n\nExcellent verbal and written communications skills\nRegular and timely communication is the key to a trust based development process. You should be able to simply and concisely ask for feedback and direction in terms that your audience understands, and relay requested information in a timely fashion to your leaders. You should prioritize documentation of processes and code.\n\nDemonstrated experience with the following will be highly valued\n\nData Science Background\nWhile not required, a fundamental understanding of statistics and modeling would be a great asset. Practical experience with Machine Learning and/or complex data pipelines would also be welcome.\n\nFront End development experience\nFrom time to time you may need to build visualization, or (lite) user facing experiences. Familiarity with modern web frameworks like Angular, React, Ember, or Vue would be helpful. Extra bonus points if you have familiarity w/ Javascript based visualization libraries like d3.js or Highcharts.\n\nAmazon Web Services, or other cloud providers\nIZEA's software is hosted on AWS, and you will need to acquire some familiarity with it. Previous experience in using a cloud provider, even if just for developer tooling, shows that you understand some of the nuances involved in working in the cloud. Familiarity with Amazonâ\200\231s EMR would also be an asset.\n\nContinuous Integration & Deployment\nIZEA needs to get features and fixes out to customers as soon as we possibly can with as much confidence as possible. To facilitate this, we have developed a CI/CD pipeline (using 3rd party services). An understanding of what CI/CD is will help you understand how this pipeline works and how to make it even better.\n\nGitHub\nAll of IZEA's code is source controlled on Github. We leverage Github Pull Requests for code reviews, Github integrations manage parts of our CI/CD pipeline, and Github releases define the code tags that ultimately get deployed. Much of our process documentation exists on Github pages. Familiarity with navigating Github's features will help you ramp up in our SDLC faster.\n\nJIRA\nIZEA uses JIRA to manage projects and report on progress to stakeholders inside and outside the company. While we strive to automate as much of JIRA as possible with bots, webhooks and reports, understanding how JIRA issues, links, attachments, and workflows work will help you understand our SDLC faster.\n\nAbout IZEA:\n\nWe are IZEA: The Creator Marketplace. Our cloud-based technologies connect Brands and Publishers with content Creators who blog, tweet, pin, and post on their behalf.\n\nOur driving belief is that the only way to thrive in our rapidly changing world is to change ahead of it. IZEA is in a constant state of evolution and reinvention. While we may have invented the industry, we still operate like an entrepreneurial, scrappy start-up. Your time here will be exciting, educational, and at times, a bit crazy.\n\nWith IZEA, you have the opportunity to join a non-traditional corporate culture, where creativity and productivity are valued over a suit and tie. We call it "The IZEA Way."\n\n\nWhy would you want to work here?\n\nOur developers use the latest tools and technology to build the applications and services that make up the IZEA Exchange platform. We write in whatever language we need to get the job done including Ruby, Java, Python, PHP, Swift, and Javascript. We leverage the latest frameworks, such as Rails, Symphony2, and Drop Wizard for iterating quickly on our technology.\n\nOur code is deployed and operated by the people who write it, with the help of Amazon Web Services. We leverage everything from EC2 for virtualization and hosting, to Amazon EMR and Machine Learning for advanced analytics.\n\nAside from the day to day, we offer incredible benefits including an annual continuing education budget, a flexible trust-focused development process, and an open collaborative work environment.\n\nOur team is ego-free and motivated to build great software. We regularly break out into small teams to tackle problems, learn new technologies, or just share what we know with others. We test our code and invest in the health of our systems. We push each other, learn from each other, and strive to continually grow as a team.\n\nCalifornia residents, please follow this link to view the types of information we may gather from California residents who are applicants, employees, or contractors of IZEA, and how we use such information.
## 10 The Senior Data Scientist will build and improve analytic pipelines to consolidate multiple data sources, perform analytic processing, and produce actionable business information (e.g., top-10 lists, pattern-matches, exceptions, or lists of #39;needles in the haystack#39;). S/he will apply hands-on development skills and experience as an expert in analytics, data science, pattern recognition, and
## 11 Intuit is seeking a Staff Data Scientist to cover Intuits Consumer Group (TurboTax, Mint, Turbo). We have an exciting opportunity to help shape how we use data to generate hypotheses, surface insights, and build models in order to personalize customer experiences and provide awesome outcomes for the business and our customers. This role will partner closely with data engineering, data analytics, data science, marketing managers, and product management.
## 12 Intuit is hiring a Senior Data scientist to focus on our Consumer Group. We are looking for exceptional talent that can drive customer benefit for our personal finance offerings.
## 13 Group Description\n\nClient Group Technology creates and supports business innovating technology for the Client Group. The team is responsible for business innovation, project and relationship management, data and business intelligence, vendor management, and software development as it relates to the tools used by the over 500 Client Group staff globally.\n\nThe Client Group has a presence in the Americas, Europe, Asia and Australia, and is composed of five main areas:\nSales & Client Services works with financial intermediaries and institutions to offer diversified investment solutions that help clients build and preserve their wealth\nBusiness Development is a conduit to the firmâ\200\231s investment teams and supports our clients and internal business partners through investment-platform and product content, messaging, competitive analysis and education\nProduct Strategy & Development designs, develops and manages the firmâ\200\231s global lineup of investment services and considers clientsâ\200\231 evolving needs to identify new opportunities\nMarketing promotes the firm and its services by creating, packaging and distributing content and messaging to engage diverse audiences through digital platforms and initiatives, strategic campaigns, and events\nBusiness Transformation looks to evolve and scale our business, leveraging digital and data, to drive top-line growth and improve profitability\nDescribe the role:\nThe role of the developer will be database developer / Analyst with excellent knowledge of Relational databases including Microsoft SQL Server, Sybase.\nAnalyze Internal/external, alternative data sources and provide recommendations for integration and enrichment of existing data.\nAssist in management front/middle office requests to ensure alternative data is properly evaluated, implemented, and risk assessed.\nResponsible for the new development and on-going support and maintenance of the various data feeds from external vendors, transfer agents and Platform sales.\nHelp drive continuous improvements to data quality procedures and a consistent approach to how data quality is measured, monitored and reported.\nWork with stakeholders within the organization to refine and improve knowledge and processes.\nParticipate in corporate strategy initiatives which includes advanced data queries and coordinating with management staff on other business management efforts.\nExamine and identify database structural necessities by evaluating client operations, applications, and programming.\nBuild automated process to aid in system performance by performing regular tests, troubleshooting and integrating new features.\nRecommend solutions to improve new and existing database systems.\nEducate staff members through training and individual support.\nOffer support by responding to system problems in a timely manner.\nAssist in our data Analytics / data science efforts\nDeliver well-commented, maintainable code to team standard, including unit and system tests\nLiaison with technical team and Business Analysts within AB and in some cases external vendors to understand and gather requirements and deliver results based on the requirements.\nResponsible for the new development and on-going support and maintenance of the various data feeds from external vendors, transfer agents and Platform sales.\nResponsible for application unit testing, troubleshooting, problem definition and resolution.\nResponsibilities also include documenting system enhancements and production turnover and documentation of new features.\nSupporting production jobs, by providing on-call support and expeditiously resolving data, performance and other issues as and when required.\nFulfilling adhoc requests from power users. Typically, this call for specialized processing that includes extracting specific data from the production database, mass moves of data between different entities, comparing data between Dev and PROD environments.\nDescribe the applications and business or enterprise functions the role supports:\n\nThe Dealer Marketing System (DMS) is the database used by all retail mutual fund applications. HighLine Datawarehouse a brand-new database being built by sourcing data from different systems/applications which will be the backbone of building new application that will sit on top of this Datawarehouse. Dealer Marketing System (DMS) is the database used by all retail mutual fund applications along with Customer Relationship application Salesforce (CRM), one point (data integrity maintenance), Business Intelligence Reports in MicroStrategy and SIMON (Mobile sales force application). It is the system of Record for domestic retail mutual funds sales, asset and transaction data.\n\nSome of the key operations done by DMS are:\nReceiving and processing financial transaction data feeds (purchases, redemptions, exchanges) from domestic and international transfer agencies (DST, Omnibus, CORFAX), Broker Dealer Omnibus and Recordkeeper files\nClient specific Data are ingested such as Demographic data; data packs; Market Share and Recordkeeper files\nGenerating aggregates used by downstream applications (Salesforce, SIMON etc.)\nDistributing data to all linked sites and systems: Domestic and international advisor Websites, Wholesaler portal.\nAllowing members of the Sales Data Management team to manage sales territory definitions, move trades from one region to another etc.\nThe key job responsibilities include, but are not limited to:\nDirectly responsible for the development, maintenance, and support of new and/or existing software applications.\nAnalyze Business requirements and design reports using stored procedure to support business requirements\nDevelop, maintain and support SSIS Packages related to the ETL process with SQL Server Integration Service (SSIS)\nWork with SQL and T-SQL/ANSI SQL to create & alter tables, stored procedures, functions, indexing.\nWork with SQL for query performance tuning, finding and resolving technical issues and developing SQL statements and reports.\nThe developer will also be responsible for shell scripting for batch processing, ETL based integrations in Informatica and few reporting-based applications.\nDirectly work with offshore team members and allocate work and get it completed.\nWhat makes this role unique or interesting?\n\nThis role is targeted to individuals that enjoy building software solutions that are data focused with an emphasis on supporting the sales and marketing process and building your expertise in warehouse, ETL, Business intelligence and data analytics.\n\nWhat is the professional development value of this role?\n\nThis role provides the opportunity to learn the investment business as well as providing the opportunity to advance oneâ\200\231s technical skills with a focus on business intelligence, data and learn more on predictive and prescriptive analytics. Here are just a few of the technical areas this role will focus on: Building predictive & Prescriptive models & visualizations, Relational Database (SQL), Columnar database using Sybase IQ, Unstructured Data (NoSQL), ETL using Informatica & SSIS, using Data Analysis (Python).\n\nJob Qualifications\n\nQualifications, Experience, Education:\nBS in Computer Science/Engineering, Finance, Mathematics/Statistics or a related major\nSkills:\n7+ years of strong development experience on Microsoft SQL Server related Products in Datawarehouse environment supporting various upstream and downstream applications.\nThe role requires in-depth knowledge of SQL Server (up to the latest versions -SQL Server 2016 and 2017)\nStrong experience writing SQL queries using multi-dimensional joins, triggers and stored procedures in SQL Server.\nExperience with object-oriented analysis and design (OOAD), layered software architecture and web technologies.\nExperience creating ETL design and mapping documentation using Informatica, SSIS.\nExperience in shell scripting/PowerShell for batch processes is needed.\nSome database experience in Sybase ASE/Sybase IQ is a plus\nScripting in languages such as Python programing and Azure Cloud experience is a plus\nSystem administration server experience with configuration and support of Microsoft Windows Server, OLEDB/ODBC setups.\nExperience with consuming/creating web services/rest API will be a plus.\nDevOps (CI/CD)\nProven ability to work with multiple cross-functional teams including development, QA and business analysis professionals\nSolid analytical and trouble-shooting skills coupled with experience supporting production environments\nStrong experience on working on multiple projects part of a offshore team within an onsite-offshore model with emphasis on processes, methodologies and structure to manage expectations of US based clients\nMust possess a strong work ethics well as ability to give directions to and manage results of a strong team of .NET and SQL developers\nExcellent business communication skills both written and verbal\nStrong onsite-offshore coordination skills and ability to work with US managers on priorities\nSQL server certified engineer a plus.\nExperience in financial industry with experience working building data interfaces with internal and external vendors and/or banking back office knowledge a plus.\nExperience working with formal project methodologies like waterfall/agile.\nAbility to work independently with minimal supervision.\nExposure to test methodologies\nAbility to write technical documentation\nNashville, Tennessee
## 14 JOB DESCRIPTION:\n\nDo you have a passion for Data and Machine Learning? Do you dream of working with customers on their most forward-looking AI initiatives? Does the challenge of developing modern machine learning solutions to solve real-world manufacturing problems exciting to you?\n\nWe develop software for monitoring semiconductor manufacturing process and are looking to leverage the latest technologies to address our customer's needs. You will be part of a team that investigates and builds solutions based all the data available in factories, ranging from time series data, to post manufacturing data, to production logs. You will be working side by side with application developers and customers on real world problems with actual manufacturing data.\n\nJOB FUNCTION:\n\nBasic and applied research in statistical machine learning, deep learning, and data science as well as signal and information processing to advance the state of the art in time series analysis of semiconductor manufacturing data.\n\nResponsibilities:\nPerform data analysis, data pre-processing, and feature engineering in support of advanced machine learning algorithm development. Incorporate physical and operational insights/constraints into statistical models to achieve a high degree of robustness.\nPrototype algorithms for proof of concept, validation, and software implementation.\nSupport performance evaluations and the transition of algorithms into existing fault detection and classification systems.\nConvey the results of scientific research to sponsors and the scientific community through briefings, conferences and peer-reviewed publications.\nOther related functions as assigned.\n\nREQUIRED QUALIFICATIONS:\nBachelor's degree in computer science or chemical engineering or related technical field.\nDemonstrated ability in machine learning/artificial intelligence (ML/AI) development and/or scientific modelling and data analysis.\nDemonstrated ability with python/MATLAB or similar abstract language. Experience with both traditional ML and modern deep learning approaches.\nExperience with agile development practices and Git version control.\nExperience with one or more of the DNN frameworks like TensorFlow, PyTorch, Chainer.\nExperience with SQL, Graph stores, or NoSQL stores.\nApplicant must have a dynamic skill set, be willing to work with new technologies, be highly organized and capable of planning and coordinating multiple tasks. The position will require attention to detail, effective problem solving skills and excellent judgment. Ability to work independently with sensitive and confidential information, maintain a professional demeanor, work as a team member without daily supervision.\n\nCOMPENSATION & BENEFITS:\n\nCompensation will be commensurate with experience including a competitive base salary, bonus opportunity, competitive benefits package, and relocation assistance.\n\nINFICON, is committed to ensuring that our online application process provides an equal opportunity to all job seekers that apply without regard to race, religion, ethnicity, national origin, citizenship, gender, age, protected veteran status, disability status, genetic information, sexual orientation, or any other protected characteristic. A notice describing Federal equal employment opportunity laws is available here and here to reaffirm this commitment.\n\nPI120660357
## 15 *******Please Apply using this link: https://app.smartsheet.com/b/form/2cb8018ed6a041b0870e3cd056c286ab\nOur Focus\nIn aviation safety, we seek to minimize the potential for harm to the flying public. To support this\neffort, we collect vast amounts of operational and simulation data. How can we use statistical\nanalyses and data science to analyze this data and effect meaningful change to aviation safety? If\nthis sounds interesting to you, GGTI seeks a Data Scientist to join its Data Science & Analytics Team.\nWhatâ\200\231s the Job?\n· Preprocessing, cleansing, and verifying integrity of data that can be provided as input for\nadvanced analytics\n· Ad-hoc analysis and presentations of results for technical and non-technical audiences\n· Feature selection, model building and optimization using machine learning techniques\n· Building regression, classification, association models\n· Building anomaly detection systems\nAbout You\nYou are an experienced Data Scientist with an advanced degree in a quantitative discipline. You will be able to discover information hidden in vast amounts of aviation data, including operation and simulated data, and provide recommendations to managements at various levels. Along with working with air traffic domain experts, you will be able to apply data mining techniques, perform statistical analyses, and build high quality predictive models and causal analyses of safety issues. You can analyze large, complex, multi-dimensional datasets with R and/or Python and Business Intelligence (BI) tools such as Tableau. In addition, you have some familiarity with SQL.\nWhat do you need to qualify?\n*\n· 10+ years of professional work experience in a quantitative field\n· A Ph.D. or Masterâ\200\231s Degree in operations research, applied statistics, Computer Science (data mining, machine learning stream) or a related quantitative discipline\n· Deep understanding of statistical and predictive modeling concepts, machine-learning approaches, clustering and classification techniques, and recommendation and optimization algorithms\n· Excellent applied statistics skills, such as distributions, statistical testing, regression, etc.\n· Excellent understanding of common machine learning techniques and algorithms, such as K-means, k-NN, Bayesian Networks, SVM, Decision trees, ensemble methods etc.\n· Experience with common data science toolkits, such as Python, R, Weka, NumPy, MatLab, etc. Proficiency in at least one of these (preferably Python)\n· Experience with data visualization tools, such as D3.js, Tableau etc.\n· Experience with Amazon Web Services\n· Some experience with NoSQL databases, such as MongoDB, Cassandra, or HBase\n· Ability to pass a Federal Government background investigation to obtain a Public Trust clearance.\n· Current U.S. Citizen/Green Card-holder\nStand-out Skills\nDeep learning with TensorFlow, Torch, or similar\nSpark, Hadoop, or similar\nSQL, Java or C/C++.\n*******Please Apply using this link: https://app.smartsheet.com/b/form/2cb8018ed6a041b0870e3cd056c286ab
## 16 IZEA was built to connect the worldâ\200\231s top brands with influential content creators and publishers to enable influencer marketing and content production at scale. With over 500,000 Creators reaching over 3 billion fans and followers around the globe, IZEA is unmatched in its industry experience, network diversity, and technology ecosystem. A career at IZEA offers countless ways to make an impact in a fast-growing organization!\n\nIZEA is looking for a Data Engineer to join our Core Technology group.\n\nThis team member may be located in the Orlando area or remote.\n\nYou will use the latest tools and technology to build and support IZEA's industry-leading data initiatives. Whether contributing to our data pipelines that ingest millions of records per hour, distilling vast amounts of data into meaningful insights, or operating the services that surface those insights, you will help define what the future of Influencer and Content Marketing looks like.\n\nYou will have direct access to end-users and stakeholders, and you are encouraged to build and leverage these relationships in your work. You will write and test your code, and work with our QA team to get it deployed to production. With the help of a homegrown, bot-driven CI/CD pipeline, your code will be delivered to users daily. This cross-functional team leverages Amazon Web Services for everything from ECS for containerized virtualization and hosting, to S3 for durable object storage, RDS and DynamoDB for persistence, EMR for the batch processing of billions of records, and Lambda for distributed workloads and stream consumption.\n\nAside from the day to day, we can offer you incredible benefits including an annual continuing education budget, a trust-focused development process, a flexible and collaborative work environment where balance matters, stock ownership, and an annual company retreat.\n\nThe team strives to be ego-free and motivated only by building amazing software for our users. We seek to understand the "why" behind the "what". We regularly break out into small teams to tackle problems, learn new technologies, or just share what we know with others. We test our code and invest in the health of our systems. We push each other, learn from each other, and strive to continually grow.\n\nPrimary Responsibilities\n\nYou will...\nWork with stakeholders to define the solutions to development problems and business requirements\nDevelop and maintain the features and capabilities of our data ingestion pipelines\nExtract actionable and impactful insights from vast amounts of data\nDevelop and maintain the services that surface those insights and make them available for consumption in a performant manner\nCreate unit and integration tests for your code\nReproduce and fix bugs reported by internal and external users\nSet goals and communicate often about your progress toward them\nContribute to the ongoing improvement of the engineering organization and our software\n\nWhat Weâ\200\231re Looking For\n\nA problem solver at heart\nMuch of our work revolves around problems that have no existing off-the-shelf solution or consensus on best practices. You'll often need to break down large problems into smaller more manageable tasks and utilize critical thinking to come up with novel ideas.\n\n3+ years of engineering experience\nThings move quickly in the data group. Youâ\200\231ll need to be comfortable and familiar with delivering highly scalable cloud based applications.\n\nProfessional experience with frameworks like Hadoop, Spark, or similar\nThis will be a significant component of your responsibilities. You will be working with large data sets in a distributed environment on a daily basis. Familiarity w/ the Hadoop/Spark ecosystem is a must.\n\nProfessional experience with Python data tools\nOur data team uses PySpark and Jupyter notebooks extensively. Familiarity with these technologies will also serve you well.\n\nDirect experience with relational and NoSQL databases technologies\nWe use the best tool for the job around here. When it comes to storing and accessing data, we recognize that the technology decisions we make directly impact our ability to provide a performant customer experience, and our own costs.\n\nExperience designing and building JSON based RESTful APIs\nBecause we are building our application with a front end framework, we carefully design and document the APIs to power it. To help us, we follow the JSON API spec, but any experience in building a RESTful API will be useful. Remember, the API is your contract with the consumers of your data!\n\nUnderstanding of monolithic and micro-service based architectures\nIZEAx still has some legacy monolithic characteristics. We move more and more of our technology to a distributed set of services, there are new challenges to overcome. Understanding the differences between these two models will help you take those challenges head on.\n\nBasic Linux skills\nIn order to develop our data pipelines and services, you need to run it on your laptop. This means opening up some terminal windows, running some commands, and keeping the log output open. Additionally, some of our technology stack is better accessed through CLIs. Examples include the Ember CLI, the Rails CLI/console, Docker commands, Gradle, and our own CLIs. We'll walk you through it, but you should be comfy in a terminal.\n\nAbility to multitask and prioritize multiple incoming requests\nIZEA's Engineering team strives to provide a great experience and great service to our users. In order to do that, you may need to context switch into a support issue or drop what you are doing to start work on something else. This is part of what Agile means to us.\n\nExcellent verbal and written communications skills\nRegular and timely communication is the key to a trust based development process. You should be able to simply and concisely ask for feedback and direction in terms that your audience understands, and relay requested information in a timely fashion to your leaders. You should prioritize documentation of processes and code.\n\nDemonstrated experience with the following will be highly valued\n\nData Science Background\nWhile not required, a fundamental understanding of statistics and modeling would be a great asset. Practical experience with Machine Learning and/or complex data pipelines would also be welcome.\n\nFront End development experience\nFrom time to time you may need to build visualization, or (lite) user facing experiences. Familiarity with modern web frameworks like Angular, React, Ember, or Vue would be helpful. Extra bonus points if you have familiarity w/ Javascript based visualization libraries like d3.js or Highcharts.\n\nAmazon Web Services, or other cloud providers\nIZEA's software is hosted on AWS, and you will need to acquire some familiarity with it. Previous experience in using a cloud provider, even if just for developer tooling, shows that you understand some of the nuances involved in working in the cloud. Familiarity with Amazonâ\200\231s EMR would also be an asset.\n\nContinuous Integration & Deployment\nIZEA needs to get features and fixes out to customers as soon as we possibly can with as much confidence as possible. To facilitate this, we have developed a CI/CD pipeline (using 3rd party services). An understanding of what CI/CD is will help you understand how this pipeline works and how to make it even better.\n\nGitHub\nAll of IZEA's code is source controlled on Github. We leverage Github Pull Requests for code reviews, Github integrations manage parts of our CI/CD pipeline, and Github releases define the code tags that ultimately get deployed. Much of our process documentation exists on Github pages. Familiarity with navigating Github's features will help you ramp up in our SDLC faster.\n\nJIRA\nIZEA uses JIRA to manage projects and report on progress to stakeholders inside and outside the company. While we strive to automate as much of JIRA as possible with bots, webhooks and reports, understanding how JIRA issues, links, attachments, and workflows work will help you understand our SDLC faster.\n\nAbout IZEA:\n\nWe are IZEA: The Creator Marketplace. Our cloud-based technologies connect Brands and Publishers with content Creators who blog, tweet, pin, and post on their behalf.\n\nOur driving belief is that the only way to thrive in our rapidly changing world is to change ahead of it. IZEA is in a constant state of evolution and reinvention. While we may have invented the industry, we still operate like an entrepreneurial, scrappy start-up. Your time here will be exciting, educational, and at times, a bit crazy.\n\nWith IZEA, you have the opportunity to join a non-traditional corporate culture, where creativity and productivity are valued over a suit and tie. We call it "The IZEA Way."\n\n\nWhy would you want to work here?\n\nOur developers use the latest tools and technology to build the applications and services that make up the IZEA Exchange platform. We write in whatever language we need to get the job done including Ruby, Java, Python, PHP, Swift, and Javascript. We leverage the latest frameworks, such as Rails, Symphony2, and Drop Wizard for iterating quickly on our technology.\n\nOur code is deployed and operated by the people who write it, with the help of Amazon Web Services. We leverage everything from EC2 for virtualization and hosting, to Amazon EMR and Machine Learning for advanced analytics.\n\nAside from the day to day, we offer incredible benefits including an annual continuing education budget, a flexible trust-focused development process, and an open collaborative work environment.\n\nOur team is ego-free and motivated to build great software. We regularly break out into small teams to tackle problems, learn new technologies, or just share what we know with others. We test our code and invest in the health of our systems. We push each other, learn from each other, and strive to continually grow as a team.\n\nCalifornia residents, please follow this link to view the types of information we may gather from California residents who are applicants, employees, or contractors of IZEA, and how we use such information.
## 17 IZEA was built to connect the worldâ\200\231s top brands with influential content creators and publishers to enable influencer marketing and content production at scale. With over 500,000 Creators reaching over 3 billion fans and followers around the globe, IZEA is unmatched in its industry experience, network diversity, and technology ecosystem. A career at IZEA offers countless ways to make an impact in a fast-growing organization!\n\nIZEA is looking for a Data Engineer to join our Core Technology group.\n\nThis team member may be located in the Orlando area or remote.\n\nYou will use the latest tools and technology to build and support IZEA's industry-leading data initiatives. Whether contributing to our data pipelines that ingest millions of records per hour, distilling vast amounts of data into meaningful insights, or operating the services that surface those insights, you will help define what the future of Influencer and Content Marketing looks like.\n\nYou will have direct access to end-users and stakeholders, and you are encouraged to build and leverage these relationships in your work. You will write and test your code, and work with our QA team to get it deployed to production. With the help of a homegrown, bot-driven CI/CD pipeline, your code will be delivered to users daily. This cross-functional team leverages Amazon Web Services for everything from ECS for containerized virtualization and hosting, to S3 for durable object storage, RDS and DynamoDB for persistence, EMR for the batch processing of billions of records, and Lambda for distributed workloads and stream consumption.\n\nAside from the day to day, we can offer you incredible benefits including an annual continuing education budget, a trust-focused development process, a flexible and collaborative work environment where balance matters, stock ownership, and an annual company retreat.\n\nThe team strives to be ego-free and motivated only by building amazing software for our users. We seek to understand the "why" behind the "what". We regularly break out into small teams to tackle problems, learn new technologies, or just share what we know with others. We test our code and invest in the health of our systems. We push each other, learn from each other, and strive to continually grow.\n\nPrimary Responsibilities\n\nYou will...\nWork with stakeholders to define the solutions to development problems and business requirements\nDevelop and maintain the features and capabilities of our data ingestion pipelines\nExtract actionable and impactful insights from vast amounts of data\nDevelop and maintain the services that surface those insights and make them available for consumption in a performant manner\nCreate unit and integration tests for your code\nReproduce and fix bugs reported by internal and external users\nSet goals and communicate often about your progress toward them\nContribute to the ongoing improvement of the engineering organization and our software\n\nWhat Weâ\200\231re Looking For\n\nA problem solver at heart\nMuch of our work revolves around problems that have no existing off-the-shelf solution or consensus on best practices. You'll often need to break down large problems into smaller more manageable tasks and utilize critical thinking to come up with novel ideas.\n\n3+ years of engineering experience\nThings move quickly in the data group. Youâ\200\231ll need to be comfortable and familiar with delivering highly scalable cloud based applications.\n\nProfessional experience with frameworks like Hadoop, Spark, or similar\nThis will be a significant component of your responsibilities. You will be working with large data sets in a distributed environment on a daily basis. Familiarity w/ the Hadoop/Spark ecosystem is a must.\n\nProfessional experience with Python data tools\nOur data team uses PySpark and Jupyter notebooks extensively. Familiarity with these technologies will also serve you well.\n\nDirect experience with relational and NoSQL databases technologies\nWe use the best tool for the job around here. When it comes to storing and accessing data, we recognize that the technology decisions we make directly impact our ability to provide a performant customer experience, and our own costs.\n\nExperience designing and building JSON based RESTful APIs\nBecause we are building our application with a front end framework, we carefully design and document the APIs to power it. To help us, we follow the JSON API spec, but any experience in building a RESTful API will be useful. Remember, the API is your contract with the consumers of your data!\n\nUnderstanding of monolithic and micro-service based architectures\nIZEAx still has some legacy monolithic characteristics. We move more and more of our technology to a distributed set of services, there are new challenges to overcome. Understanding the differences between these two models will help you take those challenges head on.\n\nBasic Linux skills\nIn order to develop our data pipelines and services, you need to run it on your laptop. This means opening up some terminal windows, running some commands, and keeping the log output open. Additionally, some of our technology stack is better accessed through CLIs. Examples include the Ember CLI, the Rails CLI/console, Docker commands, Gradle, and our own CLIs. We'll walk you through it, but you should be comfy in a terminal.\n\nAbility to multitask and prioritize multiple incoming requests\nIZEA's Engineering team strives to provide a great experience and great service to our users. In order to do that, you may need to context switch into a support issue or drop what you are doing to start work on something else. This is part of what Agile means to us.\n\nExcellent verbal and written communications skills\nRegular and timely communication is the key to a trust based development process. You should be able to simply and concisely ask for feedback and direction in terms that your audience understands, and relay requested information in a timely fashion to your leaders. You should prioritize documentation of processes and code.\n\nDemonstrated experience with the following will be highly valued\n\nData Science Background\nWhile not required, a fundamental understanding of statistics and modeling would be a great asset. Practical experience with Machine Learning and/or complex data pipelines would also be welcome.\n\nFront End development experience\nFrom time to time you may need to build visualization, or (lite) user facing experiences. Familiarity with modern web frameworks like Angular, React, Ember, or Vue would be helpful. Extra bonus points if you have familiarity w/ Javascript based visualization libraries like d3.js or Highcharts.\n\nAmazon Web Services, or other cloud providers\nIZEA's software is hosted on AWS, and you will need to acquire some familiarity with it. Previous experience in using a cloud provider, even if just for developer tooling, shows that you understand some of the nuances involved in working in the cloud. Familiarity with Amazonâ\200\231s EMR would also be an asset.\n\nContinuous Integration & Deployment\nIZEA needs to get features and fixes out to customers as soon as we possibly can with as much confidence as possible. To facilitate this, we have developed a CI/CD pipeline (using 3rd party services). An understanding of what CI/CD is will help you understand how this pipeline works and how to make it even better.\n\nGitHub\nAll of IZEA's code is source controlled on Github. We leverage Github Pull Requests for code reviews, Github integrations manage parts of our CI/CD pipeline, and Github releases define the code tags that ultimately get deployed. Much of our process documentation exists on Github pages. Familiarity with navigating Github's features will help you ramp up in our SDLC faster.\n\nJIRA\nIZEA uses JIRA to manage projects and report on progress to stakeholders inside and outside the company. While we strive to automate as much of JIRA as possible with bots, webhooks and reports, understanding how JIRA issues, links, attachments, and workflows work will help you understand our SDLC faster.\n\nAbout IZEA:\n\nWe are IZEA: The Creator Marketplace. Our cloud-based technologies connect Brands and Publishers with content Creators who blog, tweet, pin, and post on their behalf.\n\nOur driving belief is that the only way to thrive in our rapidly changing world is to change ahead of it. IZEA is in a constant state of evolution and reinvention. While we may have invented the industry, we still operate like an entrepreneurial, scrappy start-up. Your time here will be exciting, educational, and at times, a bit crazy.\n\nWith IZEA, you have the opportunity to join a non-traditional corporate culture, where creativity and productivity are valued over a suit and tie. We call it "The IZEA Way."\n\n\nWhy would you want to work here?\n\nOur developers use the latest tools and technology to build the applications and services that make up the IZEA Exchange platform. We write in whatever language we need to get the job done including Ruby, Java, Python, PHP, Swift, and Javascript. We leverage the latest frameworks, such as Rails, Symphony2, and Drop Wizard for iterating quickly on our technology.\n\nOur code is deployed and operated by the people who write it, with the help of Amazon Web Services. We leverage everything from EC2 for virtualization and hosting, to Amazon EMR and Machine Learning for advanced analytics.\n\nAside from the day to day, we offer incredible benefits including an annual continuing education budget, a flexible trust-focused development process, and an open collaborative work environment.\n\nOur team is ego-free and motivated to build great software. We regularly break out into small teams to tackle problems, learn new technologies, or just share what we know with others. We test our code and invest in the health of our systems. We push each other, learn from each other, and strive to continually grow as a team.\n\nCalifornia residents, please follow this link to view the types of information we may gather from California residents who are applicants, employees, or contractors of IZEA, and how we use such information.
## 18 The Position\n\n\nStatistical Scientist-Personalized Healthcare, USMA\n\nJob Description Summary\n\nBiostatistics and Data Sciences (BDS) is a cross-functional group in the Evidence Generation Medical Unit in US Medical Affairs (USMA), comprising Biostatisticians, Data Scientists, and Data Managers. We are strategic partners in generating integrated evidence through data & analytics by driving data and analytical strategy; execution by ensuring efficient and rigorous execution; and by integrating reliable, rigorous, relevant (R3) evidence to improve patient care.\n\nAs a Biostatistician with Personalized Health Care (PHC) focus in BDS, you will be responsible for supporting PHC projects, initiatives, and related activities in Oncology or non-Oncology therapeutic areas, depending on interest and experience. As such, you will be a standing member and key partner of multi-disciplinary teams, to provide statistical, strategic, and analytical leadership. You are expected to combine expertise in data science with business knowledge to identify, define, and solve the right problems and to provide strategic input to maximize business impact. Teamwork and effective communication are essential in this position. Experience in advanced analytic areas such as machine learning including deep learning, bioinformatics, genomic and image analysis are highly valued.\n\nJob Description\n\nAs a Biostatistician with Personalized Health Care (PHC) focus in BDS, you will be responsible for supporting PHC projects, initiatives, and related activities in Oncology or non-Oncology therapeutic areas. PHC projects. As such, you will be a standing member and key partner of multi-disciplinary teams, to provide statistical, strategic, and analytical leadership. You are expected to combine expertise in data science with business knowledge to identify, define, and solve the right problems and to provide strategic input to maximize business impact. Teamwork and effective communication are essential in this position. Experience in advanced analytic areas such as machine learning including deep learning, bioinformatics, genomic and image analysis are highly valued.\n\nThe PHC statistical scientist will support both molecule and non-molecule specific projects. For molecule specific projects, you will partner with the Biostatistics Program Directors, Medical Partners, Network of Scientists, and other cross-functional partners on medical, scientific, and patient-care applications. You may also work closely with PHC counterparts in Pharmaceutical Development and potentially Pharmaceutical Research and Early Development to align and collaborate on projects across varying drug development phases. For non-molecule specific projects, you are expected to have expertise and scientific curiosity in a variety of data types, including, but not limited to, genomics, imaging, and digital health data or related fields to support Medical Affairs and PHC activities; and to collaborate with research and biomarker scientists, computational biologists and computer scientists.\n\nYou will be expected to perform your responsibilities with increased expertise and independence. Where assigned, you will also act as Medical Affairs representatives in related review and decision-making forums or committees, including, where applicable, representing Biostatistics input and data in internal or external meetings, presentations and communications.\n\nAll Roche employees are expected to effectively contribute to cross-functional collaboration and coordination and comply with all laws, regulations, policies & procedures that govern our business.\n\nKey Accountabilities\nContributes to the strategic, statistical and analytical direction, insights, recommendations and overall input into development and execution of PHC strategies, initiatives, and projects\nDevelops, implements and oversees timely, thorough, consistent, and quality execution of analytical plans that support PHC strategies, initiatives, and projects; assures statistical integrity, adequacy and accuracy\nRepresents BDS in the US PHC Center of Excellence and provides appropriate strategic and statistical input on PHC-related activities\nIdentifies, recommends and undertakes activities that advance healthcare delivery and PHC to improve patient-care and outcomes\nGenerates evidence using diverse data sources and appropriate analyses techniques\nConsults and provides statistical training to key stakeholders\nParticipates in the ongoing development, refinement and enhancements of departmental methodologies and techniques for biostatistics research, analysis and reporting\nAs needed, represents U.S. Medical Affairs Biostatistics to global Roche partners and works collaboratively to coordinate and align on analytical plans\nStays informed and abreast of the external landscape as it relates to new developments in advanced analytics, assigned molecules, products, programs and the associated therapeutic area(s)\nJob Qualifications\n\nCandidates for this position should hold the following qualifications, have the following experience, and be able to demonstrate the following knowledge, skills and abilities to be considered as a suitable applicant. Please note that except where specified as preferred, or as a plus, all points listed below are considered minimum requirements. Job level will be matched according to candidates experiences.\nPh.D. or M.Sc. with 2+ years relevant industry experience, in Statistics, Biostatistics, or related fields\nSound knowledge of theoretical and applied statistics including methods in advanced analytics such as machine learning\nProven experience in the principles and techniques of data analysis, interpretation and clinical relevance\nRelevant therapeutic experience is a plus\nGood understanding of regulatory guidelines in drug development, GCP (Good Clinical Practice) and ICH (International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use) guidelines\nSound understanding and appropriate application of business requirements and processes\nWorking experience with all data types including high dimensional data\nSkills/Abilities\nProven abilities to perform statistical scientist responsibilities with increasing expertise and independence\nHas demonstrated, through past experience, abilities to competently manage the majority of biostatistics deliverables associated with assigned preÂlaunch, launch and/or post-Âmarketing medical strategies, plans and tactics\nAnalytical and problem solving capabilities and skills\nEvident Drive for Results (demonstrates interest and ability to learn new things, takes initiative, welcomes problems as challenges; finds solutions to technical problems)\nIn-Âdepth knowledge of the multiÂdisciplinary functions involved in a drug development process, and can proactively integrate multiple perspectives into the post-Âmarketing process for best end results\nExcellent project management skills\nStrong interpersonal, verbal and writing communication, and influencing skills\nProven track record of working highly effectively, efficiently and collaboratively with others\nProven experience and skills working with multiÂdisciplinary teams\nHigh level of proficiency in computer skills and statistical programming languages, such as, SAS, R, RShiny, Python, etc.\nWho We Are\n\n\nA member of the Roche Group, Genentech has been at the forefront of the biotechnology industry for more than 40 years, using human genetic information to develop novel medicines for serious and life-threatening diseases. Genentech has multiple therapies on the market for cancer & other serious illnesses. Please take this opportunity to learn about Genentech where we believe that our employees are our most important asset & are dedicated to remaining a great place to work.\n\nThe next step is yours. To apply today, click on the "Apply online" button.\n\nGenentech is an equal opportunity employer & prohibits unlawful discrimination based on race, color, religion, gender, sexual orientation, gender identity/expression, national origin/ancestry, age, disability, marital & veteran status. For more information about equal employment opportunity, visit our Genentech Careers page.\n\nJob Facts\nJOB FUNCTION\nBiometrics COMPANY/DIVISION\nPharmaceuticals SCHEDULE\nFull time JOB TYPE\nRegular
## 19 Job Description\nWe are looking for Data Scientist with Python, Tensorflow, SKLEARN, pandas,numpy skills. Experience 2-8 years.
## 20 Search by Keyword\nMore Options\n\nSearch by Location\n\nClear\n\nLoading...\n\nDepartment\n\nAll\n\nState\n\nAll\n\n×\n\nSend me alerts every days\nalert frequency in every certain days\nCreate Alert\n\nform.emailsubscribe-form {\ndisplay: none;\n}\n×\n\nSend me alerts every days\nalert frequency in every certain days\n\nShare this Job\n\nAddThis\nEmail\nFacebook\nTwitter\nLinkedIn\nPinterest\n\nApply now »\n\n\n\nManager / Lead, Data Science & Analytics\n\nDate:\n\nJun 6, 2020\n\nLocation:\n\nDanvers, MA, US\n\n#job-location.job-location-inline {\ndisplay: inline;\n}\n\nAbiomed is a pioneer and global leader in healthcare technology and innovation, with a mission of Recovering Hearts & Saving Lives. With corporate headquarters in Danvers, Massachusetts, offices in Aachen & Berlin, Germany and Tokyo, Japan, Abiomedâ\200\231s 1,400 employees form one of the fastest growing medical device companies in the world. We attract and retain exceptional talent with our collaborative culture, passion for our work, and a strong commitment to employee professional development.\n\nPatients First | Innovation | Winning Culture | Heart Recovery\n\nThe Abiomed Applied Research group is a team of scientists and engineers building the companyâ\200\231s next generation of data-driven algorithms and creating Abiomedâ\200\231s data consolidation, extraction and analytics platform. We are looking for a hands-on team lead for a growing data science and analytics team. She or he will work closely with stakeholders in clinical operations, medical affairs, regulatory, marketing, and other business units to design solutions for a variety of business needs. The Data Science and Analytics team lead will also work closely with our data engineering team to utilize and request extensions of existing internal API gateways. Ultimately, she or he will efficiently deliver high quality data analyses for different stakeholders within Abiomed. The Data Analytics Lead, as a core member of the Applied Research team, will play a central role in fulfilling Abiomedâ\200\231s mission of recovering heart and saving lives. We are looking for a creative self-starter who thrives in an agile environment and enjoys a fast-paced, dynamic research group.\n\nResponsibilities:\nAnalyze KPIs and key business analytics\nAccountable for the development of direct reports through performance reviews, instruction, mentoring and coaching. This includes setting goals, conducting periodic and year end performance reviews, proactively identifying and addressing any areas of concern, providing development opportunities to team members and as needed, escalating performance and/or personnel issues\nDevelop, maintain, and improve holistic data sets, reports, and dashboards to monitor company performance\nAnalyze business intelligence and support the business by communicating key trends, insights, risks, and findings to management\nUse a data-informed approach to inform and drive day-to-day focus and operational efficiencies\nManage project portfolio by assigning resources according to priorities and keeping stakeholders informed on progress\nPerforms other duties and projects as assigned\nJob Requirements:\nM.S. or Ph.D. Data Science, Statistics, or other related area of study.\nMinimum of 5 years of relevant work-related experience, 10+ years preferred\nExperience formally leading teams of individuals across all levels (i.e., entry-level to Principal)\nExperience in project management for a data science and/or analytics team\nExperience conducting statistical analyses of various types including (but not limited to) distributional tests, autoregressive models, trend analysis, Bayesian models\nExperience using APIs to request data from our data warehouse and data lake (experience with clinical data a plus, experience with database design/ETL a plus)\nExperience in building and unit testing production-level algorithms and predictive analytics pipelines\nSolid experience in data analytics applications (Excel, R, SAS, Matlab)\nDemonstrated data visualization experience (Power BI, Tableau, D3.js/javascript, etc)\nExperience with SQL, Python, C/C++, Agile framework\nSolid experience in machine learning toolkits (Scikit Learn, TensorFlow, Keras, etc)\nExperience working with cloud database technologies, such as Azure Web Services, Amazon Web Services, PostGres\nStrong critical thinking and organizational skills; including customer facing and strong communication skills. Needs to interpret data and make recommendations to business leaders and customers.\nStrong communicator who work cross-functionally on defining the scope of projects\nStrong work ethic and high level of personal integrity and accountability\nDemonstrated excellence of design thinking and keen observational skills\nSelf-motivated and good team player\nWilling to learn and explore new approaches to data analytics\nIndependent, efficient, and able to manage several concurrent projects\nAbiomed is an Equal Opportunity Employer committed to a diverse workforce. Abiomed will not discriminate against any worker or job applicant on the basis of race, color, religion, gender, gender identity, national origin, ancestry, age, sexual orientation, gender identity, marital or civil partnership status, pregnancy, gender reassignment, non-job related mental or physical disability, genetic information, veteran status, military service, application for military service, or membership in any other category protected under law.\n\nNearest Major Market: Boston\n\nApply now »\n\nFind similar jobs:\nEngineering & R&D\n\nApply now »\n\n\n\nManager / Lead, Data Science & Analytics\n\nDate:\n\nJun 6, 2020\n\nLocation:\n\nDanvers, MA, US\n\n#job-location.job-location-inline {\ndisplay: inline;\n}\n\nAbiomed is a pioneer and global leader in healthcare technology and innovation, with a mission of Recovering Hearts & Saving Lives. With corporate headquarters in Danvers, Massachusetts, offices in Aachen & Berlin, Germany and Tokyo, Japan, Abiomedâ\200\231s 1,400 employees form one of the fastest growing medical device companies in the world. We attract and retain exceptional talent with our collaborative culture, passion for our work, and a strong commitment to employee professional development.\n\nPatients First | Innovation | Winning Culture | Heart Recovery\n\nThe Abiomed Applied Research group is a team of scientists and engineers building the companyâ\200\231s next generation of data-driven algorithms and creating Abiomedâ\200\231s data consolidation, extraction and analytics platform. We are looking for a hands-on team lead for a growing data science and analytics team. She or he will work closely with stakeholders in clinical operations, medical affairs, regulatory, marketing, and other business units to design solutions for a variety of business needs. The Data Science and Analytics team lead will also work closely with our data engineering team to utilize and request extensions of existing internal API gateways. Ultimately, she or he will efficiently deliver high quality data analyses for different stakeholders within Abiomed. The Data Analytics Lead, as a core member of the Applied Research team, will play a central role in fulfilling Abiomedâ\200\231s mission of recovering heart and saving lives. We are looking for a creative self-starter who thrives in an agile environment and enjoys a fast-paced, dynamic research group.\n\nResponsibilities:\nAnalyze KPIs and key business analytics\nAccountable for the development of direct reports through performance reviews, instruction, mentoring and coaching. This includes setting goals, conducting periodic and year end performance reviews, proactively identifying and addressing any areas of concern, providing development opportunities to team members and as needed, escalating performance and/or personnel issues\nDevelop, maintain, and improve holistic data sets, reports, and dashboards to monitor company performance\nAnalyze business intelligence and support the business by communicating key trends, insights, risks, and findings to management\nUse a data-informed approach to inform and drive day-to-day focus and operational efficiencies\nManage project portfolio by assigning resources according to priorities and keeping stakeholders informed on progress\nPerforms other duties and projects as assigned\nJob Requirements:\nM.S. or Ph.D. Data Science, Statistics, or other related area of study.\nMinimum of 5 years of relevant work-related experience, 10+ years preferred\nExperience formally leading teams of individuals across all levels (i.e., entry-level to Principal)\nExperience in project management for a data science and/or analytics team\nExperience conducting statistical analyses of various types including (but not limited to) distributional tests, autoregressive models, trend analysis, Bayesian models\nExperience using APIs to request data from our data warehouse and data lake (experience with clinical data a plus, experience with database design/ETL a plus)\nExperience in building and unit testing production-level algorithms and predictive analytics pipelines\nSolid experience in data analytics applications (Excel, R, SAS, Matlab)\nDemonstrated data visualization experience (Power BI, Tableau, D3.js/javascript, etc)\nExperience with SQL, Python, C/C++, Agile framework\nSolid experience in machine learning toolkits (Scikit Learn, TensorFlow, Keras, etc)\nExperience working with cloud database technologies, such as Azure Web Services, Amazon Web Services, PostGres\nStrong critical thinking and organizational skills; including customer facing and strong communication skills. Needs to interpret data and make recommendations to business leaders and customers.\nStrong communicator who work cross-functionally on defining the scope of projects\nStrong work ethic and high level of personal integrity and accountability\nDemonstrated excellence of design thinking and keen observational skills\nSelf-motivated and good team player\nWilling to learn and explore new approaches to data analytics\nIndependent, efficient, and able to manage several concurrent projects\nAbiomed is an Equal Opportunity Employer committed to a diverse workforce. Abiomed will not discriminate against any worker or job applicant on the basis of race, color, religion, gender, gender identity, national origin, ancestry, age, sexual orientation, gender identity, marital or civil partnership status, pregnancy, gender reassignment, non-job related mental or physical disability, genetic information, veteran status, military service, application for military service, or membership in any other category protected under law.\n\nNearest Major Market: Boston\n\nApply now »
## 21 Function: Engineering & Technology\n\nNearest Major Market: Hampton Roads
## 22 Deepen understanding and usage of data across the enterprise.\n\nAt Ameritas , fulfilling life is what we do daily. We continuously strive to help our customers and employees enjoy life at its very best by reducing uncertainty, helping grow assets an
## 23 About Us:\n\nHeadquartered in beautiful Santa Barbara, HG Insights is the global leader in technology intelligence. HG Insights uses advanced data science methodologies to help the world's largest technology firms and the fastest growing companies accelerate their sales, marketing, and strategy efforts.\n\nWe offer a casual yet professional environment. Get your sweat on at one of our fitness classes or go for a run along the beach which is two blocks away. You can find employees riding bikes to lunch in the funk zone or hanging out in one of our collaboration spaces. We are passionate about our jobs with a get-it-done attitude, yet we don't take ourselves too seriously.\n\nWhat You'll Do:\n\nWe are looking for a data scientist with software development or data engineering background to join our research team which reports directly to the CTO. We are a rapidly growing company with small focused engineering teams that deliver innovative features to a fast growing market. We build big-data systems utilizing cutting edge technologies and solutions that allow for our engineers to continuously learn and develop while shipping amazing products.\n\nQualities/ Experience:\nSelf-learner, hacker, technology advocate who can work on anything\nAmazing engineering skills, you're on your way to being the one of the best engineers you know\nYou can architect, design, code, test, and mentor others\nExperience working with interesting and successful projects\nThrive in a fast growing environment\nExcellent written and spoken English communication\nAn interest in Machine Learning and Natural Language Processing\nWhat You'll Be Responsible For:\nBuild solutions for text classification, entity linking, and entity extraction and other related projects\nScaling machine learning and NLP projects to run against large datasets in virtualized environments.\nYou will collaborate with Product Development Teams to build the most effective solutions\nYou will develop features in our databases, backend apps, front end UI, and Data as a Service (DAAS) product\nYou will help architect and design large scale enterprise big-data systems\nYou will work on ideas from different team members as well as your own\nFix bugs rapidly\nAttend daily stand-up meetings, planning sessions, encourage others, and collaborate at a rapid pace\nWhat You'll Need:\nBS, MS, or Ph.D. in Computer Science or related technical discipline\nExperience Natural Language Processing, preferably in a commercial setting.\nExperience building Logistic Regression Models\nProficient in Python and Jupyter as well as related data science libraries (such as Scikit-learn, NLTK, SpaCy, Tensorflow)\nProficient in Java or Scala (3+ years experience recommended)\nExperience with MySQL, ElasticSearch, ESB, Hadoop, Spark, or other related data processing/database technologies\nExperience with Amazon Web Services (EC2, S3, RDS, EMR, ELB, etc.)\nExperience with web services using REST in Java\nActual coding experience in large distributed environments with multiple endpoints and complex interactions\nComfortable in an agile development environment\nUnderstanding and have real world experience using design patterns\nComfortable programming on a Mac with Intellij and other tools\nHG Insights Company is an Equal Opportunity Employer\n\nPlease note that HG Insights does not accept unsolicited resumes from recruiters or employment agencies. In the event of a recruiter or agency submitting a resume or candidate without a signed agreement being in place, we explicitly reserve the right to pursue and hire such candidates without any financial obligation to the recruiter or agency. Any unsolicited resumes, including those submitted directly to hiring managers, are deemed to be the property of HG Insights
## 24 Role Description\nAs a data scientist at Triplebyte, youâ\200\231ll have the opportunity to work on a variety of challenges to help us scale. You'll be part of a small team, who are leveraging data to fix technical hiring. Your day to day, will include a mix of dataset acquisition, statistical modeling, exploratory data analysis, and software engineering. Youâ\200\231ll report directly to Triplebytes' Head of Machine Learning and will work alongside a team of 6-8 machine learning engineers and data scientists.\nFields your work will touch on\nPsychometrics\nRecommender systems\nTime series analysis\nSurvival analysis\nBayesian inference\nProbabilistic programming\nThis is an ideal role for a data scientist who wants the scope and responsibility to own features/products from the inception and research phase through to measuring real-world results.\n\nAbout Triplebyte\nTriplebyte is a hiring marketplace used by companies like Apple, Dropbox, Stripe, and Instacart to hire the best technical talent. We are on a mission to build a meritocratic hiring process, and we do all our evaluation background blind. Our ultimate goal is to collect the largest dataset and use this to build the world's best technical hiring process. No other company has successfully done what we're doing in this field.\nWe are growing extremely quickly, working on a problem that is fundamental, sitting at the crossroads between the workforce and employers. Ten years from now, it'll look silly to use anything other than Triplebyte for technical hiring.\nCompany Culture\nWe have a laid-back, friendly office culture. Over lunch you'll often find us discussing the latest in technology, books, and pop culture, and then maybe getting in a quick game of chess or babyfoot (foosball).\nSince we're an early-stage company, we move fast, and it's important that each member of our team is able to take ownership of projects by defining problems, brainstorming solutions, and running experiments.
## 25 The Lead Certified Clinical Laboratory Scientist provides leadership support to the technical lab staff. This person is also responsible for being the workflow and troubleshooting point person in the technical lab. This position is primarily responsible for coordinating and conducting day-to-day activities in the laboratory to ensure timely and accurate testing. The Lead Certified Clinical Laboratory Scientist works directly with the Clinical Laboratory Supervisor to anticipate and resolve issues related to efficiencies in throughput, staffing, and laboratory processes. This position is also responsible for verifying result release and pending list management and investigation. The Lead is expected to spend at approximately half of their time performing laboratory testing on the bench.\nShift: Saturday - Tuesday, 8:00pm - 6:30am\n\nPerforms highly complex analytic processes without direct supervision. Utilizes routine and specialized automated and non-automated laboratory procedures and/or techniques after completing training and demonstrating competency according to established lab section operating procedures.\nOperates laboratory instruments, performance maintenance as needed and ensures proper functioning of laboratory equipment.\nIdentifies errors and problems, assists in troubleshooting instrument malfunction(s), and may assist service and support in performing preventive and corrective maintenance and repairs on laboratory equipment.\nCompletes required maintenance activities on equipment, recognizes and elevates potential issues to the team members responsible.\nMaintains records and documentation of maintenance.\nCalibrates laboratory instruments to ensure accuracy of test results. Performs quality control procedures as specified and maintains quality control records and documentation necessary to meet the standards of accrediting agencies.\nUnderstands appropriate specimen collection, handling, and transport procedures.\nPrepares specimens for analysis and determines acceptability of samples within guidelines.\nVerifies accuracy, and enters data in the laboratory computer system, along with appropriate explanatory or interpretive information.\nAssists the department supervisor in ensuring that all section turn-around times are maintained.\nValidates acceptability of test results by review of quality control and all other test parameters.\nIdentifies the technical, instrumental, and/or physiologic causes of unexpected test results.\nEvaluates and calculates quality control statistics to assess accuracy, reproducibility, and validity of current laboratory methods.\nMonitors quality assurance and assists in data collection and preparation of QA indicators.\nPerforms internal and external proficiency testing. Handles proficiency testing samples in the same manner as patient samples.\nMeets work product output expectations.\nUnderstand and assist with Quality Control Specialist in performing lot to lot and analyzer to analyzer testing.\nComplies with all safety and hazard regulations as outlined in the Clinical Laboratory Safety Manual and ensures that all employees are following laboratory and regulatory guidelines. Understands, maintains, and enforces safety guidelines.\nUtilizes statistical methods to assess laboratory testing.\nObserves and demonstrates principles of data security and patient confidentiality.\nMaintains ethical standards in the performance of testing and in interactions with patients, co-workers, and other health care professionals.\nPerforms analytical and decision-making functions without direct supervision.\nPrioritizes order of testing based on priority of request, workload, and testing schedules to meet turn-around times. Coordinates general workflow in assigned area daily to ensure workload is satisfied.\nDifferentiates technical, instrumental, and/or physiologic causes for unexpected test results.\nResolves and documents resolution of all QC results which fail lab criteria and institutes corrective action.\nEvaluates instrument/method failure and determines when back-up methods must be initiated.\nResponds to technical questions of laboratory staff and others in Exact Sciences Labs.\nParticipates in continuing education and staff meetings. Responsible for own professional development.\nAssists with training and competency of employees.\nAssists with knowledge transfer of changes and additions to laboratory procedures, processes and policies, including methodology and instrument operation.\nMay be requested to give lectures or provide demonstrations. May assist the education coordinator in the development of objectives, learning activities and evaluation mechanisms.\nProvides technical information and/or instruction to clients, new employees, medical students, residents, peers, and the public as requested and where appropriate.\nContributes to design, research, review and writing of laboratory procedures. Remains informed of procedure updates and changes and ensures employees demonstrate knowledge and competency regarding changes.\nPrimary source of contact for staff in the laboratory area for technical and administrative problem solving.\nConducts check in meetings with members of their respective shift.\nParticipates in year-end reviews of employees by providing input and delivering the review.\nMaintains adequate inventory of reagents and supplies.\nSuggests cost-effective laboratory procedures and protocol changes.\nApplies step by step thinking, problem solving and critical thinking patterns.\nSupervises laboratory personnel as assigned.\nProvide specimen processing leadership as needed.\nImplements changes as assigned in response to new technology and laboratory procedures.\nReviews daily data reports as requested.\nReports test results through ESSS (Exact Sciences Software System).\nMaintains open and effective communication with personnel in work team, and with members of other teams throughout the laboratory.\nDemonstrates willingness to cooperate with team members and with cross-functional groups in the laboratory to accomplish timely and accurate testing.\nDemonstrates professional demeanor, in personal appearance and behavior, in all work-related interactions inside and outside of the laboratory.\nDemonstrates adaptability by embracing changes in the laboratory with a positive attitude.\nProvides constructive criticism for modification of laboratory procedures, policies and test/employee scheduling.\nInteracts with other healthcare workers to solve problems and interprets patient lab results within the framework of medical technology.\nAbility to respond to stakeholder requests in a professional and timely manner.\nExceptional written and verbal communication skills and strong attention to detail.\nAbility to train others on technical concepts and test for understanding.\nPosition requires work in normal laboratory environment. Special uniform and personal protective equipment are required while working in the laboratory.\nUphold company mission and values through accountability, innovation, integrity, quality, and teamwork.\nSupport and comply with the companyâ\200\231s Quality Management System policies and procedures.\nRegular and reliable attendance.\nAbility to use near vision to view samples at close range.\nAbility to perform the essential functions of the job such as hear timers go off, etc.\nAbility to perform the essential functions of the job such as communicating with staff, patients, colleagues, and providers.\nAbility to grasp with both hands; pinch with thumb and forefinger; turn with hand/arm; reach above shoulder height.\nAbility to perform the essential functions of the job such as the stress of reporting STAT lab work and performing multiple lab tests simultaneously.\nAbility to lift and move 20-40 pounds on an occasional basis (up to 25% of time).\nAbility to stand, walk, bend and reach on a regular basis (standing â‰\21075% of time; sitting â‰\21025% of time).\nAbility to grasp with both hands; pinch with thumb and forefinger; turn with hand/arm; reach above shoulder height.\nAbility to listen and speak on the telephone and write simultaneously.\nAbility to operate telephone system and computer keyboard and mouse.\n\nMinimum Qualifications\nBachelor's degree in Clinical Laboratory Science or Medical Technology or in the chemical or biological sciences. Must satisfy the education requirements of the applicable certifying agency, i.e. ASCP.\nPossession of appropriate certification at time of hire from one of the nationally recognized certification agencies, i.e. ASCP or state licensure that has been determined to be equivalent, and maintain throughout employment in position.\n6+ years relevant experience in a clinical laboratory desired for Certified Lead roles responsible for testing New York and California samples.\n4+ years of relevant experience in a clinical laboratory desired for the Certified Lead roles that are not responsible for New York or California sample testing.\nAuthorization to work in the United States without sponsorship.\nDemonstrated ability to perform the Essential Duties of the position with or without accommodation.\nPreferred Qualifications\nExperience in molecular testing desired.\n\nWe are an equal employment opportunity employer. All qualified applicants will receive consideration for employment without regard to age, color, creed, disability, gender identity, national origin, protected veteran status, race, religion, sex, sexual orientation, and any other status protected by applicable local, state or federal law. Applicable portions of the Companyâ\200\231s affirmative action program are available to any applicant or employee for inspection upon request.
## 26 Job Title: Data Scientist\n\nLocation: New Jersey\n\nDuration: Long (Contract)\n\nRate: $60/hr.\n\nClient: DXC\n\nÂ\n\nWho are we looking for?\n\nLooking for Data Scientist resource - Who has Sound knowledge in the product support and implementation\n\nÂ\n\nTechnical Skills:\n\n5+ years of hands-on experience as a Data Scientist with â\n\nâ Undertaking data collection, preprocessing and analysis\n\nâ Building models to address business problems\n\nâ Presenting information using data visualization techniques\n\nâ Experience in data mining, business intelligence tools, data frameworks\n\nâ Understanding of machine-learning and operations research\n\nâ Knowledge of R, SQL and Python; familiarity with languages such Scala, Java or C++ would be an advantage
## 27 Job Description\nWe are looking for the sharpest of the sharp Data Scientists with 2-5 years experience. Our client will accept candidates with academic experience only, if the candidate has some interesting projects under their belt.\n\nOur client wants an advanced degree, PhD preferred. Experience with Python and SQL Server is required.\n\nYou'll need great communication skills for this job; the ability to make presentations to the C level and board is important. You must be authorized to work in the USA, no sponsorship is available.\n\nThis company is rapidly growing in the package delivery space. Great atmosphere, good benefits.
## 28 Posting Title\nData Scientist / Machine Learning Expert\n\n04-Feb-2020\n\nJob ID\n288341BR\n\nJob Description\nONE Global Discovery Chemistry Community working across 7 disease areas at the Novartis Institutes for BioMedical Research (NIBR) is seeking a highly talented and motivated Data Scientist to join our Global Discovery Chemistry Department in Cambridge, MA. The successful candidate will join an energizing and collaborative research organization, working alongside colleagues who are committed to improving human health through the discovery of transformative medicines.\n\nWe are seeking a unique data scientist with the skills, experience and passion to extract new knowledge and disruptive insights from the large and rich body of data collected by one of the worldâ\200\231s leading pharmaceutical companies. You will be a member of our global Computer-Aided Drug Discovery (CADD) group, an interdisciplinary team of expert molecular modelers, cheminformaticians, and data scientists. Teamed up with domain experts from biology, chemistry and translational medicine, this is a unique opportunity to develop and apply cutting-edge machine learning technologies to uncover insights to real-world drug discovery problems and innovate paths to new medicines.\n\nYour responsibilities include:\nâ\200¢Develop and implement methods for extracting patterns and correlations from both internal and external data sources using machine learning toolkits\nâ\200¢Develop workflows for conducting comparative analysis among Novartisâ\200\231 diverse data sources as well as generalizing approaches developed in-house or externally.\nâ\200¢Enable open-source solutions for internal use and implement cutting-edge published scientific methods.\nâ\200¢Develop customized machine learning solutions including data querying and knowledge extraction.\nâ\200¢Interact and be part of interdisciplinary project teams to drive effective decision-making by mining and developing predictive models\nâ\200¢Develop new skills in the area of cheminformatics and drug discovery and leverage those to accelerate development of new machine learning algorithms\nâ\200¢Keep ahead of scientific literature and interact with internal and external scientists to integrate novel data science technologies\n\nMinimum requirements\nEducation:\n\nAdvanced degree (M.Sc. or higher) in data science and machine learning, statistics, computer sciences, cheminformatics, mathematics, computational chemistry, computational biology, bioinformatics, or related field.\n\nMinimum experience & skills:\n\nâ\200¢In-depth experience with modern and classical machine learning methods\nâ\200¢Strong statistical foundation with broad knowledge of supervised and unsupervised techniques\nâ\200¢Programming experience (preferred Python, R, C++) preferably in Linux and high-performance computing environments\nâ\200¢Good listener - strong, concise, and consistent written and oral communication\nâ\200¢Talent for communicating stories through data visualizations\nâ\200¢Proven ability to collaborate with others\nâ\200¢A passion for tackling challenging problems and developing creative solutions\nâ\200¢A drive for self-development with a focus on scientific know-how\n\nAdditional qualifications that will help in the role:\nâ\200¢Demonstrated impact using machine learning libraries, such as scikit-learn, PyTorch or similar in a cheminformatics context\nâ\200¢Hands on experience with data analysis software such as Spotfire, R-shiny or similar\nâ\200¢Working experience with open-source cheminformatics toolkits such as RDKit\nâ\200¢Working experience with source-code management systems such as Git/github/bitbucket\nâ\200¢Familiar with the foundational concepts in molecular biology, pharmacology or medicine. Working knowledge of medicinal chemistry and drug discovery is a plus\n\nWhy consider Novartis?\n\n750 million. Thatâ\200\231s how many lives our products touch. And while weâ\200\231re proud of that fact, in this world of digital and technological transformation, we must also ask ourselves this: how can we continue to improve and extend even more peopleâ\200\231s lives?\n\nWe believe the answers are found when curious, courageous and collaborative people like you are brought together in an inspiring environment. Where youâ\200\231re given opportunities to explore the power of digital and data. Where youâ\200\231re empowered to risk failure by taking smart risks, and where youâ\200\231re surrounded by people who share your determination to tackle the worldâ\200\231s toughest medical challenges.\n\nWe are Novartis. Join us and help us reimagine medicine.\n\nJob Type\nFull Time\n\nCountry\nUSA\n\nWork Location\nCambridge, MA\n\nFunctional Area\nResearch & Development\n\nDivision\nNIBR\n\nBusiness Unit\nGlobal Discovery Chemistry\n\nEmployment Type\nRegular\n\nCompany/Legal Entity\nNIBRI\n\nEEO Statement\nThe Novartis Group of Companies are Equal Opportunity Employers and take pride in maintaining a diverse environment. We do not discriminate in recruitment, hiring, training, promotion or any other employment practices for reasons of race, color, religion, gender, national origin, age, sexual orientation, marital or veteran status, disability, or any other legally protected status.\n\nShift Work\nNo
## 29 Requirements for the two data scientists for Rupeshrsquos team 4+ years of work experience efficient coders heavy python skills and experience (must pass a python test to be considered) big query experience is a big plus, but strong SQL will do machine learning (TensorFlow, Neural Networks, Deep Learning) Job Description look at existing code, understand it, follow through the process Code Validation going through major GCP business transformation look at legacy processes and code created on-prem, check against new GCP version, then make sure it's stable and verify results. recommend best practices. New Development create new processes for new product development
## 30 Role: Data Scientist.\nLocation: Foster City, CA\nHire Type: 12 Months Contract\n\nJob Description:\nAdvanced degree in Data Science, Statistics, Computer Science, or similar.\nExtensive experience as a Data Scientist.\nProficiency in R or Python, where the former is preferred.\nIn-depth understanding of SQL.\nCompetent in machine learning principles and techniques.\nDemonstrable history of devising and overseeing data-centered projects.\nAbility to relay insights in layman's terms, such that these can be used to inform business decisions.\nOutstanding supervision and mentorship abilities.\nCapacity to foster a healthy, stimulating work environment that frequently harnesses teamwork.
## 31 Job Description\nOur Northeast Life client has a great opportunity for an ASA/FSA with 5+ years of actuarial experience, including 2+ years in predictive modeling. The ideal candidate would have strong project management skills, excellent communication, and good familiarity with life underwriting. Experience study for mortality and lapse background strongly preferred. Excel, SQL, and R required, Python, Tableau are a plus. (#48561)
## 32 Deepen understanding and usage of data across the enterprise.\n\nAt Ameritas , fulfilling life is what we do daily. We continuously strive to help our customers and employees enjoy life at its very best by reducing uncertainty, helping grow assets an
## 33 Ready to write the best chapter of your career? XSELL Technologies is an artificial intelligence company focused on increasing sales. Our cloud-based machine learning engine uses predictive analytics and natural language processing to equip sales professionals with the best real-time responses, driving improved conversion rates and customer experiences. We pride ourselves on our high performing, collaborative culture. We are passionate about our product, our clients, and our industry leading results.\n\nXSELL is currently seeking a Data Scientist to serve as a key member of our Data Science team. This role will work within the SAFe Agile framework of continuous delivery, work with other Data Scientists, and be a critical resource for our growing Data Science staff.\n\nJob Description\nDesigning, implementing and improving XSELLâ\200\231s proprietary softwares, real time messaging recommendation systems, HiPerCoBot and QABOt using various mathematical, Natural Language Processing (NLP) algorithms and machine learning models.\nDeveloping new systems using Python NLP (lemmatization, stemming, named entity recognition) and updating the existing systems by adding new features to enhance the customers experience.\nInteracting with stakeholders to identify the requirements and define new processes.\nClosely work with the team to define the architecture of the Microservice environment and deploy systems using RESTful APIâ\200\231s.\nPlanning, development, and analysis of dialogue prediction/sentiment models, causal models of customer/agent behavior.\nImproving the accuracy of the in house XSELL Recommendation System software suite by evaluating and monitoring annotations manually labelled by Ontology team.\nAnalyzing and researching the current market trends for possible AI opportunities/improvements.\nQualifications\nMS/PhD in a quantitative discipline (e.g. Machine Learning, Data Science, Computer Science, Mathematics, Statistics, or a related field).\n3+ years full time employment or postdoctoral experience building and validating predictive models on structured or unstructured data.\nProficiency in Python with knowledge of basic libraries for data manipulation, machine learning, natural language processing and text mining.\nExperience creating and using advanced machine learning algorithms (regression, neural networks, etc.)\nUnderstanding of deep learning algorithms and experience with related libraries such as TensorFlow, PyTorch.\nNice to Have\nExperience in SQL.\nUnderstanding of object oriented design.\nAbility to craft new concepts and stay current with academic research\nExperience with graph databases and graph designs.\nExperience in designing/developing production level machine learning architectures (using Flask or Redis)\nWe provide competitive compensation, generous benefits, and a professional atmosphere. XSELL fosters an entrepreneurial, results driven work environment where you will have the opportunity to be part of a collaborative, inclusive team and be able to grow and develop your professional career.\n\nXSELL Technologies is an Equal Employment Opportunity Employer and all employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law.\n\nLocation: Chicago, IL\n\nRole: Full-Time / Salaried
## 34 Human Factors Scientist\nID\n\n3336\n\nLocation\n\n\nPhoenix, AZ\n\nPractice/Center\n\n\nHuman Factors\n\nApply Now\n\nExponent is a leading engineering and scientific consulting firm. Our multidisciplinary team of scientists, engineers, physicians, and regulatory consultants brings together more than 90 different disciplines to solve complicated problems facing corporations, insurers, government entities, associations and individuals. Our approximately 1000 staff members work in 26 offices across the United States and abroad. Exponent has over 800 consultants, including more than 500 that have earned a doctorate in their chosen field of specialization.\n\nExponent's Human Factors Practice is seeking a Scientist for our Phoenix, AZ office. This position will be responsible for analyzing human performance related to a wide variety of real world scenarios involving automobiles, occupational work and consumer products, and for serving as project manager on a potentially large number of such engagements.\n\nAdditional responsibilities will include:\nSupporting a wide range of consulting activities that involve human factors in accidents and their prevention, including limitations of human perceptual, cognitive, and response capabilities\nDeveloping client contacts and helping to grow future business opportunities\nWorking on multi-disciplinary projects spread across the nation\nProviding case management, data processing, and analytical project support\nConducting research to obtain and review technical data, scientific literature, and standards\nParticipating in the design and evaluation of warnings, consumer products, and safety information\nThe ideal candidate for this position has a strong interest in applying their training to human performance questions involving operator behavior, occupational safety, consumer-product interaction, and environment navigation, and also possesses a strong scientific background in the foundations of cognition, perception, and human decision-making. Candidates should furthermore be comfortable working in a collaborative and fast-paced professional engineering and scientific consulting environment, which demands a high degree of self-motivation, dedication, and resilience to stress.\n\nQualifications for this position include:\nPh.D. in Experimental or Cognitive Psychology/Neuroscience\nExcellent written and verbal communication skills as well as strong interpersonal skills\nStrong mathematical aptitude and proficiency with statistics\nSolid foundation in science of cognition, perception, and human decision making\nConsummate project management skills and ability to juggle multiple teams concurrently with an eye for detail\nMust be assertive, a self-starter, and able to work with accuracy under pressure and/or\nTo learn more about life at Exponent, check out our Graduate Students page at www.exponent.com/careers/grad-students!\n\nWe are an Affirmative Action, Equal Employment Opportunity, Veterans and Disabled Employer.
## 35 Job Title : Data Scientist\n\nJob Location : Washington DC USA\n\n\n\nYou'll need to have:\nBachelor's degree or eight or more years of work experience.\nSix or more years of relevant work experience.\nExperience working with and creating data architectures.\nExperience using statistical computer languages (Python, Scala, PySpark, Java, SQL, etc.) to manipulate data and draw insights from large data sets.\nExperience with distributed data/computing tools: Tez, Map/Reduce, Hadoop, Hive, Spark etc.\nStrong problem solving skills with an emphasis on product development.\nExcellent written and verbal communication skills for coordinating across teams.\nA drive to learn and master new technologies and techniques.\nNote: Only USC\n\n]]>
## 36 Join our team dedicated to developing and executing innovative solutions in support of customer mission success.\n\nJob Description:\n\nNovetta is seeking a Data Scientist who wants to develop innovative solutions for customers and internal product teams. We look to rapidly prototype solutions and deploy the most promising of them. We identify and leverage the latest techniques (fast.ai is a team favorite) so that our customers can stay one step ahead. On every project you'll learn something new (and likely teach us something as well). If that sounds appealing to you - we'd love to chat.\n\nResponsibilities include:\nDevelop solutions spanning multiple subject areas, from NLP to Image and Video.\nMaintain awareness of state-of-the-art machine learning and techniques, methods and platforms, including commercial and open source.\nImplement, configure and test machine learning and deep learning libraries and platforms (e.g. fast.ai, TensorFlow, Keras, XGBoost, LightGBM).\nTest solutions on AWS using services such as SageMaker, EC2, and Snowball Edge.\nWrite blog posts and presentations that clearly communicate complex machine learning concepts to both technical and non-technical audiences.\nContribute to visually-appealing, web-enabled prototype applications that illustrate relevant machine learning capabilities.\nBasic Qualifications:\nExperience with Python\nExperience with machine learning or statistics\nAbility to work both independently and collaboratively.\nHigh levels of curiosity, creativity, and problem-solving capabilities.\nStrong written and verbal communication skills.\nComfortable navigating the command line.\nDesired Skills:\nResearch experience in Machine Learning specific to Natural Language Processing, Computer Vision, or deep learning.\nExperience with managing data and creating algorithms using AWS.\nExperience with R, Java, or other programming languages.\nSecurity Clearance:\nMust be eligible to obtain and maintain a TS/SCI with Poly clearance\n\nNovetta, from complexity to clarity.\n\nNovetta delivers highly scalable advanced analytics and secure technology solutions to address challenges of national and global significance. Focused on mission success, Novetta pioneers disruptive technologies in machine learning, data analytics, full-spectrum cyber, cloud engineering, open source analytics, and multi-INT fusion for Defense, Intelligence Community, and Federal Law Enforcement customers. Novetta is headquartered in McLean, VA with over 1,000 employees across the U.S.\n\nOur culture is shaped by a commitment to our core values:\n\nIntegrity â\200¢ We hold ourselves accountable to the highest standards of integrity and ethics.\n\nCustomer Success â\200¢ We strive daily to exceed expectations and achieve customer mission success.\n\nEmployee Focus â\200¢ We invest in our employees' professional development and training, respecting individuality and fostering a culture of diversity and inclusion.\n\nInnovation â\200¢ We know that discovering new and innovative ways to solve problems is critical to our success and makes us a great company.\n\nExcellence in Execution â\200¢ We take pride in flawless execution as we build a company that is best in class.\n\n\nEarn a REFERRAL BONUS for the qualified people you know.\nFor more details or to submit a referral, visit bit.ly/NovettaReferrals.\n\nNovetta is an equal opportunity/affirmative action employer.\nAll qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected veteran status, age, or any other characteristic protected by law.
## 37 Job Description\n\n\nOversight is a leading provider of cloud-based artificial intelligence solutions automating and analyzing financial payment transactions to identify fraud, non-compliant purchases, and wasteful spending. Oversight analyzes over $2 trillion in expenditures annually at Fortune 5000 companies and government agencies worldwide.\n\nIn this position as a Data Scientist, you will be working with a team of skilled and motivated data scientists and machine learning engineers designing and building AI solutions to expand our product line.\n\nResponsibilities\n\n\nOur team is at the fore front of building, operating and supporting business systems with machine learning at their core.\n\nAs a Data Scientist you will be incorporating machine learning into our products in order to transform how companies find and manage spend risk throughout their organization. You will collaborate with cross functional teams on complex data science problems and be responsible for solution design, data preparation, model building, and model management.\n\nUltimately, weâ\200\231re looking for people who are excited by machine learning, data and technology, love to solve problems, constantly challenge themselves to provide the best user experience, can work well alone, have a wide range of skills and are able and willing to constantly learn.\n\nSkills\n\n\nTechnical Skills/Requirements\n3+ years real world experience in a data science or equivalent role\nExtensive experience designing and maintaining a machine Learning pipeline â\200“ ETL, feature engineering, modeling, predicting, explaining, deploying and ongoing diagnostics\nExcellent pattern recognition and predictive modeling skills\nExperience communicating complex models and designing\nStrong experience with data processing and data analytics\nExperience analyzing a wide variety of structured and unstructured data\nStrong communication and data presentation skills\nExperience in both R&D and commercial software product development environment is a plus\nTechnologies\nExpertise in python, scikit-learn, and pandas\nExperience with Deep learning frameworks (FastAI , Keras, PyTorch etc.) preferred\nSQL\nLinux\n\nInterested and qualified candidates should submit a resume with salary requirements to recruiter@oversightsystems.com.\n\nAbout Oversight\n\nThe worldâ\200\231s largest companies and government agencies trust Oversight to identify enterprise spend risk no matter where it resides in their organization. Oversightâ\200\231s AI-powered platform drives financial transformation by reducing audit effort and eliminating data silos to monitor and analyze 100% of spend transactions with unparalleled accuracy. With full visibility into spend, Oversight customers find and prioritize employee-initiated and third-party spend risk that would otherwise go undetected by in-house processes. By identifying process breakdowns and making corrections early, Oversight helps optimize budgets, reduce out-of-policy spending by 70% while maximizing audit efficiency and eliminating cash leakage.\n\nOversight is an equal opportunity employer.\n\nJob Applicant Privacy Notice
## 38 The Senior Data Scientist will build and improve analytic pipelines to consolidate multiple data sources, perform analytic processing, and produce actionable business information (e.g., top-10 lists, pattern-matches, exceptions, or lists of #39;needles in the haystack#39;). S/he will apply hands-on development skills and experience as an expert in analytics, data science, pattern recognition, and
## 39 Job Posting Title:\n\nInformation Systems Engineering Specialist (Engineering Scientist)\n\n----\n\nHiring Department:\n\nApplied Research Laboratories\n\n----\n\nPosition Open To:\n\nAll Applicants\n\n----\n\nWeekly Scheduled Hours:\n\n40\n\n----\n\nFLSA Status:\n\nExempt\n\n----\n\nEarliest Start Date:\n\nImmediately\n\n----\n\nPosition Duration:\n\nExpected to Continue\n\n----\n\nLocation:\n\nPICKLE RESEARCH CAMPUS\n\n----\n\nJob Description:\n\nSupport operation and development of networked remote sensor systems and computer infrastructure.\n\n----\n\nJob Details:\n\nResponsibilities\nDevelop and manage automated scripts for remote sensor system monitoring, and data transfer and archiving.\nArchitect and design new networked processing systems.\nPerform system administration of existing remote sensor systems and computer infrastructure.\nOther related functions as assigned including occasional travel to support field testing and meetings.\nRequired Qualifications\n\n\nBachelor's degree in engineering, computer and information science or other applied sciences and three years of experience in the same. Experience in the design of networked information systems. Demonstrated ability in Linux OS system administration (including OS/software installation and configuration) and maintenance. Demonstrated ability in scripting (e.g. Bash, cron) to automate system operation and monitoring. Ability to work independently with sensitive and confidential information, maintain a professional demeanor, work as a team member without daily supervision and effectively communicate with diverse groups of clients. Able to work under pressure and accept supervision. Regular and punctual attendance.\n\nU.S. Citizen. Applicant selected will be subject to a government security investigation and must meet eligibility requirements for access to classified information at the level appropriate to the project requirements of the position.\n\nPreferred Qualifications\n\n\nMaster's degree in one of the above areas. Ten or more years of related experience. Experience in performing network performance analysis, establishing and maintaining computer clusters, administration/development of classified information systems, cloud development, GIT, CMake/make, FAI, Debian packaging system, Python, C++. Cumulative GPA of 3.0.\n\nGeneral Notes\n\n\nAn agency designated by the federal government handles the investigation as to the requirement for eligibility for access to classified information. Factors considered during this investigation include but are not limited to allegiance to the United States, foreign influence, foreign preference, criminal conduct, security violations, drug involvement, the likelihood of continuation of such conduct, etc.\n\nPlease mark "yes" on the application question that asks if additional materials are required. Failure to attach all additional materials listed below may result in a delay in application processing.\n\nSalary Range\n\n\n$82,000-$129,000+/negotiable depending on qualifications.\n\nWorking Conditions\nStandard office conditions\nUse of manual dexterity\nRepetitive use of a keyboard at a workstation\nSome weekend, evening and holiday work\nPossible interstate/intrastate travel.\nRequired Materials\nResume/CV\n3 work references with their contact information; at least one reference should be from a supervisor\nLetter of interest\nImportant for applicants who are NOT current university employees or contingent workers: You will be prompted to submit your resume the first time you apply, then you will be provided an option to upload a new Resume for subsequent applications. Any additional Required Materials (letter of interest, references, etc.) will be uploaded in the Application Questions section; you will be able to multi-select additional files. Before submitting your online job application, ensure that ALL Required Materials have been uploaded. Once your job application has been submitted, you cannot make changes.\n\nImportant for Current university employees and contingent workers: As a current university employee or contingent worker, you MUST apply within Workday by searching for Find UT Jobs. If you are a current University employee, log-in to Workday, navigate to your Worker Profile, click the Career link in the left hand navigation menu and then update the sections in your Professional Profile before you apply. This information will be pulled in to your application. The application is one page and you will be prompted to upload your resume. In addition, you must respond to the application questions presented to upload any additional Required Materials (letter of interest, references, etc.) that were noted above.\n\n----\n\nEmployment Eligibility:\n\nRegular staff who have been employed in their current position for the last six continuous months are eligible for openings being recruited for through University-Wide or Open Recruiting, to include both promotional opportunities and lateral transfers. Staff who are promotion/transfer eligible may apply for positions without supervisor approval.\n\n----\n\nRetirement Plan Eligibility:\n\nThe retirement plan for this position is Teacher Retirement System of Texas (TRS), subject to the position being at least 20 hours per week and at least 135 days in length.\n\n----\n\nBackground Checks:\n\nA criminal history background check will be required for finalist(s) under consideration for this position.\n\n----\n\nEqual Opportunity Employer:\n\nThe University of Texas at Austin, as an equal opportunity/affirmative action employer, complies with all applicable federal and state laws regarding nondiscrimination and affirmative action. The University is committed to a policy of equal opportunity for all persons and does not discriminate on the basis of race, color, national origin, age, marital status, sex, sexual orientation, gender identity, gender expression, disability, religion, or veteran status in employment, educational programs and activities, and admissions.\n\n----\n\nPay Transparency:\n\nThe University of Texas at Austin will not discharge or in any other manner discriminate against employees or applicants because they have inquired about, discussed, or disclosed their own pay or the pay of another employee or applicant. However, employees who have access to the compensation information of other employees or applicants as a part of their essential job functions cannot disclose the pay of other employees or applicants to individuals who do not otherwise have access to compensation information, unless the disclosure is (a) in response to a formal complaint or charge, (b) in furtherance of an investigation, proceeding, hearing, or action, including an investigation conducted by the employer, or (c) consistent with the contractorâ\200\231s legal duty to furnish information.\n\n----\n\nEmployment Eligibility Verification:\n\nIf hired, you will be required to complete the federal Employment Eligibility Verification I-9 form. You will be required to present acceptable and original documents to prove your identity and authorization to work in the United States. Documents need to be presented no later than the third day of employment. Failure to do so will result in loss of employment at the university.\n\n----\n\nE-Verify:\n\nThe University of Texas at Austin use E-Verify to check the work authorization of all new hires effective May 2015. The universityâ\200\231s company ID number for purposes of E-Verify is 854197. For more information about E-Verify, please see the following:\nE-Verify Poster (English) [PDF]\nE-Verify Poster (Spanish) [PDF]\nRight To Work Poster (English) [PDF]\nRight To Work Poster (Spanish) [PDF]\n----\n\nCompliance:\n\nEmployees may be required to report violations of law under Title IX and the Jeanne Clery Disclosure of Campus Security Policy and Crime Statistics Act (Clery Act). If this position is identified a Campus Security Authority (Clery Act), you will be notified and provided resources for reporting. Responsible employees under Title IX are defined and outlined in HOP-3031.\n\nThe Clery Act requires all prospective employees be notified of the availability of the Annual Security and Fire Safety report. You may access the 2018 report here or obtain a copy at University Compliance Services, 1616 Guadalupe, Suite UTA 2.206, Austin, TX 78701.
## 40 About Us\n\n\nInterested in working for a human-centered technology company that prides itself on using modern tools and technologies? Want to be surrounded by intensely curious and innovative thinkers?\n\nSeeking to solve complex technical challenges by building products that work for people, meet and exceed the needs of businesses, and work elegantly and efficiently?\n\nModeling ourselves after the 1904 World's Fair, which brought innovation to the region, 1904labs is seeking top technical talent in St. Louis to bring innovation and creativity to our clients.\n\nOur clients consist of Fortune 500 and Global 2000 companies headquartered here in St. Louis. We partner with them on complex projects that range from reimagining and refactoring their existing applications, to helping to envision and build new applications or data streams to operationalize their existing data. Working in a team-based labs model, using our own flavor of #HCDAgile, we strive to work at the cutting edge of technology's capabilities while solving problems for our clients and their users.\n\nThe Role\n\n\nAs a Data Scientist with 1904labs, you will be working with other data scientists, data engineers, and developers to provide analytical services to clients. You will leverage the clients' data assets to answer business questions using best practices and innovative approaches. This will mean employing various data science tools (Python, apache spark, TensorFlow) and techniques (dimensionality reduction, unsupervised learning, supervised learning) to create and test machine learning algorithms. Once tested and verified you will work with development teams to implement the algorithm as a scalable, hosted service.\n\nWe're looking for an experienced and creative data scientist who can conceptualize a project from start to finish, identifying the appropriate data and methodologies to use. Our Data Scientists work with open-source machine learning packages and libraries, so this position involves building custom ML/AI utilities from open-source resources in languages such as Python (preferred) and R. In this role, you will own and share technical solutions with both technical and non-technical stakeholders across multiple teams both internally and with external clients.\n\nResponsibilities\nPerform data analysis to understand the right combinations of data to meet outlined objectives\nTranslate client queries into actionable data pulls and help translate outputs into client strategies\nBuild and evaluate predictive modeling & machine learning utilities to produce meaningful outcomes that enable data-led decisions\nTranslate analysis results into actionable insights\nPresent insights to both technical and non-technical audiences\nPartner with internal data scientists, developers, and tech teams to develop new methodologies and utilities\nWork with development teams to implement data science algorithms as scalable, hosted services\nYour Skills\nRequirements\nBackground in math including linear algebra, statistics, probability, and numerical analysis.\nMachine Learning Analysis: Must be able to execute, evaluate, and apply various models such as logistic regression, random forests/decision tree, nearest neighbor, neural net, support vector machine, an ensemble of multiple models, etc\nProgramming: Proficient coding in a language suited for machine learning systems such as Python or R (not reliant on GUI-based systems to execute analysis) for the purpose of cleaning, manipulating, visualizing, and analyzing data\nDesired Skills\nExperience with software engineering and developing deployable API services\nMust be adaptable and have the drive to learn new technologies and frameworks to support developing client solutions\nUsing browser based interactive computational environments (like Jupyter Notebook) to perform analysis activities and test / evaluate data models and algorithms.\nTools: Spark, Pandas, SciKit-Learn, TensorFlow, Keras, SQL\nCommunication: Must be able to explain analysis process and translate results into actionable insights for both technical and non-technical audiences\nTeamwork: Must be accountable for individual responsibilities while working collaboratively with development and engineering teams to achieve project deliverables\nComfort in an agile development environment (eg. writing stories, participating in workshops, sprint planning and retros)\nPerks\nBenefits Program (medical, dental, life insurance, 401(k), professional and personal development, PTO)\nInnovation Time - we allow 10% of your time to be devoted to innovation hours. This time can be used to foster individual ideas, personal projects, start up ideas, improve an open source tool or for career advancement and self-education. All during traditional working hours.\nDress Code - we don't have one\nThese roles are located in St. Louis, MO. While we would prefer local candidates your location is not the most important factor; please help us understand why you would like to call St. Louis home if you would be relocating.\n\n]]>
## 41 Date Posted\n2018-10-22\n\nLocation\nVarious Locations\n\nJob Title\nData Scientist\n\nJob ID\nIMGDS18\nApply this Job\n\nJob Description\n\nIMG Systems, Inc. seeks Masterâ\200\231s+1yr/Bachelorâ\200\231s+5yrs exp/equiv. Data Scientist (IMGDS18): Data Visualization, Natural Language Processing (NLP), Machine Learning (ML), Python, Numpy. Mail resume with job ID to: HR, 400 Chisholm Place, Suite 414, Plano, TX 75075. Travel to unanticipated work sites throughout the U.S. Foreign equiv. accepted.
## 42 Location: Redmond, WA\nClient: Microsoft (Un-Managed)\n\nJob Description:\nData Analyzer: strong problem-solving skills and hands-on coding skills with\nSQL, python,\nDeep Learning Modeler: understanding NLP embedding, Vision, transformer, etc\nRecommendation algorithms/IR: Understand IR or ML really well can explain MF,\nlatent, KNN, Community cluster\nApplied researcher: who can code really well and prepare training data him/\nherself.\nPhD Degree is mandatory in any stream.\n\nJaya Krishna\nTalent Acquisition Specialist
## 43 Candidates should have the following background, skills and characteristics: Â Experience developing analytic applications using Java/Scala and related technologies in the Spark/Hadoop ecosystem Seasoned experience with Big Data programming frameworks, such as Apache Spark and Kafka Expert-level proficiency in Scala and Java is must Experience in machine learning techniques, large scale optimization and building/supporting production ML systems is a big plus Bachelorâ\200\231s degree in a quantitative discipline required (masters or PhD preferred) Â Â
## 44 â\200¢ 3+ years related a professional experience â\200¢ Proven achievements resulting from predictive modeling â\200¢ SAS programming experience/certifications, ideally including time series PROC methods â\200¢ Degree in operations research, industrial engineering, computer science, applied math, physics, economics, or other quantitative science (graduate degree a plus) â\200¢ Experience with languages used for querying (e.g. SQL/Hive) â\200¢ Proven ability to succeed in both collaborative and independent work environments â\200¢ Ability to leverage experience to scale and validate models â\200¢ Experience working with business partners to validate the output of analytical models
## 45 *Organization and Job ID**\nJob ID: 310918\n\nDirectorate: National Security Directorate\n\nDivision: Computing & Analytics\n\nGroup: Applied Statistics & Computational Modeling\n*Job Description**\nThis position is for a Scientist in Data Science and Applied or Mathematical Statistics who will provide scientific and technical research within the National Security Directorate (NSD) supporting the data analytics capabilities of the Computational Analytics Division. The Scientist will develop and apply fundamental mathematical or statistical theory to advance analytic solutions in various scientific domains. The Scientist must be able to draw from a strong statistical background, in areas including experimental design, advanced statistical modeling (e.g. mixed linear models, nonparametric models, etc.), machine learning, and computer simulations, to recognize patterns, characterize uncertainty, and develop predictive models using structured and unstructured data. The Scientist will produce solutions driven by domain science and mathematical statistical science from complex and high-dimensional datasets and design, develop, and evaluate advanced algorithms that lead to optimal value extraction from data. In addition to technical research this position expects participation and career development in task management, proposal writing, business development, and publishing.\n\nOperating on the data-information-knowledge continuum, staff at PNNL employ diverse methods to confront significant problems of national interest from distilling distributed data into knowledge that supports decision processes to enabling resilient technologies that enhance computing at extreme scales. Our research portfolio spanning from basic to applied includes statistical modeling and experimental design, applied statistics, applied mathematics, machine learning, operations research, optimization, and other advanced statistical and mathematical domains.\n\nComputing researchers and practitioners work side by side to apply advanced theories, methods, algorithms, models, evaluation tools and testbeds, and computational-based solutions to address complex scientific challenges affecting energy, biological sciences, the environment, and national security. Core domain knowledge is beneficial, such as in the nuclear, biological, energy, materials, or chemical science spaces.\n*The hiring level will be determined based on the education, experience, and skill set of the successful candidate based on the following:**\n*Level II** : Leads specific tasks of the project to meet scope, schedule and budget. Expected to contribute professionally, building a professional reputation for technical expertise. Fully applying and interpreting standard theories, principles, methods, tools and technologies. Contributes technical content to proposals and develops business through excellent project performance.\n*Level III:** Manages small to moderate projects and/or major project tasks. Integrates intellectual and technical capabilities of work teams. Enhances technical/professional skills of junior staff through active mentoring and training. Generates ideas for new proposals and participates in business development activities\n*Minimum Qualifications**\n+ BS/BA with 2 years of experience, MS/MA with 0 years of experience, or PhD with 0 years of experience\n*Preferred Qualifications**\n+ BS/BA with 5 years of experience, MS/MA with 3 years of experience, or PhD with 1 year of experience, focused in statistics with experience involving increasing levels of scientific research, task management, and programmatic responsibility\n\n+ Position requires ability to apply theories and develop technical approaches with minimal oversight\n\n+ Position requires the ability to effectively team with scientists and engineers to develop creative solutions to complex problems\n*Equal Employment Opportunity**\nBattelle Memorial Institute (BMI) at Pacific Northwest National Laboratory (PNNL) is an Affirmative Action/Equal Opportunity Employer and supports diversity in the workplace. All employment decisions are made without regard to race, color, religion, sex, national origin, age, disability, veteran status, marital or family status, sexual orientation, gender identity, or genetic information. All BMI staff must be able to demonstrate the legal right to work in the United States. BMI is an E-Verify employer. Learn more at jobs.pnnl.gov.\n*_Please be aware that the Department of Energy (DOE) prohibits DOE employees and contractors from participation in certain foreign government talent recruitment programs. If you are offered a position at PNNL and are currently a participant in a foreign government talent recruitment program you will be required to disclose this information before your first day of employment._**\n*Other Information**\nDue to business needs and client space, US Citizenship is required:\n\nThe Pacific Northwest National Laboratory is subject to the Department of Energy Unclassified Foreign Visits & Assignment Program site, information, technologies, and equipment access requirements.\n\n_Directorate:_ _National Security_\n\n_Job Category:_ _Scientists/Scientific Support_\n\n_Group:_ _Appld Stats & Comp Modeling_\n\n_Opening Date:_ _2020-06-03_\n\n_Closing Date:_ _2020-09-01_
## 46 Read what people are saying about working here. \n\nJob Description:\n\nPosition Summary\n\nData Scientist role to work on model building and feature engineering for client protection AI/ML Projects.\n\nRequired Skills\n\nData Science, Machine Learning, Data Analytics\n\nDesired Skills\n\nNeural Network/Deep learning algorithms\n\nMachine learning algorithms (supervised, unsupervised learning)\n\nFeature engineering/Data Prep\n\nExperience in Python, Tensorflow, SparkML etc.\n\nShift:\n\n1st shift (United States of America)\n\nHours Per Week:\n\n40\n Bank of America Corporation is a bank holding company. Through its banking subsidiaries (the Banks) and various non-banking subsidiaries ...
## 47 Data Scientist 2 years Qualifications and Skills: ·      3+ years work experience in a big data domain including, data visualization, machine learning, data mining, preparation and modeling in both regression and classification. At least one significant work project using Natural Language Processing ·      Expert ability with Python and R data science libraries to include at least Pandas, SciKit Learn, Keras, and MLlib.
## 48 Position: Sr Data scientist Location:Â San Francisco, CA Contract: 6+months Senior Data Scientist with around 10 yrs. experience. Proven leadership experience in the domain of Data Science and Machine Learning/Artificial Intelligence Demonstrating strong programming skills in large-scale data analysis using Java, R, Python or related software Leveraging strong math skills and statistical knowledge to advanced data mining and data analysis activities related to next generation Cloud technologies Hands-on programming experience with one or more of the following: Java, Python, R, or related languages
## 49 Position:  Data Scientist(Analyst) Location:   Mclean, VA Duration:    Long Term  Required Skills: Python, AWS  Job Description: Credit Review Team-First line and Second line Focus on the models in production Moved to open source at Capital One-need to have experience in open Source software, Python, and R AWS-nice to have Should have model build experience-able to identify model risk Technical writing part is important Yearsâ\200\231 experience for Python-at least 2 years, depends on how in depth theyâ\200\231ve used it Data engineering skills-not required As long as they can identify the model and risk it doesnâ\200\231t have to be industry specific
## 50 POSITION PURPOSE:\n\nThe Data Architect/Data Modeler will be part of the OWI Analytic team. Our team works closely with our business partners, proactively developing and delivering analytic solutions.The Primary Deliverables include: Data architecture, data workflow logical models, physical source data models, data profiling reports and data audits, data definitions, data\ntaxonomy, data metrics, meta-data, data lineage, data cleansing services. Functions of our data team is to development and design, data quality management, reference and master data management, data integration, data governance.\n\nOWI is a fast-paced and dynamic organization. Individuals will be handling multiple activities in a team as well as\nindividual contributor environment.\n\nDUTIES, TASKS, AND RESPONSIBILITIES:\n\n·Activities: data modeling, data audit, detailed data design, define rules, establish master data solutions, define hierarchies, distribute reference and master data, profile, analyze, assess data quality, design data quality management procedures, clean and correct data quality defects, establish data governance.\n\n·Experienced utilizing HANA studio data modeler which should include initial setup, calculations and, calculation and analytical views, data base schemas, creating attribute user/role creation.\n\n·Setting up security, Analytic privileges, user integration services, SQL script, SLT, SDA, SDI and HANA Live solution models.\n\n·Demonstrate critical thinking, analytical skills, and employ judgment to offer thoughtful, concise input toward resolutions of problems.\n\n·Assist team building the BW4HANA modeler including standard and custom development and deployment of EWD design building BW objects e.g. Composite Providers, Advanced DSOâ\200\231s, ODS Views, Data Sources, Data Loading using SLT.\n\n·Be able to translate data requirements into business processes and reverse engineer business processes into data requirements Comprehension of DevOps and Agile development and application to data centric architecture and solutions\n\n·Leadership skills needed to successfully promote ideas, coordinate work activities, and plan deliverables\nwithin a project team.\n\nSKILLS, KNOWLEDGE, AND ABILITIES:\n\n·Advanced expert database handling languages (SQL and variants)\n\n·Expert in data modeling (physical and logical), ER diagrams, data dictionary, data map, normalize de-normalize,\nagile data modeling\n\n·Practical skill sets in BI and Data visualization tools\n\n·Knowledge of other programming languages and ability to write code (i.e. VBA, python, XML)\n\n·Working knowledge of ETL\n\n·Working knowledge of SAP\n\n·Basic Project Management and Business Process modeling\n\n·Minimum Bachelorâ\200\231s Degree in Engineering Technology, Computer Science, or closely related field.\n\n·A minimum of 3 yearsâ\200\231 experience in SAP Data Management and Governance, SAP Data Services (Data\nExtract, Data Profiling, de-duplication)\n\n·Advanced competency in SAP HANA, SQL Server Database, Tableau and Alteryx\n\n·Experience in transactional modeling and dimensional data warehouse model development with at least 4 years developing in HANA.\n\n·Work with other IM data modelers and ETL developers to design data loads into HANA\n\n·Excellent verbal and written communication skills with a strong focus on the ability to clearly articulate and discuss technical issues with both technical and business personnel.\n\n·Bachelor degree in Computer Science, Math or a related IT field or equivalent work experience in an IT field required\n\n·Strong analytical skills, able to effectively solve problems in a timely manner
## 51 Formation provides personalization for the largest enterprise businesses in the world. We work with high volume data streams to deliver tailored experiences and orchestrate physical and digital exchanges into a seamless journey. Our Data Science Team is transforming the way people engage across multiple industries.\n\nYou'll collaborate closely with engineers and share responsibility throughout the product life-cycle. You'll work in small, self-sufficient teams with a common goal: deliver excellent data science solutions anchored in a culture of quality, delivery, and innovation.\n\nAs a Data Scientist at Formation you're capable of working in an agile data science environment to generate and test hypotheses quickly. You're also passionate about improving, optimizing, and developing new reinforcement learning (RL) strategies to enhance the value of our platform.\n\nKey Responsibilities:\nApply RL and statistical analysis to complex, real-world problems through massive data sets\nIndependently design, test, and productionize RL-based experimentation to refine customer strategies\nCollaborate with colleagues in product and customer success roles, sharing responsibility throughout the product life-cycle\nAbility to explore an unfamiliar and large data sets with big data tools such as Hive or Spark\nPresent methodology and results to external stakeholders and Fortune 500 clients\nSkills and Experience:\nMinimum 2 years experience as a Data Scientist, prior software development experience is a plus\nMaster's or Ph.D. in a relevant technical field (e.g. Computer Science, Mathematics, Statistics, Physics)\nPrior experience with RL frameworks such as TensorFlow or Vowpal Wabbit, and Spark is a plus\nMachine learning knowledge with a focus on contextual bandits, reinforcement learning, recommender systems, knowledge of common Data Science concepts related to e-commerce (e.g. lifetime value, net incremental revenue, churn) is a plus\nDemonstrated ability to communicate and collaborate with peers\nDemonstrated skills in result-driven problem solving\nAbout Formation\n\nFormation is the global leader in developing scalable solutions for individualized offers. We drive business results by building and deepening the relationship between big brands and their customers.\n\nWe use artificial intelligence (AI) and machine learning (ML) algorithms to constantly analyze an audience and fine-tune offers. Our approach enables offers to become smarter and more effective with each customer interaction, resulting in a better experience.\n\nFormation strongly believes differing perspectives + passionate discourse achieve the greatest outcomes. We give our whole selves and we are building a team we can learn from. We are committed to inclusion and diversity and we are an equal opportunity employer. All applicants will receive consideration without regard to race, color, religion, gender, gender identity, sexual orientation, national origin, disability, or veteran status.\n\nAlso, we are thrilled to be named one of the Top 50 startups by LinkedIn! Every member of our team makes Formation a special place to work and grow. Come join us and see for yourself!
## 52 This is a 3 months contract position with my client here in the Bay area. we are looking for someone with strong background in machine learning and statistical analysis. This position requires a person who will guide and lead the team. Data Science professional with hand on knowledge of buidling and tuning deep learning and statistical models in python. Strong data analysis and analytical skills.. If you are interested, please send me your resume asap with a salary expectation and your availability.. Please email resume to seema@tekvalley.com, OR ankur@tekvalley.com  Â
## 53 Position Description:\n\nWant to make a difference in our world. Mathematica applies expertise at the intersection of data, methods, policy, and practice to improve well-being around the world. We collaborate closely with public- and private-sector partners to translate big questions into deep insights that improve programs, refine strategies, and enhance understanding. Our work yields actionable information to guide decisions in wide-ranging policy areas, from health, education, early childhood, and family support to nutrition, employment, disability, and international development.\n\nWe are looking for aData Scientist in our Washington DC office. Data scientists lead data processing and analysis tasks, such as monitoring data quality, developing documentation, applying statistical and data science methods, and creating data visualizations. Data scientists are expected to work on multi-disciplinary teams, overseeing and mentoring junior data scientists. The work of our data scientists supports our company's core offerings in program evaluation and data analytics, which yield crucial evidence and information for policy and decision makers.\n\nData scientists contribute throughout the course of a project on tasks such as the following:\nLeading multidisciplinary teams to answer research questions or build solutions that involve linking health or healthcare data (e.g., Medicare claims or HCUP) to other administrative data (e.g., Hospital Compare files)\nDesigning, planning, and overseeing the data science workflow on tasks and projects, involving descriptive statistics, machine learning or statistical analysis, data visualizations, and diagnostics using programming languages such as R or Python\nCommunicating results to collaborative project teams using data visualizations and presentations using tools such as Markdown (e.g., R Markdown), notebooks (e.g., Jupyter or Databricks), or interactive visualizations (e.g., R Shiny or Dash).\nDeveloping and implementing systems to ingest, process, and manage datasets\nDeveloping and maintaining documentation using Atlassian Confluence and Jira\nImplementing quality assurance practices such as version control and testing\nLeading or supporting proposal sections or applications\n\nPosition Requirements:\nDemonstrated enthusiasm for applying data science and statistics to social impact projects in academic, extra-curricular, and/or professional settings\nAn excellent academic record, including courses in subjects such as statistics, data science, math, computer science, and/or social science, and the following credentials:\nPhD, or 3+ years of experience in a social policy field post Masters or immersive bootcamp (e.g., Metis)\nMastery of R or Python to manipulate data, conduct analyses, and create data visualizations\nAbility and desire to work independently as part of remote, interdisciplinary teams\nAbility and desire to mentor junior data scientists and contribute to Mathematicas growing health data science community of 50+ staff\nAbility to version code using Git\nStrong oral and written communication skills.\nNice-to-have Skills:\n\nExperience with reproducible research principles, interactive visualizations, tidyverse, AWS, Google Cloud Platform, R Shiny, R Markdown, pandas, healthcare claims and administrative data (e.g., Medicare, Medicaid, electronic health records, all-payer claims databases, HCUP), and/or scikit-learn.\n\nPlease submit a cover letter, resume, and salary expectations. You will be asked to attach these materials during the online application process. Please click the "Apply Now" icon after the Position Description to attach your documents. Letter of recommendations not expected or required.\n\nMathematica offers our employees competitive salaries and a comprehensive benefits package, as well as the advantages of being 100 percent employee-owned. As an employee stock owner, you will experience financial benefits of ESOP holdings that have increased in tandem with the companys growth and financial strength. You will also be part of an independent, employee-owned firm that is able to define and further our mission, enhance our quality and accountability, and steadily grow our financial strength.\n\nVarious federal agencies with whom we contract require that staff successfully undergo a background investigation or security clearance as a condition of working on the project. If you are assigned to such a project, you will be required to obtain the requisite security clearance.\n\nWe are an Equal Opportunity Employer and do not discriminate against any employee or applicant for employment because of race, color, sex, age, national origin, religion, sexual orientation, gender identity, status as a veteran, and basis of disability or any other federal, state or local protected class.
## 54 Cohere Health is simplifying healthcare for patients, their doctors, and all those who are important in a patient's healthcare experience. Our focus is to enable an efficient, transparent patient journey where patient goals are central to decision-making.\n\nWe are a mission-driven and fast-growing company obsessed with eliminating the wasteful friction patients and doctors experience, particularly for diagnoses that require expensive procedures or medications. To that end, we build products and services that ensure the appropriate plan of care is understood and expeditiously approved, so that patients and doctors can focus on health, rather than payment or administrative hassles.\n\nThis is a great opportunity for an outstanding data science professional to go in-house at a Series A healthcare technology company and learn what it takes to show the value of our products and services. You will work closely with analytics leadership, product and IT to support decision-making, and will dig into a wide range of strategic, product, and operational problems. The work will be fast-paced and project-based, with evolving needs - requiring scrappiness, flexibility, curiosity, and grace under pressure. Your work will enable Cohere to make the right investments in a critical stage for our company.\n\nThis role offers the potential to grow at Cohere in data science, or potentially move to other functions as well. You will contribute to healthcare analytics from the economic and quality perspective, help build an a company, as well as wear many hats. Our team values empowerment and is committed to developing our talent.\n\nWhat you will do:\nCollaborate cross-functionally to design appropriate data-use capabilities across a wide breadth of analytical needs\nUse your expertise in healthcare data to develop effective models to improve our product\nPartner with Engineering to deliver seamless modeling for product implementation\nWhat you will have:\n\n\n5-7 years model building and analytical experience at company where using ML/ AI were critical to the mission, preferably a healthcare company or allied health organization\nDirect experience partnering with Engineering to deliver the right analytical models to Product\nClear understanding of model building, model maintenance and the measures that optimize models for product use\nExcellent communication and collaboration skills\nProficient in R, SQL,Python and other common analytic/ data tools\nStrong knowledge of EMR data, SDoH, Claims, NLP use and other important healthcare related data sources\nProficient in current modeling approaches in machine learning and AI such as decision trees and Med2Vec, Patient2vec and other innovative approaches to analyzing data\nMaster's degree in STEM, public health, finance, economics, or other related field\nWe can't wait to learn more about you and meet you at Cohere Health!
## 55 Job Success Profile\n\nData Scientist\n\nBuckman is a privately held, global specialty chemical company with headquarters in Memphis, TN, USA, committed to safeguarding the environment, maintaining safety in the workplace, and promoting sustainable development. Buckman delivers exceptional service and innovative solutions to our customers globally in the pulp and paper, leather, and water treatment sectors to help boost productivity, reduce risk, improve product quality, and provide a measurable return on investment. Buckman is in the middle of a digital transformation of its businesses and focused on building the capabilities and tools in support of this.\n\nPosition Summary\n\n\nBuckman is seeking an experienced Data Scientist to lead the development of a Data Science program at our Memphis facility. You will work closely with Buckman stakeholders to derive deep industry knowledge across paper, water, leather, and performance chemical industries. You will help develop a data strategy for the company including collection of the right data, creation of the data science project portfolio, partnering with external providers, and augmenting capabilities with additional internal hires. A large part of the job is communicating and developing relationship with key stakeholders and subject matter experts to tee up proofs of concept projects to demonstrate how data science can be used to solve old problems in unique and novel ways. You will not have a large internal team to rely on, at least initially, so individual expertise, breadth of data science knowledge, and ability to partner with external companies will be essential for success. In addition to the pure data science problems, you will be working closely with a multi-disciplinary team consisting of sensor scientists, software engineers, network engineers, mechanical/electrical engineers, and chemical engineers in the development, and deployment of IoT solutions. If you like working for an entrepreneurial company with a Sustainability mission and digital ambitions at the core of its strategy, Buckman is the place for you.\n\nCompetencies Needed for Success\nBachelorâ\200\231s degree in a quantitative field such as Data Science, Statistics, Applied Mathematics, Physics, Engineering, or Computer Science\n5+ years of relevant working experience in an analytical role involving data extraction, analysis, and visualization and expertise in the following areas:\nExpertise in one or more programming languages R, Python, MATLAB, JMP, Minitab, Java, C++, Scala\nKey libraries such as Sklearn, XgBoost, GLMNet, Dplyr, ggplot, Rshiny\nExperience and knowledge of data mining algorithms including supervised and unsupervised machine learning techniques areas such as Gradient Boosting, Decision Trees, Multivariate Regressions, Logistic regressions, Neural Network, Random Forest, Support Vector Machine, Naive Bayes, Time Series, Optimization\nMicrosoft IoT/data science toolkit: Azure Machine Learning, Datalake, Datalake analytics, Workbench, IoT Hub, Stream Analytics, CosmosDB, Time Series Insights, PowerBI\nData querying languages (e.g. SQL, Hadoop/Hive) and Preferred Qualifications\nA demonstrated record of success with a verifiable portfolio of problems tackled\nPreferred Qualifications\nMasterâ\200\231s or PhD degree in a quantitative field such as Data Science, Statistics, Applied Mathematics, Physics, Engineering, or Computer Science\nExperience in the specialty chemicals sector or similar industry\nBackground in engineering, especially Chemical Engineering\nExperience starting up a data science program\nExperience working with global stakeholders\nExperience working in a start-up environment, preferably in an IoT company\nKnowledge in quantitative modeling tools and statistical analysis\nPreferred Work Style\nA strong business focus, ownership and inner self-drive to data science solutions to real-world customers with tangible impact.\nAbility to collaborate effectively with multi-disciplinary and passionate team members.\nAbility to communicate with a diverse set of stakeholders\nStrong planning and organization skills, with the ability to manage multiple complex projects\nA life-long learner who constantly updates skills\nLanguage: English\n\nLocation: Remote Work Opportunity - Flexible\n\nTravel: <10>
## 56 Acuity is seeking a Senior Research Statistician- Data Scientist to use statistical knowledge and research skills to solve business problems. The Research Statistician will develop, implement, and interpret statistical models with a strong emphasis on data mining and statistical/predictive modeling. Provides work direction.\nESSENTIAL FUNCTIONS:\nIdentifies and acquires additional data sources, both internal and external, that can be used to enhance analyses.\nLead the development of analytical models to drive superior business outcomes.\nDevelop in-depth understanding of drivers for optimization by utilizing statistics and data mining techniques.\nUsing latest PC tools, develop, enhance and monitor reports and models for other business areas.\nEvaluate and use Data Mining Tools.\nSupport, train, encourage, consult other areas in the company and provide actionable information to management.\nContinually monitor database information and future needs.\nExplore and acquire data from outside sources.\nUse database tools to support other departments\nRegular and predictable attendance.\nPerform other duties as assigned.\nEDUCATION:\nMasters/PhD in Statistics, Mathematics, Economics, Operations Research or Computer Science\nEXPERIENCE:\n5 years P&C insurance, modelling and project leader experience. Programming skills including SAS and R.\nOTHER QUALIFICATIONS:\nStrong interpersonal, quantitative, problem-solving, computer and conceptual skills.\nAptitude in predictive modeling, multivariate analysis, statistical modeling, data mining techniques and mathematical statistics.\nAbility to apply strong programming and data management skills.\nKnowledge of and experience with at least one major computer programming language or advanced syntax in a major statistical package.\nThis job is classified as exempt.
## 57 Role: Data Modeler/Data Scientist Location: Houston, TX. Â Job Description Develop entity relationships (ER models) for normalized and dimensional models Assist and guide existing Data Modelers and Data Architects Technical evaluation for assignments involving existing models for legacy applications and creation of new model in support of application development Work closely with the ETL, BI, DBA and other teams to develop appropriate data models Participates in collaborative working environment using the agile methodology Provide technical leadership and guidance to business partners, peers and vendor partners. OLAP Modelling
## 58 â\200¢ Should have data engineering and statistical modeling skills to conduct validation of credit models. (The models are mostly liner but some use advanced techniques like ensemble methods, gradient boosting and random forest.) â\200¢ Should be able to code in python or R to replicate sensitivity analysis or model monitoring metrics. â\200¢ Must have ability to test linear models using variable reduction techniques, conduct cross validation and tuning hyper-parameters. â\200¢ Must have experience with Sci-kit learn and Pandas.
## 59 The Data Analyst II is responsible for data entry and maintenance for our integrated enterprise system (SAP). As part of the Data Integrity team, this position is responsible for the accurate entry of data into SAP. The Data Integrity team is responsible for the efficient and effective management of part master data. This will require an individual with exceptional attention to detail, intermediate to advanced computer skills, in-depth knowledge of our business, products and processes.\n\nWORK PERFORMED\n\nTo perform this job successfully, an individual must be able to perform each essential duty satisfactorily. The requirements listed below are representative of the knowledge, skill, and/or ability required.\n\nEssential Functions of the Job:\n\nProduct Data â\200“ Responsible for managing product information levels to established standards\nMaintain data to meet company standards\nData includes images, technical specifications and other data elements\nEnsure automated data feeds successfully populate materials\nManually populate non-automated data as necessary via proprietary system\nIdentify system enhancements to improve automated processes and quality of information\nPerform audits to ensure accuracy of assigned manufacturers or categories\nCommunicate with providers and manufacturers to ensure complete product data is supplied\nRequest necessary information\nIdentify areas of improvement and provide feedback\nReview data standards for assigned manufacturers and/or categories\nImplement data standard improvements through manual and automated means to meet needs\nProvide customer service support for both external and internal customers\nSupport and validate teammate adherence to master data policies.\nMaterial Maintenance â\200“ Responsible for maintaining product catalog to established standards; Performing Price File Maintenance\nReview automated data feeds for successful materials creation\nEnsure materials are created correctly via download process\nIdentify and communicate reasons for automated failures to improve process\nManually process download failures\nDetermine if Sales requests meet Insight Catalog Policy\nReview opportunities and determine if it is appropriate business for Insight\nProvide guidance to address future opportunities\nEnsure materials are not added to the catalog that do not fit Insight strategy\nManually create non-automated materials proactively or reactively according to company policy\nCreate materials via proprietary systems and/or SAP\nDetermine appropriate source, costing and categorization\nProcess internal requests for creation\nDiscontinue materials that are no longer available and/or based on company policy\nOther projects â\200“ Responsible for completing additional projects and assignments as required by the department\nAll other assigned duties\nMINIMUM REQUIREMENTS\n\nEducation and/or Experience:\n\nAssociates degree or above from a college or university; or 3 years industry work experience.\n\nKNOWLEDGE, SKILLS, AND ABILITIES\n\nFollowing are the skills, knowledge and abilities necessary to perform this job\nAdvanced spreadsheet skills\nAbility to analyze and troubleshoot information for resolution\nAbility to work well in cross-functional teams\nAbility to multi-task in a fast-paced environment\nAbility to meet aggressive service-level agreements\nAbility to take a complex process or technology and create easy-to-follow copy\nExcellent written and verbal communication, with strong editing skills\nKnowledge of AP style\nVast knowledge of technical writing and styles associated with it\nCollaborating with marketing and IT on search engine relevancy as it relates to parts dataâ\200\231s influence\nProficiency with MS Word, Excel, Outlook and SAP.\nAbility to write reports and/or business correspondence.\nAbility to effectively present information and respond to questions from groups of managers, customers, other employees.\nAbility to effectively present information to other departments and manufacturer representatives.\nAbility to read, analyze, and interpret business documents.\nAbility to effectively communicate via email, phone and in person.\nAbility to understand processes and identify areas of improvement.\nAbility to multi-task and demonstrate strong organizational details.
## 60 The Business Intelligence Analyst is responsible for analyzing and reviewing data that is collected in our current and future products and services. Working with our Data Scientist team, the Business Intelligence Analyst would need to be able to take this data and apply to the markets our products are in for us to better serve our customers. This position will require a high degree of attention to detail along with ability to review calculations for quality assurance. The Business Intelligence Analyst must have good communication and organization skills, along with a working knowledge of day to day operations and priorities within their department and the company.\n\nTroubleshooting and problem solving issues, as well as working closely with managers and developers, to solve these problems, will be needed. Another responsibility could include creating internal reporting features to streamline information processing and sharing throughout the company. This position requires a high level of expertise with Microsoft Excel, and skills in SQL, Tableau, Microsoft Power BI, and/or R/Python are preferred. Assume other duties as assigned.
## 61 Our client is currently seeking a Data Scientist - C2H Main Criteria: Previous Predictive Analytics Experience, SAS, ETL, data processing, database programming and data analytics Extensive background in data mining and statistical analysis, Excellent pattern recognition and predictive modelling skills, Experience with programming languages such as Java/Python/R an asset. 5-7 years of experience required.
## 62 An agency located in San Jose, California is seeking a Data Scientist with 6+ years of industry work experience in data scientist projects. Skills/ Qualifications: Master's degree or higher in Statistics/Math/Computer Science or related field. Analysis of large amounts of historical data, determining suitability for modeling, data clean-up and filtering, pattern identification and variable creation Must showcase past work through published articles/GitHub/social media * SQL, Spark, R, Python, GitHub
## 63 Read what people are saying about working here. \n\n$4,372 - $10,478 a month\n\nPart-time\n\n(Job #19-07) Analyst/Programmer-12-Career, Data Scientist, $4,372-10,478/month. Appointments are typically made at the beginning of the salary range. This is a full-time, 12-month pay plan, exempt, permanent position with a one-year probationary period in the Office of Institutional Effectiveness. This position comes with an extensive benefits package that includes comprehensive medical, dental, and vision coverage, CalPERS retirement, Fee Waiver eligibility (reduced tuition on most CSU system classes), life insurance, and voluntary pre-tax health and dependent care reimbursement accounts. Additional benefits information can be found at https://hraps.humboldt.edu/employee-benefits.\n\nPosition Summary: The Office of Institutional Effectiveness (OIE) is responsible for the official collection, analysis, and reporting of institutional data on topics such as enrollment, retention and graduation rates and time to degree, faculty and staff characteristics, instructional and programmatic costs and productivity, financial peer comparisons, and other data for the purpose of providing accurate information to support transparent, evidence-based planning and decision making and a culture of assessment. Reporting to the Associate Vice President for Institutional Effectiveness, the Data Scientist collects, maintains, and analyzes the data needed to support evidence-based decision making by University administrators. This job utilizes broad knowledge of database structures, higher education data, and University practices. It requires the ability to create computer programs, synthesize data from a variety of sources, create and maintain relational and dimensional databases, and troubleshoot a variety of complex problems. The incumbent must be able to work independently as well as collaborate with other OIE staff and data knowledge experts from around campus.\n\nDuties: The Office of Institutional Effectiveness at Humboldt State University seeks a dynamic, collaborative, and organized individual to help fulfill its agenda. The duties of this position include, but are not limited to:\n\nResponsible for the continued maintenance, development, and expansion of the Officeâ\200\231s Strategic Data Repository (SDR) including creating the data structures and programming tools needed to extract, translate, load, and retrieve information\n\n-Create and maintain an ETL process, data extracts, and web reporting and associated statistical analyses, including short- and long-term enrollment projections\n\nSummarize and present data from the SDR and build models that project and interpret University data\n\nWork with other OIE staff and offices around campus to identify and extract data needed to support their efforts to make data-driven decisions\n\nMinimum Qualifications: (1) A basic foundation of knowledge and skills in technical information systems and application program packages, including a working knowledge of common software application packages, equipment platforms, reference database systems and sources, and training methods and a basic understanding of networks, data communication, and multimedia systems. This basic foundation may be obtained through EITHER a bachelorâ\200\231s degree in computer science, information systems, educational technology, communications, or related fields, OR similar certified coursework in applicable fields of study. Ability to (2) demonstrate competence in independently applying technical judgment to standard and nonstandard applications and systems, solving a wide range of problems and developing practicable and thorough solutions, and (3) effective communication and listening skills.\n\nRequired Knowledge, Skills, and Abilities:\n\nWorking knowledge of:\n\nRelational and dimensional database models\n\nOracle SQL+, PL/SQL\n\nSkills:\n\nManipulating data from a variety of sources\n\nTroubleshooting a variety of technical problems\n\nAbility to:\n\nOperate Windows and UNIX operating systems\n\nDesign data structures to maximize efficiency and flexibility\n\nDevelop and maintain systems in a combined PC/Unix environment that includes MS Access, Excel, Oracle, shell scripting, and web publishing and knowledge of Oracle SQL+, PL/SQL\n\nMaster new computer languages and systems\n\nDevelop statistical models to study enrollment trends and create projections predicting future enrollment\n\nCommunicate effectively, both orally and in writing, including technical documentation\n\nProvide summaries and descriptions of analyses tailored for a non-statistical audience\n\nDemonstrate a passion for public, higher education and its mission\n\nSupport and demonstrate a commitment to diversity and inclusion\n\nBe wholly committed to student success\n\nEstablish and maintain effective working relationships with staff, faculty and students from diverse backgrounds.\n\nPreferred Qualifications:\n\nExperience with University business practices, including recruiting, admissions, registration, financial aid, student support, human resources, and finance\n\nObtained advanced degree or one in progress which includes studies in Mathematics or Statistics\n\nApplication Procedure: To apply, qualified candidates must electronically submit the following materials via Interfolio (link below):\n\nLetter of Interest\n\nResume or Curriculum Vitae\n\nContact information for at least three professional references\n\nHSU Employment History Form (HSU Employment History Form: https://forms.humboldt.edu/employment-history-form)\n\nCLICK HERE TO APPLY NOW: http://apply.interfolio.com/60257\n\n(NOTE: Download the HSU Employment History Form and save it as a PDF file prior to filling it out. If the form is filled out on line (accessed via a web browser), content WILL NOT be saved).\n\nApplication Deadline: The deadline to submit application materials is 11:59 p.m. on Thursday, February 21, 2019. To be notified in the event this recruitment re-opens for a subsequent review of applications, send an email to careers@humboldt.edu that includes the job number (19-07) and applicantâ\200\231s last name in the subject line of the message.\n\nHSU is committed to enriching its educational environment and its culture through the diversity of its staff, faculty, and administration. Persons with interest and experience in helping organizations set and achieve goals relative to diversity and inclusion are especially encouraged to apply.\n\nIt is the responsibility of the applicant to provide complete and accurate employment information. Incorrect or improperly completed applications will not be considered for vacancies. Any reference in this announcement to required periods of experience or education is full-time activity. Part-time experience or education-or activities only part of which are qualifying-will receive proportionate credit. In accordance with applicable Collective Bargaining Agreements, preference may be given to the campus applicants covered by these agreements. However, positions are open to all interested applicants, both on and off campus.\n\nEvidence of required degree(s), certification(s), or license(s) is required prior to the appointment date. A background check (including a criminal records check, employment verification, and education verification) must be completed satisfactorily before any candidate can be offered a position with the CSU. Certain positions may also require a credit check, motor vehicle report, and/or fingerprinting through Live Scan service. Adverse findings from a background check may affect the application status of applicants or continued employment of current CSU employees who apply for the position.\n\nHumboldt State University is committed to achieving the goals of equal opportunity and endeavors to employ faculty and staff of the highest quality reflecting the ethnic and cultural diversity of the state. Additional information about Humboldt State University can be found at www.humboldt.edu.\n\nHumboldt State University is a Title IX/Affirmative Action/Equal Opportunity Employer. We consider qualified applicants for employment without regard to race, religion, color, national origin, ancestry, age, sex, gender, gender identity, gender expression, sexual orientation, genetic information, medical condition, disability, marital status, protected veteran status, or any other legally protected status.\n\nCompliance with the California Child Abuse and Neglect Reporting Act (CANRA) and CSU Executive Order 1083 Revised July 21, 2017 (EO 1083) is a condition of employment. CSU employees in positions with duties that involve regular contact with children or positions which supervise such employees are designated as Mandated Reporters under CANRA and are required to comply with the requirements set forth in EO 1083. Upon appointment to this position, the successful candidate(s) will be notified of and required to acknowledge their CANRA reporting status.\n\nAdditionally, all CSU staff and faculty receive training annually on their obligations in responding to and reporting incidents of sexual harassment and sexual violence. You will be notified by email when you are required to take this mandated training.\n The California State University offers unlimited opportunities to help students fulfill their personal and professional goals. \n\nAt our...
## 64 Job Title: - Sr. Data Scientist (Statistical & Python) Duration: 1+ Yrs (will get extend) Mod of interview â\200“ Skype /Face to face Location: Austin, TX  Description Excellent understanding of machine learning techniques and algorithms Experience with common data science toolkit Deep learning + Computer vision with experience of using Open CV libraries and Python/Java  ///////////***********Any query please call on 408 550 1287*******//////////////
## 65 Required skills: Data Science experience should have experience in Hadoop, Spark & Hive Building models for highly imbalanced data sets Advanced Statistics/Math Dynamic presentation presence Programming: Microsoft T-SQL SSIS, SSASÂ
## 66 PulsePointâ„¢, a global programmatic advertising platform with specialized healthcare expertise, fuses the science of programmatic targeting, distribution, and optimization with the art of brand engagement. The PulsePoint platform is powered by terabytes of impression-level data, allowing brands to efficiently engage the right audiences at scale while helping publishers increase yield through actionable insights.\n\nOur organization has a strong history of utilizing machine learning, contextualization, and targeting to distribute advertising to the right consumers at the right time and create real connections across the internet. We are now taking that knowledge and expertise to solve challenges within healthcare in order to create better health outcomes through Radical Health Personalizationâ„¢.\n\nThe goals of the PulsePoint Data Science team:\nOptimize and validate targeting mechanisms for specific health conditions;\nImprove and optimize our proprietary contextualization and recommendation engines that handle millions of transactions per second, trillions each month;\nImprove and optimize our buying platform to ensure cost efficiency and to deliver ad campaigns within budget, target and time constraints;\nCollaborate with internal Health experts to design and support rapid assessment, analysis, and prototyping of ideas for achievable commercialization.\nWhat you'll be working on:\nImprove existing or develop new traffic segmentation algorithms and estimations of bid landscapes within each segment;\nOptimize real-time bidding strategies to efficiently spend ad budgets delivering campaign targets given various constraints;\nSupport and enhance the existing work on health user profiling, prediction, and targeting tools;\nImprove page contextualizer technology: work with healthcare topics detection algorithms, keywords/phrases extraction, general and aspect-based sentiment analysis;\nContribute on projects relating to patient/physician identity for cross-device tracking, profiling and targeting;\nSupport existing codebase for data integration and production support for our core models.\n\n\n\nWhat you need to be successful in this role:\n3+ years of full-time experience working as a Statistician/ Machine Learning\nEngineer/ Data Scientist;\nAdvanced knowledge of Big Data technologies such as Hadoop, Hive/Impala and\nSpark;\nAdvanced knowledge of Python using the numpy/scipy/pandas/sklearn stack;\nAdvanced knowledge of classical ML models (logistic regression, decision trees,\nboosting, bagging, SVM, Bayesian methods, etc) and at least basic knowledge in\ndifferent Neural Network models (CNN, RNN, auto-encoders, transformers);\nBeing confident user of Unix-like systems, Dockers, git, bash;\nMS/PhD in Applied Mathematics, Statistics, Machine Learning, Computer Science,\nPhysics; or BS with several years of applied machine learning experience\n\nWhat we offer:\nSane work hours\nGenerous paid vacation/company holidays\nVacation reimbursement, sabbatical, pawternity leave, marriage leave, honeymoon bonus\nComprehensive healthcare with 100%-paid medical, vision, life & disability insurance\n$2,000 annual training and development budget\nComplimentary annual memberships to One Medical, NY Citi Bike and SF Ford GoBike\nMonthly chair massages\nFree fitness classes (spin, yoga, boxing)\nGym reimbursement, local gym membership discounts\nOnsite flu shots, dental cleanings and vision exams\nAnnual company retreat\nPaid parental leave and a lot of new parent perks\nEmergency childcare credits\n401(k) Match and free access to a financial advisor\nVolunteer Time Off and Donation Matching, ongoing group volunteer opportunities\nTeam lunches, Sip & Social Thursdays, Game Nights, Movie Nights\nHealthy snacks and drinks\nAnd there's a lot more!
## 67 Data Analyst\nLocation\n\n\nBoston\n\nBusiness Function\n\n\nPortfolio Services\n\nApply Now\n\nJob Description:\n\n\nEnterprise Data Support Services is a newly formulated centralized group that is going to play a pivotal role in building the next generation of tools and platforms to address the data needs for end users and applications through the creation, maintenance and use of an Enterprise level data platform. This platform is being built to not only support data but also provide capabilities of centralized calculation engines for data accuracy and data consistency. This position reports into the Manager of Enterprise Data Support Services as part of the organization under the Chief Data Officer.\n\nThe Data Analyst will serve as a flexible generalist by providing data analysis and analytical insight based on the requests from Investment and Business Development teams. The individual will utilize data analysis tools such as Excel, SQL, R or Python to collate data from various sources, analyze the information and provide insights and analysis to the requesting teams. In conjunction with IT, the individual will verify any new data being added to the Central Data Repository to sign-off on data completeness and accuracy from a business domain perspective.\n\nPrincipal Duties :\nWork with investment teams to understand their data needs and translate them into a process to collate and analyze data to provide insights\nUse statistical models and industry recognized investment data calculation methodologies to perform various data analytics in SQL and MS Excel\nExercise relational database knowledge to construct and execute SQL queries to be used in data analysis activities\nUse other tools such as R and Python to manipulate data to address the information needs across the organization\nBuild and maintain strong relationships across the firm to understand CAâ\200\231s business goals and prepare for the organizationâ\200\231s evolving data needs\nContribute to additional projects where relevant domain or data analysis expertise is needed\nQualifications:\n\n\nBachelorâ\200\231s Degree, with preference to Data Analytics, Information Systems, or Mathematics\nDemonstrated skill in SQL usage as part of data analysis\nWorking knowledge of R or Python is desirable\nWorking knowledge of relational database design concepts\nDemonstrated track record of working autonomously in a highly-productive environment, executing with comfort in gray areas, and executing across multiple competing deadlines with fast-paced delivery targets\nExcellent organizational, written and verbal communication skills\nWillingness to learn and embrace a â\200\230jack of all tradesâ\200\231 mentality to build expertise and data depth across the entire Enterprise Data Support Services product suite\nDemonstrated progression towards CIPM, CFA, and/or CAIA certification is a big plus\nDirect experience integrating data from multiple sources such as client, fund, and market data domains strongly preferred
## 68  Title: Data Scientist Location: Costa Mesa, CA Duration: 12 Months  Job Responsibilities Perform applied statistical research and custom analysis on ACE data, interpret results, and make recommendations based on analytic findings, Develop and test ·hypothesis based on analysis of data sets, collaborate with internal and external partners and organizations in the design and execution of analytic studies to answer business: and research questions, bachelors or Masters in Engineering, Statistics, Business, 3+ years as a Data Scientist, experience in R, Python, SQL, SAS, NLP
## 69 Position Summary\n\nIndividuals within the\nBusiness Intelligence (BI) Analyst I role provide business intelligence,\nreporting, and data analysis needs for Customer Service and Delivery. They work\nclosely with BI Consultants, end users, and IT teams to turn data into critical\ninformation and knowledge that can be used to make sound business decisions. It\nis essential that BI Analysts build an understanding of the CSD organization at\na level of detail that enables them to identify and address critical issues.\nThey provide data that is accurate, congruent and reliable, and ensure the\ninformation is easily available to users for direct consumption or integration\nwith other systems. BI Analysts educate and train business partners and clients\nto use the data as an analytical tool, displaying the information in new forms\nand content for analysis and option exploration.\n\nBI Analysts work with\nusers to determine business requirements, priorities, define key performance\nindicators (KPI), and develop BI and data warehouse (DW) strategy. They conduct\nanalyses of functional business processes and functional business requirements\nand participate in the development of business cases in the support of process\nchanges. This includes working with business and development teams to design,\nand document dashboards, alerts, and reports. This individual is accountable\nfor providing leadership and independent initiatives in facilitating\ninformation gathering, structured documentation and presentation of findings to\nall levels of management.\n\nBI Analysts understand\nhow data is turned into information and knowledge and how that knowledge\nsupports and enables key business processes. They must have understanding of\nthe business environment and an interest in going beyond the obvious, delving\ninto the source, the definition, philosophy, and foundational roots of a data\nelement to create information. They must work well within a team environment.\n\nCandidate Responsibilities\nProvides design support\nfor the development of business intelligence solutions.\nWorks on small to\nmid-sized and cross-functional IT and business intelligence solutions.\nWorks on multiple\ntasks/projects as team member.\nParticipates in\nworkstream planning process including inception, technical design, development,\ntesting and delivery of BI solutions.\nMay participate in\nproject management estimation process.\nWorks with internal and\nexternal customers and IT partners to develop and analyze business intelligence\nneeds.\nProvides input to\nbusiness requirements for the design of solutions.\nInterprets business\nrequirements and determines optimum BI solutions to meet needs.\nIdentifies and provides\ninput to new technology opportunities that will have an impact on the\nEnterprise wide BI systems.\nDesigns Institute-wide\nviews and custom reports.\nMay perform analysis\nfor a wide range of requests using data in different formats and from various\nplatforms.\nResearches business\nproblems and creates models that help analyze these business problems.\nProvides input to the\ndevelopment of information quality metrics.\nResearches tools,\nframeworks and mechanisms for data analytics.\nAdheres to current\nstandards.\nProvides input to\nstandards, policies and procedures for the form, structure, and attributes of\nthe BI tools and systems.\nDesigns and delivers\nend-user training and training materials.\nTrains users to\ntransform data into action-oriented information and to use correctly.\n\nFunctional Skills\nGeneral knowledge of\nsupported business functions, systems, and transactions\nFamiliar with\ntechnology system processes, reporting functions, and methodologies\nWorks\neffectively with associates from across the CGM Profit Center and with business\nand IT partners
## 70 We deliver our customers peace of mind every day by helping them protect what they value most. Our passion for placing the customer at the center of everything we do is driving a transformational shift at Liberty Mutual. Operating as a tech startup within a Fortune 100 company, we are leading a digital disruption that will redefine how people experience insurance.\nJob Introduction:\n\nThe Analytic Platform and Service organization is looking to hire a Machine Learning Engineer to their team. This candidate will participate in the software development within an Agile team and work on the internally developed analytics application.\nAbout the job:\nBuild out our Real Time Scoring platform and working with our data scientist partners to support the deployment of predictive models and deliver business value from AI and Machine Learning.\nDevelop Java application that is intended to allow analytics teams to quickly create, manage and terminate their own scalable high-performance cloud computing and storage environments to support analytics, machine learning and big data projects.\nWork directly with data science teams in all markets to assist them with their projects and gather new requirements and insights to support the ongoing development.\nRespond quickly to change, pick up new technologies and adapt to changing requirements.\nDemonstrate competence at writing, testing and debugging Python and Java code.\n\nDesired skills:\n\nBachelor's or Master's degree in technical or business discipline or equivalent experience.\nMinimum 2+ years of professional development experience in Python and Java.\nFoundational knowledge or proficient with data science and machine learning tools and techniques\nExperience in new and emerging technologies such as AWS, Azure and Cloud, DevOps, CI/CD and Microservices.\nExcellent analytical, problem solving, and communication and collaboration skills.\nGeneral knowledge of agile software development concepts and processes.\nMust be proactive, demonstrate initiative and be a logical thinker.\nExperience with layered system architectures and layered solutions; understanding of shared software concepts.\nHighly competitive candidates will have:\n\nR Language\nSpark\nGo Language\nWe take care of our employees...\n\nWe strongly believe that a great job should keep you happy both at work-and in life. That's why we offer:\nWorkplace Flexibility\nWellness Perks\nCollaborative workspaces\nSit/stand desks\nCareer development, programs and classes\nDiversity & Inclusion programs\nCommuter Benefits\nAdoption Assistance\nCollege Savings Plan\nEducation reimbursement\nHackathon Events\nLiberty Mutual was named as a '2016 Great Place to Work' by Great Place to Work US.\nFor more info about our benefits -\nhttps://libertymutualgroup.com/careers/working-here/benefits\nLearn more about Tech at Liberty Mutual - http://www.jobs.libertymutualgroup.com/careers/technology-jobs\n\nCheck out our Tech at Liberty Mutual YouTube playlist - https://www.youtube.com/playlist?list=PLxUNmyJ_IIGx9yoUJfQ8k5APAK3-KAa6j
## 71 I am working with a premier Data Science Consulting firm in Princeton, NJ that is looking to bring on experienced full-time, permanent Data Scientists to continue to grow their successful practice. They are working with very reputable clients on high level Machine Learning projects utilizing the newest technologies. There is NO travel for the role, Benefits include: *Competitive salary + Bonus *Unlimited Paid Time Off! *Opportunity to work from home when needed *401(k) with company match up to 7% *100% medical benefits paid for the by the client *Real opportunity to move up within the company! Please send your resume to k.lenahan@jeffersonfrank.com
## 72 Whatâ\200\231s significantly better than working on a typical data science team? How about working on a data science team in which youâ\200\231re directly making an impact in the revolutionary field of artificial intelligence even as an entry level team member? (well, statistically significant that is). Pardon the pun, but at Spectrum weâ\200\231re certain that our team is pumped up to work not only with like-minded data-savvy and fun loving professionals, but also to work with cutting edge new tools like our predictive, artificially intelligent proprietary software. So, if your confident that you want to make a direct impact in your next job today, then please keep on reading.\n\nResponsibilities\n\nBeyond working with state of the art technology you will have many different fantastic projects to work on as a Data Scientist at Spectrum. Here are just a few different responsibilities you can expect off the bat:\nWork with IT teams, management and/or data scientists to determine organizational goals\nMine data from primary and secondary sources\nClean and prune data to discard irrelevant information\nAnalyze and interpret results using standard statistical tools and techniques\nPinpoint trends, correlations and patterns in complicated data sets\nIdentify new opportunities for process improvement\nProvide concise data reports and clear data visualizations for management\nSome Characteristics That Define You\n\nWe understand that as a Data Scientist for Spectrum, you have many different professional goals and personal interests. As such here are just a few different things that typically define our team members on the Data Science team:\nAnalytical. In order to solve problems and build innovative new digital marketing campaigns, it is essential that you know how to take an idea and analyze it from all of its angles.\nPatient. As a data scientist, you know that you work with extremely large data sets on a daily basis. As such we are looking for someone who is not only meticulous, but patient enough to sit and sift through that data in a thorough way.\nCreative. Beyond just analyzing data sets, you are an explorer and a puzzle solver. Pulling insights out of your data and understanding how those insights can better shape our tools is something that you live to do.\nStudent. More so than most industries, the field of data science is always changing and evolving. As such, you are always looking to learn new things and gain new skills.\nBusiness-Savvy. Beyond the wicked data science skills you bring to the table, we also want you to consider the business implications of our data tools. From the ways our team will use them to how our customers will use them, we always want you to keep the user and the business application in mind.\nRequired Skills and Experience\n\nOn top of the many intangible skills you bring to the table, there are many skills that can help improve the efficiency and success of your work at Spectrum. Here are a few of those required skills and experience that you will come in with as a Data Scientist on our team:\nA bachelorâ\200\231s degree/pursuing a bachelor's degree in computer science, mathematics, statistics, information systems, or a related field\nExperience with statistical modeling\nFundamental knowledge of R and/or SAS languages\nExperience with SQL databases and database querying languages\nExperience with data mining and data cleaning\nExperience with data visualization and reporting techniques\nWritten and verbal expression\nBenefits\n\nAs a Data Scientist at Spectrum there are a ton of fantastic perks and benefits that come along with your work. Here are just a few of the benefits you can expect when joining the Spectrum family:\nComprehensive medical & dental insurance\nRetirement planning & company matching\nGenerous PTO, including sick days & holidays\nA state-of-the-art office environment\nNintendo Switch in-office gaming such as FIFA, Arms, Mario Kart, and Rocket League\nYear-round gym memberships\nPaid continuing education\nCasual dress code\nFlexible scheduling\nFree-Lunch-Friday\nCompany sponsored parties and group activities outside of the office
## 73 Who is Cenlar?\n\nYou are.\n\nEmployee-owners have made Cenlar one of the nationâ\200\231s largest mortgage subservicers. We have achieved success by empowering people with company ownership, real programs that provide avenues for advancement, and a great atmosphere that makes everyone look forward to the workday. Get your share of our success by considering the opportunity to join our team as a Data Analyst I.\n\nJob Summary:\n\nResponsible for working with large and complex data sets (both internal and external data) to evaluate, recommend, and support the implementation of business strategies. Identifies and compiles data sets using a variety of tools (e.g. SQL, Access) to help predict, improve, and measure the success of key business to business outcomes. Responsible for documenting data requirements, data collection/processing/cleaning, and exploratory data analysis; which may include utilizing statistical models/algorithms and data visualization techniques. Incumbents in this role may often be referred to as Data Scientists.\n\nResponsibilities:\nDay-to-day actions are focused on administering defined procedures, analysis and report preparation.\nCommands knowledge of data elements across the enterprise and industry â\200“ definitions, lineage and usage\nExecutes creation of tabular operational reports and aggregation reports with medium to high complexity.\nCommands intermediate to expert level t-SQL coding proficiency\nPerforms data mapping for systems integration, data provisioning, and data consumption\nResponsible for documenting metadata, data lineage, data definitions and business glossary entries.\nHandles basic data transformation tasks such as date and decimal formatting, rounding of numeric values, and conversion of numeric values to percentages.\nHas intermediate to expert level SSIS ETL build proficiency\nWorks with Data stewards to enforce adherence to standards for enhancements and new repositories.\nQualifications:\nExperienced professional with relevant and current skills (3-5 years).\nPrimary focus is on daily deliverable of routine and defined outputs, while at the same time developing knowledge of the broader context in which the work is being performed.\nBachelorâ\200\231s degree in computer science, finance, mathematics, statistics (or similar) or equivalent work experience\nTotal Rewards:\n\nCenlar FSB offers outstanding benefits which may include paid medical/dental/life insurance, 401k, employee ownership, tuition assistance, a supportive work environment, and genuine opportunities for advancement. Cenlar is a Drug Free Workplace and an Equal Employment Opportunity/Affirmative Action Employer -- M/F/D/V/SO.\n\nVisit www.cenlar.com for more details.\n\nPlease apply online.
## 74  Title: Senior Data Scientist Location: San Jose, CA Duration: 6 months+ Rate: DOE/hr  We urgently need a Senior Data Scientist consultant, someone with strong background on machine learning and statistical analysis. More details to contact Nagesh, nrao@qcentum.com, 469-333-6042, 469-546-9280 Â
## 75 Description: Looking for candidates who have Experience in Data Scientist for 3-4 Years. Must have Hadoop. Must have Data Modeling .  Regards, Shanthala| ePro InfoSystems Office:  510-344-2343 Email: shanthala@eproinfosystems.com  www.eproinfosystems.com.  Â
## 76 Job Title: Data Scientist (Risk)\n\nLocation: Orlando, Florida\n\nManager: Head of Data Science\n\nAbout LSQ\n\nLSQ is a technology-driven provider of accounts receivable financing to companies who need working capital but may not be able to obtain sufficient financing from their bank. Our focus is to help businesses release the liquidity tied up in their accounts receivable. With financing from LSQ, a business can purchase more inventory, fill more orders, and take advantage of new growth opportunities. Our technology and data driven approach to providing working capital, along with our accounts receivable management services, allows our clients to driving business success.\n\nJob Overview\n\nLSQ is searching for a Data Scientist (Risk) to join our growing team of analytics experts. The hire will be responsible for building data driven tools to automate and scale risk decisions decision points include, fraud detection, commercial credit analysis, and invoice payment behavior. The ideal candidate is an experienced researcher and data wrangler who enjoys applying complex theories to solve real world business problems. The Data Scientist will need to collaborate effectively with both technical (engineers, data experts) and non-technical (business users) colleagues to bring the data to life.\n\nResponsibilities of Data Scientist (Risk)\nWe have 20+ years of real-world commercial data weve observed businesses and their interactions with other businesses across market cycles, industries, and catastrophic events. Build risk tools that optimize for velocity and scale.\nBuild risk tools that optimize science & art you will work with risk experts who have been in the trenches (and learned from losses). Augment human decisions with machine.\nLay the foundation for scale. A data science framework for optimal data acquisition, model training and deployment. Dont take short-cuts. Bring quantitative and statistical rigor to your body of work.\nLead by example. You are joining a data team at ground-level. Building and inserting a data team, into a 20+ year old company (organism) will be hard, but rewarding.\nQualifications for Risk Data Scientist\nCuriosity, Grit, and Humility\n5+ years of experience of hands-on data science role\nPrevious experience of building commercial finance risk tools should be beneficial, but not required\nBachelor or Masters degree in Data Science, Operations Research, Computer Science, Industrial Engineering, Statistics, or another quantitative field\nA toolkit of modern data science techniques\nExperience with any data science tools/packages: Python, R, SAS, XGBoost, TensorFlow, NLTK\nExperience with any big data tools: Hadoop, Spark, Kafka, Data Bricks\nExperience with any reporting tools: Tableau Server, Power BI, SSRS, Excel\nExperience with any AWS cloud services: EC2, EMR, RDS, Redshift, Aurora, S3\nExperience with any Stream-processing systems: Storm, Spark-Streaming\nPosition Type and Expected Hours of Work:\n\nThis is a full-time position. Days and hours of work are Monday through Friday, 8:00 a.m. to 5 p.m. Occasional evening and weekend work may be required as job duties demand.\n\nPhysical Demands:\n\nWhile performing the duties of this job, the employee is regularly required to sit and use hands to finger, handle, or feel. The employee is frequently required to reach with hands and arms and talk or hear. The employee is occasionally required to stand; walk and stoop, kneel, crouch.\n\nTravel:\n\nThere will be minimal travel required for this position.\n\nLSQ is an Equal Opportunity Employer that does not discriminate on the basis of actual or perceived, race, religion, color, sex (including pregnancy and gender identity), sexual orientation, parental status, national origin, age, disability, family medical history or genetic information, political affiliation, military service, any other non-merit based factor or any other characteristic protected by applicable federal, state or local laws. Our leadership team is dedicated to this policy with respect to recruitment, hiring, placement, promotion, transfer, training, compensation, benefits, employee activities and general treatment during employment. If youd like more information about your EEO rights as an applicant under the law, please click here http://www1.eeoc.gov/employers/poster.cfm
## 77 ArsenalBioâ\200\231s mission is to develop efficacious and safe cellular therapies for patients with chronic diseases, initially cancer. Our products are being designed to herald the transition of adoptive cell therapy from a hospital based treatment to outpatient therapy, like most other cancer treatments. Our people are our greatest asset, they bring scientific talents in molecular biology, immunology, pharmacology, protein chemistry, computational biology, automation, genome engineering, software and other fields to make the future happen now.\nWhat You Will Do:\nLead the development, application and optimization of DNA constructs for genetic modification of cell lines and primary immune cells\nDesign and develop DNA constructs and libraries encoding affinity reagents, receptors and synthetic circuits for high-throughput functional screens\nWork with interdisciplinary teams to establish and test novel adoptive cell therapy approaches\nProvide expert support for molecular biology applications and strategies to the broader scientific team\nMaintain familiarity with the relevant current literature and its application to synthetic biology and genome engineering\nCoordinate tasks across multiple projects, demonstrating prioritization and planning\nInterpret and effectively execute experiments in line with project timelines and goals\nMentor research associates within the group in areas of technical expertise\n\nWhat You Will Bring:\nPhD in molecular biology or a related field, with 3+ years of relevant research experience in industry or academia\nExperience developing/optimizing protocols for nucleic acid cloning, amplification, modification and analysis\nExperience optimizing/developing protocols for DNA assembly and library constructions for applications such as synthetic biology, functional genomics and/or protein engineering\nWorking knowledge of vector design for gene expression, genetic reporter systems and genome engineering applications\nWorking knowledge of next-generation sequencing technologies and workflows\nWorking knowledge of automation and workflows\nEntrepreneurial behaviors demonstrated by agility, accountability, transparency, resourcefulness, and ability to get things done\nAbility to quickly learn new skills and knowledge on the job, demonstrate productivity, and deliver high-quality results\nFlexible mind with ability to think outside the box, creative approach to problem solving, and comfort in working with ambiguous data in an open-ended, exploratory process\nWe are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status\n\nTo all recruitment agencies: ArsenalBio does not accept agency resumes. Please do not forward resumes to our jobs alias, or Arsenalbio employees. ArsenalBio is not responsible for any fees related to unsolicited resumes.
## 78 Lexington, MA, US\n\n#job-location.job-location-inline {\ndisplay: inline;\n}\n\nCompany:\nMIT Lincoln Laboratory\nJob Summary\nResponsible for the collection, synthesis, and analysis of employee data to meet organizational objectives. Gains insight into key Human Resources related business opportunities and deliverables by applying statistical analysis techniques to examine structured and unstructured data from multiple disparate sources. Interprets results from multiple sources using a variety of techniques, ranging from simple data aggregation via statistical analysis to complex data mining. Uses advanced mathematical, statistical, querying, and reporting methods to develop solutions. Develops information tools, algorithms, dashboards, and queries to monitor and improve business performance.\n\nSuccessful candidates will have demonstrated:\nStrong interpersonal, team building, communication and presentation skills to effectively communicate and collaborate with all levels of the organization.\nAn ability to maintain strict confidentiality as well as business and professional ethics.\nStrong organizational skills, attention to detail and ability to handle multiple tasks.\nFlexibility and ability to work effectively and meet deadlines in a dynamic, fast-paced environment.\nAbility to perform successfully in a data driven, service-oriented environment and to interpret rules and guidelines flexibly to enhance the business values and the diverse culture.\nRequirements\nBS required, MS preferred in related disciplines such as Computer Science, Statistics, and/or Mathematics with 5-8 years of relevant experience.\nExperience working with sensitive data, predictive analytics and behavioral science based information preferred.\nMust have demonstrated experience with some or all of the following: Data and Quantitative Analysis / Decision Analytics / Predictive Modeling / Data-Driven Personalization / KPI Dashboards and BPI Plans / Big Data Queries and Interpretation / Data Mining and Visualization Tools / Machine Learning Algorithms / Business Intelligence (BI) / Research, Reports and Forecasts.\nIn addition, must have experience with some or all of the following: Data and Computer Science proficiency: SPSS, SAS, R, Rstats, Python, Apache Spark, Microsoft PowerBI, Matlab, HIVE, Pig, data warehouse (DW, DWH), Hadoop, Google Analytics, data mart projects, data management, machine learning, predictive analytics, prescriptive analytics, streaming analytics, SQL, Business Objects (Crystal Reports, Dashboards, WEBI), SAP BW, SAP HANA, SAP Lumera, and many and/or other products is a plus.\n\nFor Benefits Information, click http://hrweb.mit.edu/benefits\n\nSelected candidate will be subject to a pre-employment background investigation and must be able to obtain and maintain a Secret level DoD security clearance.\n\nMIT Lincoln Laboratory is an Equal Employment Opportunity (EEO) employer. All qualified applicants will receive consideration for employment and will not be discriminated against on the basis of race, color, religion, sex, sexual orientation, gender identity, national origin, age, veteran status, disability status, or genetic information; U.S. citizenship is required.\n\nRequisition ID: 28943\n\nNearest Major Market: Boston\nJob Segment:\nDatabase, Scientific, Scientist, ERP, Developer, Engineering, Technology, Science
## 79 Job Description\n\nThe Cooking Lab is the publisher of both Modernist Cuisine: The Art and Science of Cooking (2011) and Modernist Cuisine at Home (2012). Our interdisciplinary team in Bellevue, Washington includes scientists, development chefs, and a full editorial department, as well as business and marketing staffâ\200”all dedicated to advancing the state of the culinary arts through the creative application of scientific knowledge and experimental techniques. In addition to our award-winning Modernist Cuisine books, The Cooking Lab provides consulting, R & D, and invention services to food companies and culinary equipment makers, large and small. Our research laboratory in Bellevue includes one of the best equipped kitchens in the world as well as access to a full set of machining, analytical, and computational facilities, provided by the Intellectual Ventures Lab.\n\nThe Cooking Lab is looking for a Data Scientist to join their journey to understand food and cooking. In addition to its primary research, MC is very interested in analyzing the received wisdom of recipes from around the world and over time. The Data Scientist will be responsible for leading an audacious effort to catalog, ingest, and analyze the world's pastry recipes.\n\nThe Data Scientist will apply computer vision, OCR, natural language processing, and other techniques to transform formatted recipes into structured data. He or she will lead further development of our pipeline for tagging unstructured data to train machine learning algorithms to identify new recipes. Then, he or she will perform analysis on the resulting structured data to answer specific questions about the data. For instance, "What is the distribution of baking temperatures recommended for a croissant?" Or, "How has the amount of salt in chocolate chip cookie recipes varied over time?"\n\nIn addition to the recipe analysis responsibilities, the Data Scientist will work with the culinary team for a variety of other scientific analysis and visualization projects.\n\nResponsibilities:\nExpand our in-house framework for catalog of pastry recipes\nFurther develop our methodology for schematizing, codifying, and analyzing recipes\nImplement processes, including human tagging, machine learning, and quality evaluation for codified recipes\nMine data for useful insights and storytelling\nSupport culinary team in scientific analysis and visualization projects\nKey Qualifications and Required Skills:\nExperience with natural language processing and machine learning\nProficiency in software development (.NET, C/C++, Python, Java)\nAbility to design data processing pipelines, storage structures, and automation for both on-premise and cloud computing environments\nExperience with extraction, transformation, and loading (ETL) of large datasets for ad-hoc analysis in Excel, Matlab, or other packages\nFamiliarity with computer vision and OCR software\nPassion for cooking, culinary history, or culinary science a plus\nWe are an equal opportunity employer
## 80 Title: Senior Data Scientist Location :San Jose, CA Long term Contract Need a Senior Data Scientist with strong background on Machine Learning and Statistical Analysis.  Suitable candidates can APPLY for this job. You can also email your updated resume to jaya.prakash (at) ameexusa.comÂ
## 81 Role: Data Scientist - Machine Learning Location: Atlanta, GA Experience: 6+ Long Term Need Passport Client: TCS
## 82 W2 Contract: 10+ MonthsÂ
## 83 What will a Principal Natural Language Process Data Scientist be responsible for? Lead the development, deployment and application of sophisticated deep learning techniques Design and implement innovative data science solutions to enable development of high-performance, product-ready code Articulate complex client challenge questions through consultations, team meetings, case framework analysis and other strategies Conduct objective research on project situation specific demands, and other related topics to support critical analysis. Develop models and data-driven solutions to add material insight to their client's understanding of their business and their business environment
## 84 Kingfisher Systems, Inc. specializes in providing a full range of Information Technology, Cybersecurity, Intelligence, and support services to the U.S. Government. Kingfisher Systems core competency is technology-enabled services, with a specific focus on national security. Since 2005, Kingfisher has established itself as a recognized and trusted mission partner whose mission is safeguarding sensitive information, operations, and programs for our Federal customers and warfighters.\n\nResponsibilities:\n\nKingfisher Systems, Inc. is seeking a Data Scientist to implement effective and innovative solutions meshing disparate data types to discover insights and trends in human capital information. Perform work on a data science team to advance HCA across the IC. Centralize IC data collection and synthesis, including survey data, enabling strategic and predictive analytics to guide business decisions. Develop or utilize complex programmatic and quantitative methods to find patterns and relationships in data sets; lead statistical modeling, or other data-driven problem-solving analysis to address novel or abstract business operation questions; and incorporate insights and findings into a range of products.\n\nCreate detailed write-ups of processes used, logic applied, and methodologies used for creation, validation, analysis, and visualizations. Write ups shall occur initially, within a week of when process is created, and updated in writing when changes occur.\n\nAdapt freeware solutions such as R, and modular solutions for maximum flexibility. Provide an advanced capability for data analysis of human capital information from disparate data types, promoting visualization and storytelling to discover insights that help guide business decisions. Be forward leaning, exploring HCA solutions that benefit IC elements individually and collectively. Let the data tell the story including what is inconsistent or contradictory and automate data cleansing. Integrate data solutions and collection with IC Information Technology Enterprise to enhance connectivity with\n\nIdentify and customize, if necessary, Commercial Off The Shelf (COTS) Information Technology (IT) tools for administration and data collection, or to leverage, support, and integrate the process with tools built for the IC and/or its component elements. Any tools developed shall be web-based and web-enabled, designed to maximize feedback effectiveness while protecting employee assessment information, and include both structured interview and web-based tools for gathering feedback.\n\nAssist in the development and maintenance of financial procedural guidance, templates and data formats, for identifying IC-wide workforce numbers, to include government, military, and contractor support. Review, assess data, identify data deficiencies and recommend solutions for remediation. Support quantitative and analytic efforts for workforce planning, development of financial procedural guidance data formats related to the Congressional Budget Justification Book (CBJB) planning displays, and analytic support for budget management and planning. Assist with ad hoc data requests from Congress, IC elements or as required by the client.\n\nCoordinate and facilitate community of practice and working group meetings, to include meeting logistics, agenda development and presenter coordination, day-of execution and facilitation, documentation and distribution of meeting minutes and presentations. Present or coordinate presentations on relevant topics as directed by the client.\n\nRequired Skills:\n\nData analysis and modeling on an independent (i.e., without significant guidance or direction) basis.\n\nDemonstrated experience in developing and applying quantitative methods to find patterns and relationships in large human capital data sets (such as the Human Capital Data Call, Foreign Language Data Call, FAIR act data, etc.) using statistical and graphical packages.\n\nApplied knowledge in the mathematical sciences such as probability, statistics, predictive modeling, computational social sciences; computer programming such as but not limited to Python, R, SAS, SPSS; and COTS data visualization packages such as but not limited to Tableau. Experience with Oracle Database and Procedural language/Structured Query Language (PUSQL).\n\nPreferred Experience:\n\nC#.NET, ASP.NET Model View Controller (MVC), jQuery, and Cascading Style Sheets (CSS), etc.\n\nStructuring data, natural language processing, database technologies, and machine learning algorithms.\n\nAbility to translate complex, technical, or analytic findings into an easily understood narrative- tell a story with the data in graphical, verbal, or written form.\n\nDemonstrated professional experience in developing quantitative financial analysis related to budget formulation and execution. Experience in development of financial procedural guidance data formats related to CBJB workforce displays and analytic support, resource management and planning.\n\nDemonstrated experience in human resource management supporting the federal government.\n\nDemonstrated experience in the use of programming, design, development, and implementation of on-line and automated survey instruments.\n\nYears of Experience: 10 years of relevant experience in a related field (International Relations, computer science, mathematics, statistics, political science etc.).\n\nDegree Requirement: Bachelor's or advanced degree in social sciences or related fields Related fields may include statistics, mathematics, computer science, physical science, economics, or engineering.\n\nMinimum Clearance Requirement: Minimum Top Secret clearance with SCI Eligibility. Applicants selected will be subject to an additional security investigation and thus may have additional eligibility requirements to meet.\n\nKingfisher Systems, Inc. is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, age, protected veteran status, among other things, or status as a qualified individual with a disability.
## 85 Description: Ascendum Solutions is looking for a Data Scientist (or a ninja), who can do data science and data engineering and data analytics (need to be an expert on at least one topic and familiar with all). Flexibility is important to this role as our work is dynamic. Â Tools such as Python (including Pandas, NumPy and Scikit-learn) and Tensorflow will be heavily leveraged. Â Â Deep learning will be applied to images and videos (Convolutional Neural Networks) and potentially to longitudinal data (Recurrent Neural Networks). |Â |Â Â
## 86 Required: Strong problem solving skills with an emphasis on product development. Experience using statistical computer languages (R, Python, SLQ, etc.) to manipulate data and draw insights from large data sets. Experience working with and creating data architectures. Knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural networks, etc.) and their real-world advantages/drawbacks. Knowledge of advanced statistical techniques and concepts (regression, properties of distributions, statistical tests and proper usage, etc.) and experience with applications. Excellent written and verbal communication skills for coordinating across teams. A drive to learn and master new technologies and techniques.
## 87 Posting Title\nData Scientist / Machine Learning Expert\n\n04-Feb-2020\n\nJob ID\n288341BR\n\nJob Description\nONE Global Discovery Chemistry Community working across 7 disease areas at the Novartis Institutes for BioMedical Research (NIBR) is seeking a highly talented and motivated Data Scientist to join our Global Discovery Chemistry Department in Cambridge, MA. The successful candidate will join an energizing and collaborative research organization, working alongside colleagues who are committed to improving human health through the discovery of transformative medicines.\n\nWe are seeking a unique data scientist with the skills, experience and passion to extract new knowledge and disruptive insights from the large and rich body of data collected by one of the worldâ\200\231s leading pharmaceutical companies. You will be a member of our global Computer-Aided Drug Discovery (CADD) group, an interdisciplinary team of expert molecular modelers, cheminformaticians, and data scientists. Teamed up with domain experts from biology, chemistry and translational medicine, this is a unique opportunity to develop and apply cutting-edge machine learning technologies to uncover insights to real-world drug discovery problems and innovate paths to new medicines.\n\nYour responsibilities include:\nâ\200¢Develop and implement methods for extracting patterns and correlations from both internal and external data sources using machine learning toolkits\nâ\200¢Develop workflows for conducting comparative analysis among Novartisâ\200\231 diverse data sources as well as generalizing approaches developed in-house or externally.\nâ\200¢Enable open-source solutions for internal use and implement cutting-edge published scientific methods.\nâ\200¢Develop customized machine learning solutions including data querying and knowledge extraction.\nâ\200¢Interact and be part of interdisciplinary project teams to drive effective decision-making by mining and developing predictive models\nâ\200¢Develop new skills in the area of cheminformatics and drug discovery and leverage those to accelerate development of new machine learning algorithms\nâ\200¢Keep ahead of scientific literature and interact with internal and external scientists to integrate novel data science technologies\n\nMinimum requirements\nEducation:\n\nAdvanced degree (M.Sc. or higher) in data science and machine learning, statistics, computer sciences, cheminformatics, mathematics, computational chemistry, computational biology, bioinformatics, or related field.\n\nMinimum experience & skills:\n\nâ\200¢In-depth experience with modern and classical machine learning methods\nâ\200¢Strong statistical foundation with broad knowledge of supervised and unsupervised techniques\nâ\200¢Programming experience (preferred Python, R, C++) preferably in Linux and high-performance computing environments\nâ\200¢Good listener - strong, concise, and consistent written and oral communication\nâ\200¢Talent for communicating stories through data visualizations\nâ\200¢Proven ability to collaborate with others\nâ\200¢A passion for tackling challenging problems and developing creative solutions\nâ\200¢A drive for self-development with a focus on scientific know-how\n\nAdditional qualifications that will help in the role:\nâ\200¢Demonstrated impact using machine learning libraries, such as scikit-learn, PyTorch or similar in a cheminformatics context\nâ\200¢Hands on experience with data analysis software such as Spotfire, R-shiny or similar\nâ\200¢Working experience with open-source cheminformatics toolkits such as RDKit\nâ\200¢Working experience with source-code management systems such as Git/github/bitbucket\nâ\200¢Familiar with the foundational concepts in molecular biology, pharmacology or medicine. Working knowledge of medicinal chemistry and drug discovery is a plus\n\nWhy consider Novartis?\n\n750 million. Thatâ\200\231s how many lives our products touch. And while weâ\200\231re proud of that fact, in this world of digital and technological transformation, we must also ask ourselves this: how can we continue to improve and extend even more peopleâ\200\231s lives?\n\nWe believe the answers are found when curious, courageous and collaborative people like you are brought together in an inspiring environment. Where youâ\200\231re given opportunities to explore the power of digital and data. Where youâ\200\231re empowered to risk failure by taking smart risks, and where youâ\200\231re surrounded by people who share your determination to tackle the worldâ\200\231s toughest medical challenges.\n\nWe are Novartis. Join us and help us reimagine medicine.\n\nJob Type\nFull Time\n\nCountry\nUSA\n\nWork Location\nCambridge, MA\n\nFunctional Area\nResearch & Development\n\nDivision\nNIBR\n\nBusiness Unit\nGlobal Discovery Chemistry\n\nEmployment Type\nRegular\n\nCompany/Legal Entity\nNIBRI\n\nEEO Statement\nThe Novartis Group of Companies are Equal Opportunity Employers and take pride in maintaining a diverse environment. We do not discriminate in recruitment, hiring, training, promotion or any other employment practices for reasons of race, color, religion, gender, national origin, age, sexual orientation, marital or veteran status, disability, or any other legally protected status.\n\nShift Work\nNo
## 88 Job Title: Data Scientist. Location:Multiple Locations(GA,FL,TX,CA,VA,NJ,PA,DC) Duration: 12-30 Months Key Requirements and Technology Experience: Software engineer. Full stack- Front and back. Big Data experience. Scala/ Spark and Client exp would be good to have and Good mix of latest tech in Big Data. Data engineering background. Â Production experience with problems solving and real time experience. Have good distributed system. Hands-on â\200“ Nosql exp which will include Kafka experience. Nice to have-Google cloud plus
## 89 Read what people are saying about working here. \n\nThe Data Science & Analytics practice group at Capgemini is expanding its footprintâ\200¦rapidly. As part of the fastest growing digital practice within Capgemini, we work with the latest advanced analytics, machine learning, and big data technologies to extract meaning and value from data in a number of different industries ranging from Media & Entertainment to Life Sciences and everywhere in-between. Our team has worked with geospatial data, on social media sentiment analysis, built recommendation systems, created image classification algorithms, solved large-scale optimization problems, and harnessed the massive influx of data generated by the IoT.\n\nThe Data Science & Analytics group is the fastest growing digital practice at Capgemini demanding agile innovation. As part of the Data Science & Analytics group, you will work in a collaborative environment with internal and client resources to understand key business goals, build solutions, and present findings to client executives while solving real-world problems. If you are passionate about solving problems in the realm of cognitive computing, big data, and machine learning while utilizing business acumen, statistical understanding, and technical know-how, the Data Science & Analytics practice group at Capgemini is the best place to grow your career.\n\nRole & Responsibilities:\n\nDevelop analytics sub-practice within one of the following sectors: : Aerospace & Defense, Automotive, Banking, Consumer Products & Retail, Financial Services, Healthcare, High Tech, Industrial Products, Insurance, Life Sciences, Manufacturing, Public Sector, Telecom, Media & Entertainment, and Energy & Utilities.\n\nGenerate and execute the Data Science roadmap strategy for sector.\n\nDevelop internal industry solutions with management for given sector.\n\nProvide guidance on multiple engagements to ensure successful delivery while balancing internal initiatives.\n\nContribute to thought leadership and facilitate client relations across network of existing and potential clients.\n\nQuickly understand client needs, assemble teams, manage delivery, and articulate findings to client executives.\n\nProspect, generate, and deliver new business opportunities in given sector to meet revenue targets.\n\nAnalyze and model both structured and unstructured data from a number of distributed client and publicly available sources.\n\nPerform EDA and feature engineering to both inform the development of statistical models and generate improve model performance and flexibility.\n\nDesign and build scalable machine learning models to meet the needs of given client engagement.\n\nAssist with the mentorship and development of junior staff.\n\nAssist in growing data science practice by meeting business goals through client prospecting, responding to proposals, identifying and closing opportunities within identified client accounts.\n\nParticipate in client discussions, interact with CxOs at client organization to articulate the value of data science approaches, different service offerings and guide them on implementation of the same.\n\nCollaborate with client managers in a broad range of sectors to identify business use cases and develop solutions in driving impact through data science and analytics, communicate results, and inform practice group through reports and presentations.\n\nWork with Capgeminiâ\200\231s global data science leadership to execute identified business use cases on time and manage project delivery / client expectations.\n\nDevelop, enhance, and maintain client relations while ensuring client satisfaction.\n\nAbility to successfully deliver and manage multiple client engagements globally.\n\nRequirements:\n\n10+ years professional work experience as a data scientist or on advanced analytics / statistics projects with 5+ yearsâ\200\231 experience in one of the following sectors: : Aerospace & Defense, Automotive, Banking, Consumer Products & Retail, Financial Services, Healthcare, High Tech, Industrial Products, Insurance, Life Sciences, Manufacturing, Public Sector, Telecom, Media & Entertainment, and Energy & Utilities.\n\nMasterâ\200\231s degree from top tier college/university in Computer Science, Statistics, Economics, Physics, Engineering, Mathematics, or other closely related field.\n\nPhD preferred.\n\nStrong understanding and application of statistical methods and skills: distributions, experimental design, variance analysis, A/B testing, and regression.\n\nStatistical emphasis on data mining techniques, Bayesian Networks Inference, CHAID, CART, association rule, linear and non-linear regression, hierarchical mixed models/multi-level modeling, and ability to answer questions about underlying algorithms and processes.\n\nExperience with both Bayesian and frequentist methodologies.\n\nMastery of statistical software, scripting languages, and packages (e.g. R, Matlab, SAS, Python, Pearl, Scikit-learn, Caffe, SAP Predictive Analytics, KXEN, ect.).\n\nKnowledge of or experience working with database systems (e.g. SQL, NoSQL, MongoDB, Postgres, ect.)\n\nExperience working with big data distributed programming languages, and ecosystems (e.g. S3, EC2, Hadoop/MapReduce, Pig, Hive, Spark, SAP HANA, ect.)\n\nExpertise in machine learning algorithms and experience using the following ML techniques: Logistic Regression, Decision Trees, Random Forests, Gradient Boosting, SVMs, Time Series, KMeans, Clustering, NMF).\n\nPreferred experience with NLP, Graph Theory, Neural Networks (RNNs/CNNs), sentiment analysis, and Azure ML..\n\nExperience building scalable data pipelines and with data engineering/ feature engineering.\n\nPreferred experience with web-scrapping.\n\nExperience building and deploying predictive models.\n\nExpertise using PowerPoint and clearly articulating findings/ presenting solutions.\n\nExcellent team-oriented interpersonal skills and demonstrated leadership.\n\nTrack record delivering successful data science projects and managing global teams.\n\nDemonstrated leadership by building Data Science teams and fostering growth.\n\nProven success generating growth and hitting revenue targets.\n\nCandidates should be flexible / willing to work across this delivery landscape which includes and not limited to Agile Applications Development, Support and Deployment.\n\nApplicants for employment in the US must have valid work authorization that does not now and/or will not in the future require sponsorship of a visa for employment authorization in the US by Capgemini.\n\nQualifications\n\nApplications Consultants have expertise in a specific technology environment. They are responsible for software-specific design and realization, as well as testing, deployment and release management, or technical and functional application management of client-specific package based solutions (e.g. SAP, ORACLE). These roles also require functional and methodological capabilities in testing and training.\n\nRequired Skills and Experience:\n\nYou have a leading role in projects with strong client exposure. You are responsible for functional and technical guidance over the whole project or application lifecycle and you act as a client-facing lead developer. You drive sales opportunities within your area of responsibility. You know future developments in several applications and/or technologies and you are seen as subject matter expert within your unit and beyond.\n\nQualification: 12+ years experience, Bachelorâ\200\231s Degree.\n\nCertification: Should have SE level 2 and seeking level 3.\n\nShould be master in Package Configuration.\n\nMust have experience in Architecture Knowledge, Testing and Vendor Management.\n\nShould be proficient in Business Analysis, Business Knowledge & Technical Solution Design.\n\nCapgemini is an Equal Opportunity Employer encouraging diversity in the workplace. All qualified applicants will receive consideration for employment without regard to race, national origin, gender identity/expression, age, religion, disability, sexual orientation, genetics, veteran status, marital status or any other characteristic protected by law.\n\nThis is a general description of the Duties, Responsibilities and Qualifications required for this position. Physical, mental, sensory or environmental demands may be referenced in an attempt to communicate the manner in which this position traditionally is performed. Whenever necessary to provide individuals with disabilities an equal employment opportunity, Capgemini will consider reasonable accommodations that might involve varying job requirements and/or changing the way this job is performed, provided that such accommodations do not pose an undue hardship.\n\nClick the following link for more information on your rights as an Applicant - http://www.capgemini.com/resources/equal-employment-opportunity-is-the-law\n\nAbout Capgemini\n\nA global leader in consulting, technology services and digital transformation, Capgemini is at the forefront of innovation to address the entire breadth of clientsâ\200\231 opportunities in the evolving world of cloud, digital and platforms. Building on its strong 50-year heritage and deep industry-specific expertise, Capgemini enables organizations to realize their business ambitions through an array of services from strategy to operations. Capgemini is driven by the conviction that the business value of technology comes from and through people. It is a multicultural company of 200,000 team members in over 40 countries. The Group reported 2017 global revenues of EUR 12.8 billion (about $14.4 billion USD at 2017 average rate).\n\nVisit us at www.capgemini.com. People matter, results count.\n\n A global leader in consulting, technology services and digital transformation, Capgemini is at the forefront of innovation to address the...
## 90 Read what people are saying about working here. \n\nThe Data Science & Analytics practice group at Capgemini is expanding its footprintâ\200¦rapidly. As part of the fastest growing digital practice within Capgemini, we work with the latest advanced analytics, machine learning, and big data technologies to extract meaning and value from data in a number of different industries ranging from Media & Entertainment to Life Sciences and everywhere in-between. Our team has worked with geospatial data, on social media sentiment analysis, built recommendation systems, created image classification algorithms, solved large-scale optimization problems, and harnessed the massive influx of data generated by the IoT.\n\nThe Data Science & Analytics group is the fastest growing digital practice at Capgemini demanding agile innovation. As part of the Data Science & Analytics group, you will work in a collaborative environment with internal and client resources to understand key business goals, build solutions, and present findings to client executives while solving real-world problems. If you are passionate about solving problems in the realm of cognitive computing, big data, and machine learning while utilizing business acumen, statistical understanding, and technical know-how, the Data Science & Analytics practice group at Capgemini is the best place to grow your career.\n\nRole & Responsibilities:\n\n Work in collaborative environment with global teams to drive client engagements in a broad range of industries: Aerospace & Defense, Automotive, Banking, Consumer Products & Retail, Financial Services, Healthcare, High Tech, Industrial Products, Insurance, Life Sciences, Manufacturing, Public Sector, Telecom, Media & Entertainment, and Energy & Utilities.\n\n Quickly understand client needs, develop solutions, and articulate findings to client executives.\n\n Provide data-driven recommendations to clients by clearly articulating complex technical concepts through generation and delivery of presentations.\n\n Analyze and model both structured and unstructured data from a number of distributed client and publicly available sources.\n\n Perform EDA and feature engineering to both inform the development of statistical models and generate improve model performance and flexibility.\n\n Design and build scalable machine learning models to meet the needs of given client engagement.\n\n Assist with the mentorship and development of consultants.\n\n Assist in growing data science practice by meeting business goals through client prospecting, responding to proposals, identifying and closing opportunities within identified client accounts.\n\nRequirements:\n\n 3-5 years professional work experience as a data scientist or on advanced analytics / statistics projects.\n\n Masterâ\200\231s degree from top tier college/university in Computer Science, Statistics, Economics, Physics, Engineering, Mathematics, or other closely related field.\n\n PhD preferred.\n\n Strong understanding and application of statistical methods and skills: distributions, experimental design, variance analysis, A/B testing, and regression.\n\n Statistical emphasis on data mining techniques, Bayesian Networks Inference, CHAID, CART, association rule, linear and non-linear regression, hierarchical mixed models/multi-level modeling, and ability to answer questions about underlying algorithms and processes.\n\n Experience with both Bayesian and frequentist methodologies.\n\n Mastery of statistical software, scripting languages, and packages (e.g. R, Matlab, SAS, Python, Pearl, Scikit-learn, Caffe, SAP Predictive Analytics, KXEN, ect.).\n\n Knowledge of or experience working with database systems (e.g. SQL, NoSQL, MongoDB, Postgres, ect.)\n\n Experience working with big data distributed programming languages, and ecosystems (e.g. S3, EC2, Hadoop/MapReduce, Pig, Hive, Spark, SAP HANA ect.)\n\n Expertise in machine learning algorithms and experience using the following ML techniques: Logistic Regression, Decision Trees, Random Forests, Gradient Boosting, SVMs, Time Series, KMeans, Clustering, NMF).\n\n Preferred experience with NLP, Graph Theory, Neural Networks (RNNs/CNNs), sentiment analysis and Azure ML.\n\n Experience building scalable data pipelines and with data engineering/ feature engineering.\n\n Preferred experience with web-scrapping.\n\n Experience building and deploying predictive models.\n\n Experience with PowerPoint and ability to clearly articulate findings and present solutions.\n\n Excellent team-oriented and interpersonal skills.\n\nCandidates should be flexible / willing to work across this delivery landscape which includes and not limited to Agile Applications Development, Support and Deployment.\n\nApplicants for employment in the US must have valid work authorization that does not now and/or will not in the future require sponsorship of a visa for employment authorization in the US by Capgemini.\n\nQualifications\n\nResponsible for programming and software development using various programming languages and related tools and frameworks, reviewing code written by other programmers, requirement gathering, bug fixing, testing, documenting and implementing software systems. Experienced programmers are also responsible for interpreting architecture and design, code reviews, mentoring, guiding and monitoring programmers, ensuring adherence to programming and documentation policies, software development, testing and release.\n\nRequired Skills and Experience:\n\nWrite software programs using specific programming languages/platforms such as Java or MS .NET, and related tools, platform and environment. Write, update, and maintain computer programs or software packages to handle specific jobs, such as tracking inventory, storing or retrieving data, or controlling other equipment. Consult with managerial, engineering, and technical personnel to clarify program intent, identify problems, and suggest changes. Perform or direct revision, repair, or expansion of existing programs to increase operating efficiency or adapt to new requirements. Write, analyze, review, and rewrite programs, using workflow chart and diagram, and applying knowledge of computer capabilities, subject matter, and symbolic logic. Write or contribute to instructions or manuals to guide end users. Correct errors by making appropriate changes and then rechecking the program to ensure that the desired results are produced. Conduct trial runs of programs and software applications to be sure they will produce the desired information and that the instructions are correct. Compile and write documentation of program development and subsequent revisions, inserting comments in the coded instructions so others can understand the program. Investigate whether networks, workstations, the central processing unit of the system, and/or peripheral equipment are responding to a program's instructions. Prepare detailed workflow charts and diagrams that describe input, output, and logical operation, and convert them into a series of instructions coded in a computer language. Perform systems analysis and programming tasks to maintain and control the use of computer systems software as a systems programmer. Consult with and assist computer operators or system analysts to define and resolve problems in running computer programs. Perform unit testing Assist in system and user testing Fix errors and bugs that are identified in the course of testing.\n\nQualifications: 3 â\200“ 7 years (2 years min relevant experience in the role) experience; Bachelorâ\200\231s degree\n\nShould be proficient in Software Engineering Techniques, Software Engineering Architecture, Software Engineering Lifecycle and Data Management.\n\nShould have progressing skills on Business Analysis, Business Knowledge, Software Engineering Leadership, Architecture Knowledge and Technical Solution Design.\n\nCapgemini is an Equal Opportunity Employer encouraging diversity in the workplace. All qualified applicants will receive consideration for employment without regard to race, national origin, gender identity/expression, age, religion, disability, sexual orientation, genetics, veteran status, marital status or any other characteristic protected by law.\n\nThis is a general description of the Duties, Responsibilities and Qualifications required for this position. Physical, mental, sensory or environmental demands may be referenced in an attempt to communicate the manner in which this position traditionally is performed. Whenever necessary to provide individuals with disabilities an equal employment opportunity, Capgemini will consider reasonable accommodations that might involve varying job requirements and/or changing the way this job is performed, provided that such accommodations do not pose an undue hardship.\n\nClick the following link for more information on your rights as an Applicant - http://www.capgemini.com/resources/equal-employment-opportunity-is-the-law\n\nAbout Capgemini\n\nA global leader in consulting, technology services and digital transformation, Capgemini is at the forefront of innovation to address the entire breadth of clientsâ\200\231 opportunities in the evolving world of cloud, digital and platforms. Building on its strong 50-year heritage and deep industry-specific expertise, Capgemini enables organizations to realize their business ambitions through an array of services from strategy to operations. Capgemini is driven by the conviction that the business value of technology comes from and through people. It is a multicultural company of 200,000 team members in over 40 countries. The Group reported 2017 global revenues of EUR 12.8 billion (about $14.4 billion USD at 2017 average rate).\n\nVisit us at www.capgemini.com. People matter, results count.\n\n A global leader in consulting, technology services and digital transformation, Capgemini is at the forefront of innovation to address the...
## 91 Read what people are saying about working here. \n\nRole & Responsibilities:\n\n Work in collaborative environment with global teams to drive client engagements in a broad range of industries: Aerospace & Defense, Automotive, Banking, Consumer Products & Retail, Financial Services, Healthcare, High Tech, Industrial Products, Insurance, Life Sciences, Manufacturing, Public Sector, Telecom, Media & Entertainment, and Energy & Utilities.\n\n Quickly understand client needs, develop solutions, and articulate findings to client executives.\n\n Provide data-driven recommendations to clients by clearly articulating complex technical concepts through generation and delivery of presentations.\n\n Analyze and model both structured and unstructured data from a number of distributed client and publicly available sources.\n\n Perform EDA and feature engineering to both inform the development of statistical models and generate improve model performance and flexibility.\n\n Design and build scalable machine learning models to meet the needs of given client engagement.\n\n Assist with the mentorship and development of junior staff.\n\n Assist in growing data science practice by meeting business goals through client prospecting, responding to proposals, identifying and closing opportunities within identified client accounts.\n\n Participate in client discussions, interact with CxOs at client organization to articulate the value of data science approaches, different service offerings and guide them on implementation of the same.\n\n Collaborate with client managers in a broad range of sectors to identify business use cases and develop solutions in driving impact through data science and analytics, communicate results, and inform practice group through reports and presentations.\n\n Work with Capgeminiâ\200\231s global data science leadership to execute identified business use cases on time and manage project delivery / client expectations.\n\n Develop, enhance, and maintain client relations while ensuring client satisfaction.\n\n Ability to successfully deliver and manage multiple client engagements globally.\n\nRequirements:\n\n 5-10 years professional work experience as a data scientist or on advanced analytics / statistics projects.\n\n Preferred sector focus with 3+ yearsâ\200\231 experience in one of the following industries: Aerospace & Defense, Automotive, Banking, Consumer Products & Retail, Financial Services, Healthcare, High Tech, Industrial Products, Insurance, Life Sciences, Manufacturing, Public Sector, Telecom, Media & Entertainment, and Energy & Utilities.\n\n Masterâ\200\231s degree from top tier college/university in Computer Science, Statistics, Economics, Physics, Engineering, Mathematics, or other closely related field.\n\n PhD preferred.\n\n Strong understanding and application of statistical methods and skills: distributions, experimental design, variance analysis, A/B testing, and regression.\n\n Statistical emphasis on data mining techniques, Bayesian Networks Inference, CHAID, CART, association rule, linear and non-linear regression, hierarchical mixed models/multi-level modeling, and ability to answer questions about underlying algorithms and processes.\n\n Experience with both Bayesian and frequentist methodologies.\n\n Mastery of statistical software, scripting languages, and packages (e.g. R, Matlab, SAS, Python, Pearl, Scikit-learn, Caffe, SAP Predictive Analytics, KXEN, ect.).\n\n Knowledge of or experience working with database systems (e.g. SQL, NoSQL, MongoDB, Postgres, ect.)\n\n Experience working with big data distributed programming languages, and ecosystems (e.g. S3, EC2, Hadoop/MapReduce, Pig, Hive, Spark, SAP HANA, ect.)\n\n Expertise in machine learning algorithms and experience using the following ML techniques: Logistic Regression, Decision Trees, Random Forests, Gradient Boosting, SVMs, Time Series, KMeans, Clustering, NMF).\n\n Preferred experience with NLP, Graph Theory, Neural Networks (RNNs/CNNs), sentiment analysis, and Azure ML..\n\n Experience building scalable data pipelines and with data engineering/ feature engineering.\n\n Preferred experience with web-scrapping.\n\n Experience building and deploying predictive models.\n\n Expertise using PowerPoint and clearly articulating findings/ presenting solutions.\n\n Excellent team-oriented interpersonal skills and demonstrated leadership.\n\n Proven track record delivering successful data science projects and working with global teams.\n\n Demonstrated leadership by building Data Science teams and fostering growth.\n\nCandidates should be flexible / willing to work across this delivery landscape which includes and not limited to Agile Applications Development, Support and Deployment.\n\nApplicants for employment in the US must have valid work authorization that does not now and/or will not in the future require sponsorship of a visa for employment authorization in the US by Capgemini.\n\nQualifications\n\nResponsible for programming and software development using various programming languages and related tools and frameworks, reviewing code written by other programmers, requirement gathering, bug fixing, testing, documenting and implementing software systems. Experienced programmers are also responsible for interpreting architecture and design, code reviews, mentoring, guiding and monitoring programmers, ensuring adherence to programming and documentation policies, software development, testing and release.\n\nRequired Skills and Experience:\n\nYou assign, coordinate, and review work and activities of programming personnel. Collaborate with computer manufacturers and other users to develop new programming methods. Supervise, train, mentor junior level programmers in programming and program coding. Represent team in project meetings. Work with business and functional analysts, and software & solution architects in ensuring that programs and systems function as intended Supervise, mentor and manage large teams of programmers in one or more projects. Represent project teams in project/program meetings or in meetings with sponsor.\n\nQualifications: 7 â\200“ 10 (3 years min relevant experience in the role) experience, Bachelorâ\200\231s Degree.\n\nMust have experience in Software Engineering Techniques, Software Engineering Architecture, Software Engineering Lifecycle and Data Management.\n\nShould be proficient in Business Analysis, Business Knowledge, Software Engineering Leadership, Architecture Knowledge and Technical Solution Design.\n\nCapgemini is an Equal Opportunity Employer encouraging diversity in the workplace. All qualified applicants will receive consideration for employment without regard to race, national origin, gender identity/expression, age, religion, disability, sexual orientation, genetics, veteran status, marital status or any other characteristic protected by law.\n\nThis is a general description of the Duties, Responsibilities and Qualifications required for this position. Physical, mental, sensory or environmental demands may be referenced in an attempt to communicate the manner in which this position traditionally is performed. Whenever necessary to provide individuals with disabilities an equal employment opportunity, Capgemini will consider reasonable accommodations that might involve varying job requirements and/or changing the way this job is performed, provided that such accommodations do not pose an undue hardship.\n\nClick the following link for more information on your rights as an Applicant - http://www.capgemini.com/resources/equal-employment-opportunity-is-the-law\n\nAbout Capgemini\n\nA global leader in consulting, technology services and digital transformation, Capgemini is at the forefront of innovation to address the entire breadth of clientsâ\200\231 opportunities in the evolving world of cloud, digital and platforms. Building on its strong 50-year heritage and deep industry-specific expertise, Capgemini enables organizations to realize their business ambitions through an array of services from strategy to operations. Capgemini is driven by the conviction that the business value of technology comes from and through people. It is a multicultural company of 200,000 team members in over 40 countries. The Group reported 2017 global revenues of EUR 12.8 billion (about $14.4 billion USD at 2017 average rate).\n\nVisit us at www.capgemini.com. People matter, results count.\n\n A global leader in consulting, technology services and digital transformation, Capgemini is at the forefront of innovation to address the...
## 92 Title : Data Scientist Location : Houston TX Duration : 12 months  Job Description:  3+ years of experience working with supervised and unsupervised machine learning  3+ years performing anomaly detection  3+ years programming in python  3+ years working with data visualization and ability to communicate findings  Experience in data wrangling  Statistics  Advance Splunk knowledge (implementing data science models within Splunk)  Notes :  Strong DS resource with background in Cyber Security and good experience in Machine learning (Didnâ\200\231t specified any Algorithm) .
## 93 Read what people are saying about working here. \n\nWhere good people build rewarding careers.\n\nThink that working in the insurance field canâ\200\231t be exciting, rewarding and challenging? Think again. Youâ\200\231ll help us reinvent protection and retirement to improve customersâ\200\231 lives. Weâ\200\231ll help you make an impact with our training and mentoring offerings. Here, youâ\200\231ll have the opportunity to expand and apply your skills in ways you never thought possible. And youâ\200\231ll have fun doing it. Join a company of individuals with hopes, plans and passions, all using and developing our talents for good, at work and in life.\n\nJob Description\n\nData Discovery and Decision Science (D3) is the research and analytics organization at Allstate. We are solving some of todayâ\200\231s most complicated analytics problems by incorporating techniques across many disciplines â\200“ including mathematics/statistics, computer programming, data engineering and ETL, software development, and high performance computing â\200“ with traditional business expertise to extract meaning from data and optimize business decisions. As a Sr. Data Scientist, Model Validation that means using your expertise in several of these disciplines to assure the quality of insights generated by predictive models and new machine learning techniques across a broad range of datasets and business challenges. You will be part of making the lives of our Allstate colleagues easier and more productive, generating new opportunities for profitable market share growth, and driving our mission to deliver perfect insurance solutions to our customers.\n\nWe are avid about learning and applying new predictive analytics solutions to get the most value from our massive data resources. As a Sr. Data Scientist, Model Validation in D3 you will get 10% of your time to innovate, learn, collaborate, and teach through work on your own initiatives. In partnership with technology and the business, we are incorporating analytics into every aspect of the enterprise. Developing ourselves and others is key to our success.\n\nJob Summary\n\nIn this role, you will be part of an internal model validation team for the D3 department. You will use your experience in machine learning and predictive modeling to support continued evolution of a model validation framework and to direct peer review teams to validate and enhance critical models and insights for business decision-making. You will also share and spread best practices throughout Allstateâ\200\231s model community and be a liaison to risk management activities across the enterprise. The complexity of projects you will manage within this role will depend on your experience.\n\nKey Responsibilities\n\nDevelop and implement key elements of a flexible analytic validation framework, including application to emerging and advanced analytics (e.g., unstructured data, natural language processing, automation, AI)\n\nFacilitate and drive department peer review teams to execute analytic validation projects\n\nReview and evaluate appropriateness of analytic techniques and solutions, given current modeling practices and business requirements, and propose alternatives when necessary\n\nChampion the sharing and adoption of appropriate methodologies and best practices in an analytic environment\n\nCultivate cross-functional knowledge-sharing that drives analytic consistency and transparency across Allstate\n\nAdvise department teams on data and business problems to drive improved business results through designing, building, and partnering to implement models\n\nUtilize effective project planning techniques to break down moderately complex projects into tasks and including appropriate risk management while ensuring deadlines are kept\n\nCommunicate findings to team members, leadership and stakeholders to ensure risks and variations are well understood; recommend appropriate mitigations when necessary\n\nUse and learn a wide variety of tools and languages to achieve results (e.g., R, Python, Spark, SAS, Hadoop)\n\nIdentify approaches and tools that can bring efficiencies or needed techniques to the team\n\nCollaborate with the team in order to improve the effectiveness of business decisions through the use of data and machine learning/predictive modeling\n\nDevelop and execute a communication strategy, with appropriate coaching, that keeps all relevant stakeholders informed and provides an opportunity to influence the direction of the work\n\nMaximize personal professional development to ensure continuation of a personal contribution to the team and Allstate\n\nJob Qualifications\n\nAdvanced degree in a quantitative field such as statistics, mathematics, computer science, finance or economics, or equivalent experience\n\nProven knowledge of advanced statistical modeling and/or machine learning techniques, ability to quickly learn and understand strengths and limitations of new algorithms\n\nExperience working with modeling tools and techniques such as R, Python, Spark, GLM, Gradient Boosting (GBM/GBT), Natural Language Processing\n\nExperience working with statistical software such as R, SAS, SPSS, MatLab, CART, etc.\n\nUnderstanding of the insurance market place, economics and regulation preferred\n\nAbility to analyze and interpret moderate to complex concepts\n\nPrior model validation experience a plus\n\nAbility to provide written and oral interpretation of highly specialized terms and data and to present this data to audiences with various levels of expertise\n\nDemonstrated analytic agility\n\nAbility to organize, facilitate teamwork, inspire trust and build alignment with cross-functional projects\n\nAbility to develop and/or revise tactics to support changing business strategies and direction\n\nProven communication and influence skills taking varying audiences into consideration\n\nAdvanced degree in a quantitative field such as statistics, mathematics, computer science, finance or economics, or equivalent experience\n\nProven knowledge of advanced statistical modeling and/or machine learning techniques, ability to quickly learn and understand strengths and limitations of new algorithms\n\nExperience working with modeling tools and techniques such as R, Python, Spark, GLM, Gradient Boosting (GBM/GBT), Natural Language Processing\n\nExperience working with statistical software such as R, SAS, SPSS, MatLab, CART, etc.\n\nUnderstanding of the insurance market place, economics and regulation preferred\n\nAbility to analyze and interpret moderate to complex concepts\n\nPrior model validation experience a plus\n\nAbility to provide written and oral interpretation of highly specialized terms and data and to present this data to audiences with various levels of expertise\n\nDemonstrated analytic agility\n\nAbility to organize, facilitate teamwork, inspire trust and build alignment with cross-functional projects\n\nAbility to develop and/or revise tactics to support changing business strategies and direction\n\nProven communication and influence skills taking varying audiences into consideration\n\nThe candidate(s) offered this position will be required to submit to a background investigation, which includes a drug screen.\n\nGood Work. Good Life. Good Hands®.\n\nAs a Fortune 100 company and industry leader, we provide a competitive salary â\200“ but thatâ\200\231s just the beginning. Our Total Rewards package also offers benefits like tuition assistance, medical and dental insurance, as well as a robust pension and 401(k). Plus, youâ\200\231ll have access to a wide variety of programs to help you balance your work and personal life - including a generous paid time off policy.\n\nLearn more about life at Allstate. Connect with us on Twitter, Facebook, Instagram and LinkedIn or watch a video.\n\nAllstate generally does not sponsor individuals for employment-based visas for this position.\n\nEffective July 1, 2014, under Indiana House Enrolled Act (HEA) 1242, it is against public policy of the State of Indiana and a discriminatory practice for an employer to discriminate against a prospective employee on the basis of status as a veteran by refusing to employ an applicant on the basis that they are a veteran of the armed forces of the United States, a member of the Indiana National Guard or a member of a reserve component.\n\nFor jobs in San Francisco, please click "here" for information regarding the San Francisco Fair Chance Ordinance.\n\nFor jobs in Los Angeles, please click "here" for information regarding the Los Angeles Fair Chance Initiative for Hiring Ordinance.\n\nIt is the policy of Allstate to employ the best qualified individuals available for all jobs without regard to race, color, religion, sex, age, national origin, sexual orientation, gender identity/gender expression, disability, and citizenship status as a veteran with a disability or veteran of the Vietnam Era.\n\n Corporate Careers\n\nAllstate believes that loving your job is an important step in living a good life. Thatâ\200\231s why creating a unique and ...
## 94 What will a Principal Natural Language Process Data Scientist be responsible for? Lead the development, deployment and application of sophisticated deep learning techniques Design and implement innovative data science solutions to enable development of high-performance, product-ready code Articulate complex client challenge questions through consultations, team meetings, case framework analysis and other strategies Conduct objective research on project situation specific demands, and other related topics to support critical analysis. Develop models and data-driven solutions to add material insight to their client's understanding of their business and their business environment
## 95 Chef Software is the industry leader in IT automation and DevOps solutions. We develop the world's best products for managing applications and infrastructure at scale, and we deploy them against real problems in all kinds of industries. Weâ\200\231re writing the rules of the cloud -- rules the worldâ\200\231s top engineers live, breathe and contribute to. Our platform is used to enable hundreds of millions of people around the world to chat, fly, present, bank, game, shop, and learn. Chances are the web applications you use every day have infrastructure built, deployed, secured and ran with our code.\n\nWe are a dynamic and rapidly growing software company with a strong sense of dedication to our customers and the Chef community. We work hard but try not to take ourselves too seriously. This is a very collaborative and inclusive work environment. Individuals, strong on aptitude and attitude, will have an opportunity to grow their professional careers through working with some of the most advanced technology and talented developers in the business. We provide competitive compensation, generous benefits, and a professional yet relaxed atmosphere.\n\nWe are seeking a highly motivated, results oriented individual with strong data engineering skills and experience in cloud technologies to join our platform architecture team. This person will play a key role in developing advanced analytics products for Chef customers. They will also have a key influence on our future processes and architecture.\nWhat you'll do:\nTake ownership for designing, developing and maintaining scalable data pipelines and data models.\nDesign, construct, install, test and maintain data management systems.\nDevelop data ingestion and integrations processes.\nEnsure that all systems meet the business/company requirements as well as industry best practices.\nIntegrate up-and-coming data management and software engineering technologies into existing data structures.\nDevelop set processes for data mining, data modeling, data ingestion, and data production.\nContribute to the design of new product offerings based on data analytics.\nCreate custom software components and analytics applications.\nResearch new uses for existing data; design experiments and analysis to answer key business questions.\nEmploy an array of technologies, languages and tools to connect systems together.\nCollaborate with members of your team (eg, architects and engineers) on project goals.\nRecommend different ways to constantly improve data reliability and quality.\nWho you are:\nYou have a minimum of a Bachelorsâ\200\231 degree in Computer Science, Data Science, or related field, plus 5 years of experience (or equivalent combination of education and experience). Masters degree in a relevant field is advantageous.\nYou have demonstrable software development skills in at least one language such as Rust, Go, C#, C++, Ruby or Java.\nYou have expertise in architecting, designing and implementing data solutions in a cloud-native environment (AWS preferred).\nYou have experience with a variety of data technologies and structures that includes relational databases, NoSQL and graph.\nYouâ\200\231re well-versed in working with big data.\nYou have excellent analytical and problem-solving skills.\nYou like to dive in, learn new things, and want to build awesome products.\nYou have experience building and operating high-performance data systems.\nWorking experience with Containers and Container orchestration tools such as Docker and Kubernetes is a huge plus.\nYouâ\200\231ve had experience working with APIs (graph, specifically).\nYou enjoy collaborating closely with product management and internal engineering teams to understand their complex issues, solve their problems and elicit frequent feedback on the solutions you provide\nYou believe quality is part of the development process and not an afterthought\nOur platform architecture team is remote and distributed. Allowing us to not only live where we will be most productive, but enables us to create a work environment that celebrates all of our humanity. We celebrate the difference of perspective this brings and the barriers it removes.\n\nBenefits are awesome - a competitive salary, equity for all, solid medical/dental benefits, 401(k), telecommuting, flextime, a variety of interesting projects, and brilliant co-workers.\nAt Chef, we celebrate and support our differences. We know employing a team rich in diverse thoughts, experiences, and opinions allows our employees, our products and our community to flourish. Chef is honored to be an equal opportunity workplace. We are dedicated to equal employment opportunities regardless of race, color, ancestry, religion, sex, national orientation, age, citizenship, marital status, disability, gender identity, sexual orientation or Veteran status.
## 96 We are looking for Data Scientists who are interested in using data to draw insights that will result in policy changes or business process optimisation, benefiting the public. The applicant will be scoping projects with stakeholders, using data sets across Government Agencies, applying business acumen to tease out relevant impactful insights, and presenting insights in a clear, concise manner by using appropriate visualisations.\n\nHe/she should have some training and working experiences on data analytics, and should be comfortable with hands-on data manipulation, data modelling and data visualisation. He/she should also be comfortable with engaging stakeholders on sharpening their business problems.\n\nThe analytics work that we do are typically action oriented and cross-cutting across various domains such as social, economic and infrastructure sectors. Over time, he/she will gain exposure to various policy and ops domains and become more adept in bridging between business users and technical expertise.\n\nWhat to Expect:\nWork closely with stakeholders to understand their business challenges, scope the problem and develop business case on how to turn data into critical information and knowledge that are actionable and impactful,.\nPerform data cleaning, pre-processing, feature engineering and build relevant models to conduct meaningful analysis. Apply appropriate visualisation techniques to communicate the insight effectively. Iterate with the stakeholders to perform subsequent deep dives based on the initial insights.\nDepending on the use case, design of dashboards and interactive visualisations as tools for data exploration and storytelling may be expected.\nPotentially deployed to other Government Agencies to be their resident Data Scientist. This will involve formulating and implementing strategies to build strong pipeline of impactful projects at the Agency and executing these projects.\nHow to Succeed:\n\nBachelor Degree in Computer Science, Statistics, Economics, Quantitative Social Science, or related degrees. Advanced degrees preferred. We will also factor in relevant certifications (e.g., Coursera)\nMinimum 2 years of relevant working experience, preferably in public sector or data science field\nAbility to take a broad, strategic perspective as well as drill deep to understand business needs and challenges\nUnderstand key concepts, techniques and considerations in machine learning and data analytics\nTraining and relevant experience in one or more of the following areas:\nData science tools such as R, Python\nVisual analytics technologies like Tableau, Qlik\nExcellent communication skills, both oral and written, with ability to pitch ideas and influence stakeholders\nStrong analytical, conceptualisation and problem solving skills\nTeam player with strong organization and people handling skills\nPassion for the use of analytics and data to improve Public Service
## 97 Puget Sound Energy is looking to grow our community with like-minded, top talented individuals like you! With our rapidly growing, award winning energy efficiency programs, our pathway to an exciting and innovative future is now.\n\nPSE's IT Application Solutions team is looking for qualified candidates to fill an open Associate Data Scientist position!\nJob Description\nPuget Sound Energy is an electric and gas utility which provides homes and businesses throughout the Northwest. In order to meet and anticipate our customersâ\200\231 needs, our Data Services team is expanding its team to include machine learning and data science technologies. In order to accomplish this we have an urgent need for an experienced Associate Data Scientist to work in our analytical Community of Practice, document internal standards, and to support our functional business areas.\nJob Responsibilities\nIn addition to leading the development of our community, the Associate Data Scientist will be responsible for assisting with the design and development of an advanced analytical machine-learning-based software framework that can ingest, structure and model large amounts of data in different formats and from different sources, and is capable of learning, assisting and taking the initiative on various data modeling and analytical projects.\n\nThese efforts may include the development of:\nProcesses and data models that balance human judgment with machine intelligence\nMachine learning models to classify content in documents\nProprietary document parsing and information extraction tools\nData models and queries to organize and offer insights for users to help them understand\nTraining models to guide other Community of Practice colleagues\nData quality, cleansing and preparation of large structured, semi-structured or unstructured datasets for other groupsâ\200\231 use\nStandards and Best Practice development\nMinimum Qualifications\nBachelorâ\200\231s degree in Computer Science, Industrial Engineering, Business Administration or equivalent hands-on experience.\nGeneral IT experience: 6+ years IT experience, including several years of systems development experience in the design and development of department-wide or enterprise-wide applications.\nDetailed understanding and 5+ years programming languages (specialized technical requirements as shown in attachment) and strong knowledge of office applications (e.g. Microsoft Office).\nPossesses full technical knowledge of phases of software development life cycle.\nExperience with at least two full life cycle implementations of a new application or purchased application.\nExtensive knowledge of multiple languages, i.e (specialized technical requirements as shown in attachment) environment, active business processes, business areas, as well as industry.\nKnowledge of systems and/or business analysis design concepts. Demonstrated proficiency in the applied use of systems and process analysis tools. Full system life cycle experience, including development lifecycle methodologies.\nDemonstrated understanding of Structured Query Language (SQL).\nAbility to facilitate interactive design and functional specification sessions.\nHighly developed interpersonal, written and verbal skills with an ability to express complex technical concepts in business terms. Excellent technical writing and system documentation skills.\nStrong analytical problem-solving, and conceptual skills.\nDemonstrated project management skills for moderate sized projects.\nMust be a â\200œteam playerâ\200\235 able to work with management in developing and implementing new processes and enabling systems. Able to work with end user groups to define application needs and identify feasible solutions.\nProactive, self-motivated with the ability to motivate others and a customer-focused service attitude.\nGood organizational skills, with the ability to meet objectives and effectively multi-task.\nAbility to work with multiple levels of the organization, both technical and non-technical.\nDesired Qualifications\n3 Years of hands-on data science experience\nExperience using source control tools like Git or Subversion\nExperience with developing analytical models within and across multiple data domains\nProficient analytical and math background\nDemonstrated success in performing within a functional organizational environment\nInnovative mindset-thinks beyond the status quo\nKnowledge of OCR, Spark, Hadoop, H20 and/or cloud computing are preferred\nTools like workflow Sketch Agenda (SKs) sheets and examples of risk mitigation\n\nFamilies and businesses depend on PSE to provide the energy they need to pursue their dreams. Our steadfast commitment to serving Washington communities with safe, dependable and efficient energy started in 1873. Today we're building the Northwest's energy future through efforts like our award winning energy efficiency programs and our leadership in renewable energy.\n\nAt PSE we value and respect our employees and provide them opportunities to excel. We offer an expansive pay package that includes competitive compensation, annual goals-based incentive bonuses, comprehensive benefits, 401(K), a company paid retirement pension plan, and an employee assistance and wellness program.\n\nPuget Sound Energy is committed to providing equal employment opportunity to all qualified applicants. We do not discriminate on the basis of race, color, religion, sex, national origin, age, sexual orientation, gender identity, marital status, veteran status or presence of a disability that with or without reasonable accommodation does not prevent performance of the essential functions of the job. Should you have a disability that requires assistance and/or reasonable accommodation with the job application process, please contact the Human Resources Staffing department at jobs@pse.com or 425-462-3017.\n\nNearest Major Market: Seattle\nNearest Secondary Market: Bellevue
## 98 Experience building models, developing algorithms, managing structured and unstructured data. Experience with R/Python/C++/Java. Imagine being one of the "1st ten that started Google", living the startup life, wearing multiple hats and eventually creating an industry mogul. Now image joining Google today, with it being a powerhouse and internationally recognized firm. Now imagine yourself with a world-renowned Health firm who has realized the true significance of Data Science to a business and is a perfect blend of the above(stable yet new team). Deeming 2019 as "The Year of Data Science", they are looking for a candidate who can join the team with an eye towards building a business unit guiding the direction of a multi-billion dollar firm.
## 99 Data Scientist-Deep Learning. Please send resume to ravi@tekforcecorp.com with rate/hr and location.
## 100 Position: Data Scientist (8+ years of exp) Location: Downers Grove, IL Duration: 1 year contract + extension Phone / Skype interview to hire  Most important technical requirements: Experience with Python, R and Python libraries (Pandas, NumPy and SciPy) Data Visualization experience (Tableau â\200“ ideally, PowerBI or R) Machine Learning experience Experience with Hadoop, Spark Contact : divya@hyperiontechllc.com // 703-348-8374
##
## attr(,"class")
## [1] "DataframeSource" "SimpleSource" "Source"
v <- VCorpus(ds)
x <- Corpus(ds)
writeCorpus(v, path = "./Vcorpus_sub")
writeCorpus(x, path = "./corpus_sub")
Start Here if you are pulling from git
x <- Corpus(DirSource("./corpus_sub"), readerControl = list(language="lat"))
inspect(x[1:2])
## <<SimpleCorpus>>
## Metadata: corpus specific: 1, document level (indexed): 0
## Content: documents: 2
##
## 1.txt
## Description\n\nThe Senior Data Scientist is responsible for defining, building, and improving statistical models to improve business processes and outcomes in one or more healthcare domains such as Clinical, Enrollment, Claims, and Finance. As part of the broader analytics team, Data Scientist will gather and analyze data to solve and address complex business problems and evaluate scenarios to make predictions on future outcomes and work with the business to communicate and support decision-making. This position requires strong analytical skills and experience in analytic methods including multivariate regressions, hierarchical linear models, regression trees, clustering methods and other complex statistical techniques.\n\nDuties & Responsibilities:\n\nâ\200¢ Develops advanced statistical models to predict, quantify or forecast various operational and performance metrics in multiple healthcare domains\nâ\200¢ Investigates, recommends, and initiates acquisition of new data resources from internal and external sources\nâ\200¢ Works with multiple teams to support data collection, integration, and retention requirements based on business needs\nâ\200¢ Identifies critical and emerging technologies that will support and extend quantitative analytic capabilities\nâ\200¢ Collaborates with business subject matter experts to select relevant sources of information\nâ\200¢ Develops expertise with multiple machine learning algorithms and data science techniques, such as exploratory data analysis and predictive modeling, graph theory, recommender systems, text analytics and validation\nâ\200¢ Develops expertise with Healthfirst datasets, data repositories, and data movement processes\nâ\200¢ Assists on projects/requests and may lead specific tasks within the project scope\nâ\200¢ Prepares and manipulates data for use in development of statistical models\nâ\200¢ Other duties as assigned\n\nMinimum Qualifications:\n\n-Bachelor's Degree\n\nPreferred Qualifications:\n\n- Masterâ\200\231s degree in Computer Science or Statistics\nFamiliarity with major cloud platforms such as AWS and Azure\nHealthcare Industry Experience\n\nMinimum Qualifications:\n\n-Bachelor's Degree\n\nPreferred Qualifications:\n\n- Masterâ\200\231s degree in Computer Science or Statistics\nFamiliarity with major cloud platforms such as AWS and Azure\nHealthcare Industry Experience\n\nWE ARE AN EQUAL OPPORTUNITY EMPLOYER. Applicants and employees are considered for positions and are evaluated without regard to mental or physical disability, race, color, religion, gender, national origin, age, genetic information, military or veteran status, sexual orientation, marital status or any other protected Federal, State/Province or Local status unrelated to the performance of the work involved.\n\nIf you have a disability under the Americans with Disability Act or a similar law, and want a reasonable accommodation to assist with your job search or application for employment, please contact us by sending an email to careers@Healthfirst.org or calling 212-519-1798 . In your email please include a description of the accommodation you are requesting and a description of the position for which you are applying. Only reasonable accommodation requests related to applying for a position within Healthfirst Management Services will be reviewed at the e-mail address and phone number supplied. Thank you for considering a career with Healthfirst Management Services.\nEEO Law Poster and Supplement\n\n]]>
## 10.txt
## The Senior Data Scientist will build and improve analytic pipelines to consolidate multiple data sources, perform analytic processing, and produce actionable business information (e.g., top-10 lists, pattern-matches, exceptions, or lists of #39;needles in the haystack#39;). S/he will apply hands-on development skills and experience as an expert in analytics, data science, pattern recognition, and
x <- tm_map(x, content_transformer(tolower))
x <- tm_map(x, removeWords, stopwords("english"))
inspect(x[1:2])
## <<SimpleCorpus>>
## Metadata: corpus specific: 1, document level (indexed): 0
## Content: documents: 2
##
## 1.txt
## description\n\n senior data scientist responsible defining, building, improving statistical models improve business processes outcomes one healthcare domains clinical, enrollment, claims, finance. part broader analytics team, data scientist will gather analyze data solve address complex business problems evaluate scenarios make predictions future outcomes work business communicate support decision-making. position requires strong analytical skills experience analytic methods including multivariate regressions, hierarchical linear models, regression trees, clustering methods complex statistical techniques.\n\nduties & responsibilities:\n\nâ\200¢ develops advanced statistical models predict, quantify forecast various operational performance metrics multiple healthcare domains\nâ\200¢ investigates, recommends, initiates acquisition new data resources internal external sources\nâ\200¢ works multiple teams support data collection, integration, retention requirements based business needs\nâ\200¢ identifies critical emerging technologies will support extend quantitative analytic capabilities\nâ\200¢ collaborates business subject matter experts select relevant sources information\nâ\200¢ develops expertise multiple machine learning algorithms data science techniques, exploratory data analysis predictive modeling, graph theory, recommender systems, text analytics validation\nâ\200¢ develops expertise healthfirst datasets, data repositories, data movement processes\nâ\200¢ assists projects/requests may lead specific tasks within project scope\nâ\200¢ prepares manipulates data use development statistical models\nâ\200¢ duties assigned\n\nminimum qualifications:\n\n-bachelor's degree\n\npreferred qualifications:\n\n- masterâ\200\231s degree computer science statistics\nfamiliarity major cloud platforms aws azure\nhealthcare industry experience\n\nminimum qualifications:\n\n-bachelor's degree\n\npreferred qualifications:\n\n- masterâ\200\231s degree computer science statistics\nfamiliarity major cloud platforms aws azure\nhealthcare industry experience\n\n equal opportunity employer. applicants employees considered positions evaluated without regard mental physical disability, race, color, religion, gender, national origin, age, genetic information, military veteran status, sexual orientation, marital status protected federal, state/province local status unrelated performance work involved.\n\n disability americans disability act similar law, want reasonable accommodation assist job search application employment, please contact us sending email careers@healthfirst.org calling 212-519-1798 . email please include description accommodation requesting description position applying. reasonable accommodation requests related applying position within healthfirst management services will reviewed e-mail address phone number supplied. thank considering career healthfirst management services.\neeo law poster supplement\n\n]]>
## 10.txt
## senior data scientist will build improve analytic pipelines consolidate multiple data sources, perform analytic processing, produce actionable business information (e.g., top-10 lists, pattern-matches, exceptions, lists #39;needles haystack#39;). s/ will apply hands- development skills experience expert analytics, data science, pattern recognition,
dtm <- DocumentTermMatrix(x)
inspect(dtm)
## <<DocumentTermMatrix (documents: 100, terms: 6530)>>
## Non-/sparse entries: 20117/632883
## Sparsity : 97%
## Maximal term length: 72
## Weighting : term frequency (tf)
## Sample :
## Terms
## Docs ability business data experience learning machine science team will
## 13.txt 5 17 25 14 0 0 1 7 6
## 16.txt 2 1 13 11 2 2 1 7 16
## 17.txt 2 1 13 11 2 2 1 7 16
## 2.txt 0 0 17 5 1 2 7 1 7
## 20.txt 0 12 36 22 2 2 8 14 10
## 25.txt 13 0 4 3 1 0 1 2 1
## 8.txt 0 0 17 5 1 2 7 1 7
## 89.txt 2 10 20 11 3 4 11 2 4
## 9.txt 2 1 13 11 2 2 1 7 16
## 90.txt 2 7 15 11 3 4 5 2 5
## Terms
## Docs work
## 13.txt 9
## 16.txt 8
## 17.txt 8
## 2.txt 2
## 20.txt 8
## 25.txt 5
## 8.txt 2
## 89.txt 6
## 9.txt 8
## 90.txt 6
Let’s find the words asociated to other words that we are interested in! It could be all kinds of useful!
findAssocs(dtm, "data", 0.40)
## $data
## management team experience tools
## 0.64 0.60 0.59 0.58
## work science, building analyze
## 0.54 0.54 0.54 0.53
## business reports part closely
## 0.53 0.53 0.52 0.52
## database providing science also
## 0.52 0.51 0.50 0.50
## working development needs responsibilities:
## 0.50 0.49 0.49 0.49
## design individual skills strong
## 0.49 0.49 0.48 0.48
## languages agile physical visualization
## 0.48 0.48 0.47 0.47
## demonstrated proactively bachelorâ\200\231s project
## 0.47 0.47 0.47 0.46
## services will key projects
## 0.46 0.46 0.46 0.46
## using statistics, utilize warehouse
## 0.46 0.46 0.46 0.46
## analytics including group developing
## 0.45 0.45 0.45 0.45
## individuals manage operations, opportunities
## 0.45 0.45 0.45 0.45
## written field services, next
## 0.45 0.45 0.45 0.45
## play clean internal various
## 0.45 0.45 0.44 0.44
## devops top team. identifying
## 0.44 0.44 0.44 0.44
## calculation opportunity support teams
## 0.44 0.43 0.43 0.43
## focus definitions, limited boston
## 0.43 0.43 0.43 0.43
## (.e. (physical (sql alteryx
## 0.43 0.43 0.43 0.43
## architect/data attribute audit, audits,
## 0.43 0.43 0.43 0.43
## bw4hana centric composite comprehension
## 0.43 0.43 0.43 0.43
## contributor creation. de-duplication) de-normalize,
## 0.43 0.43 0.43 0.43
## defects, diagrams, dictionary, discuss
## 0.43 0.43 0.43 0.43
## dsoâ\200\231s, e.g. ewd governance,
## 0.43 0.43 0.43 0.43
## governance. hana hana. hierarchies,
## 0.43 0.43 0.43 0.43
## loads logical), map, meta-data,
## 0.43 0.43 0.43 0.43
## metrics, modeler normalize ods
## 0.43 0.43 0.43 0.43
## owi privileges, promote purpose:
## 0.43 0.43 0.43 0.43
## resolutions reverse rules, sap
## 0.43 0.43 0.43 0.43
## schemas, script, sda, sdi
## 0.43 0.43 0.43 0.43
## security, setup, slt, slt.
## 0.43 0.43 0.43 0.43
## studio taxonomy, thoughtful, transactional
## 0.43 0.43 0.43 0.43
## user/role variants) vba, views,
## 0.43 0.43 0.43 0.43
## xml) â·activities: â·advanced â·assist
## 0.43 0.43 0.43 0.43
## â·bachelor â·basic â·demonstrate â·excellent
## 0.43 0.43 0.43 0.43
## â·experience â·experienced â·expert â·knowledge
## 0.43 0.43 0.43 0.43
## â·leadership â·minimum â·practical â·setting
## 0.43 0.43 0.43 0.43
## â·strong â·work â·working requirements
## 0.43 0.43 0.43 0.42
## applications environment unit across
## 0.42 0.42 0.42 0.42
## deliverables requirements: concise handling
## 0.42 0.42 0.42 0.42
## tasks, improve minimum performance
## 0.42 0.41 0.41 0.41
## provide built communication creating
## 0.41 0.41 0.41 0.41
## data, database, different etl
## 0.41 0.41 0.41 0.41
## issues stakeholders tests, dynamic
## 0.41 0.41 0.41 0.41
## platform. gender generation trends,
## 0.41 0.41 0.41 0.41
## analysis related role financial
## 0.40 0.40 0.40 0.40
## central master
## 0.40 0.40
First things first, lets grab a master list of coding languages.
url.data <- "https://raw.githubusercontent.com/jamhall/programming-languages-csv/master/languages.csv"
raw <- read.csv(url(url.data), header = TRUE,)
programming_list <- tolower(raw$name)
Now lets see which jobs match which roles
programming_list_dtm <- DocumentTermMatrix(x, list(dictionary = c(programming_list)))
programming_list_df <- as.data.frame(as.matrix(programming_list_dtm), stringsAsFactors=False)
Let’s first test a routine, we’re going to try and count the number of rows that exist for python.
sum(programming_list_df$python != 0, na.rm=TRUE)
## [1] 25
Now let make a new dataframe. First things first, let’s use this as a way to get skills and counts.
index_df <- data.frame(matrix(ncol = 2, nrow = 0))
colnames(index_df) <-c("Skill", "Count")
From there, let’s populate it
list_holder <- list()
for(i in 1:ncol(programming_list_df)) { # for-loop over columns
index_df[i,] <- c(colnames(programming_list_df)[i], (sum(programming_list_df[, i] != 0, na.rm=TRUE)))
}
index_df
list_holder <- list()
for(i in 1:ncol(programming_list_df)) { # for-loop over columns
index_df[i,] <- c(colnames(programming_list_df)[i], (sum(programming_list_df[, i] != 0, na.rm=TRUE)))
}
index_df
Now let’s remove everything that is useless.
finalizedProgramListShort <- index_df[index_df$Count !=0,]
And finally, lets view our data. Its a lot smaller!
finalizedProgramListShort
First things first, lets grab a master list of coding languages.
url.data <- "https://raw.githubusercontent.com/Amantux/Project_3/main/Data_Skills.csv"
raw <- read.csv(url(url.data), header= TRUE,)
head(raw)
soft_skills <- tolower(raw$ï..Skills) #Very confused at the column name, but I'm just rolling with it
Now lets see which jobs match which roles
soft_skills_dtm <- DocumentTermMatrix(x, list(dictionary = c(soft_skills)))
soft_skills
## [1] "algorithms"
## [2] "backtesting"
## [3] "business analysis"
## [4] "calculus"
## [5] "coding"
## [6] "data aggregation"
## [7] "data architecture"
## [8] "data cleaning"
## [9] "data exploration"
## [10] "data harmonization"
## [11] "data ingestion"
## [12] "data intuition"
## [13] "data management"
## [14] "data models"
## [15] "data wrangling"
## [16] "decision modeling"
## [17] "deep learning"
## [18] "distributed processing of large datasets"
## [19] "executive communication"
## [20] "extract"
## [21] "information security"
## [22] "linear algebra"
## [23] "machine learning"
## [24] "monitoring model performance"
## [25] "neural networks"
## [26] "prescriptive analytics"
## [27] "probability distributions"
## [28] "product development"
## [29] "random forests"
## [30] "requirements gathering"
## [31] "sampling methods"
## [32] "stakeholder management"
## [33] "statistical learning"
## [34] "statistics"
## [35] "stochastic models"
## [36] "testing"
## [37] "trend analysis"
## [38] "analytics"
## [39] "big data platforms"
## [40] "business intelligence"
## [41] "cloud platforms"
## [42] "cohort analysis"
## [43] "data analysis"
## [44] "data classification"
## [45] "data communication"
## [46] "data flows"
## [47] "data historians"
## [48] "data integration"
## [49] "data lineage"
## [50] "data mining"
## [51] "data visualization"
## [52] "databases"
## [53] "decision trees"
## [54] "deployment"
## [55] "ensemble methods"
## [56] "experiment design"
## [57] "forecasting"
## [58] "interdisciplinary learning"
## [59] "linear models"
## [60] "markov chain monte carlo"
## [61] "multivariate analysis"
## [62] "predictive analytics"
## [63] "presentations"
## [64] "problem framing"
## [65] "public speaking"
## [66] "regression analysis"
## [67] "sql"
## [68] "scripting"
## [69] "statistical hypothesis testing"
## [70] "statistical software"
## [71] "statistics platforms"
## [72] "structured thinking"
## [73] "transform & load (etl)"
## [74] "abstraction"
## [75] "aesthetics"
## [76] "attento to detail"
## [77] "charisma"
## [78] "competitiveness"
## [79] "communication"
## [80] "creative thinking"
## [81] "critical thinking"
## [82] "intuition"
## [83] "logic"
## [84] "presentation"
## [85] "reasoning"
## [86] "self direction"
## [87] "time management"
## [88] "troubleshooting"
## [89] "collaboration"
## [90] "constructive criticism"
## [91] "decision making"
## [92] "meeting deadlines"
## [93] "research"
## [94] "work quality"
inspect(soft_skills_dtm)
## <<DocumentTermMatrix (documents: 100, terms: 94)>>
## Non-/sparse entries: 201/9199
## Sparsity : 98%
## Maximal term length: 40
## Weighting : term frequency (tf)
## Sample :
## Terms
## Docs algorithms analytics communication databases deployment extract
## 13.txt 0 1 1 1 0 0
## 18.txt 0 2 2 0 0 0
## 2.txt 0 1 2 1 0 0
## 20.txt 4 20 2 0 0 0
## 25.txt 0 0 2 0 0 0
## 45.txt 1 3 0 0 0 0
## 8.txt 0 1 2 1 0 0
## 89.txt 2 6 0 0 1 1
## 90.txt 2 5 0 0 0 1
## 93.txt 2 5 3 0 0 1
## Terms
## Docs research sql statistics testing
## 13.txt 0 12 0 0
## 18.txt 2 0 1 0
## 2.txt 3 0 0 0
## 20.txt 6 0 0 2
## 25.txt 0 0 1 7
## 45.txt 3 0 3 0
## 8.txt 3 0 0 0
## 89.txt 0 0 1 2
## 90.txt 0 0 1 3
## 93.txt 1 0 0 0
soft_skills_dtm <- as.data.frame(as.matrix(soft_skills_dtm), stringsAsFactors=False)
Let’s first test a routine, we’re going to try and count the number of rows that exist for python.
sum(soft_skills_dtm$analytics != 0, na.rm=TRUE)
## [1] 27
Now let make a new dataframe. First things first, let’s use this as a way to get skills and counts.
index_df_soft <- data.frame(matrix(ncol = 2, nrow = 0))
colnames(index_df_soft) <-c("Skill", "Count")
From there, let’s populate it
list_holder <- list()
for(i in 1:ncol(soft_skills_dtm)) { # for-loop over columns
index_df_soft[i,] <- c(colnames(soft_skills_dtm)[i], (sum(soft_skills_dtm[, i] != 0, na.rm=TRUE)))
}
index_df_soft
list_holder <- list()
for(i in 1:ncol(soft_skills_dtm)) { # for-loop over columns
index_df_soft[i,] <- c(colnames(soft_skills_dtm)[i], (sum(soft_skills_dtm[, i] != 0, na.rm=TRUE)))
}
index_df_soft
Now let’s remove everything that is useless.
finalizedSoftListShort <- index_df_soft[index_df$Count !=0,]
And finally, lets view our data. Its a lot smaller!
finalizedSoftListShort