

Advice & FAQs from Founders Factory data scientist Ali Kokaz.

Search “data science” online and you will find an unending trove of technical tutorials and articles, ranging from how to ingest spreadsheet data to building a multilayer perceptron for image recognition. However, data science is much more than simply building a complex algorithm: it’s also about empowering your business by creating a culture of data-driven decision-making.

Indeed, as Hal Varian, Google’s chief economist, said back in 2009: “The ability to take data — to be able to understand it, to process it, to extract value from it, to visualize it, to communicate it — that’s going to be a hugely important skill in the next decades.”

Today, speak to any business leader and nearly all will say that data science is a critical focus for their organization. Yet the reality is that they’re struggling — recent research shows many firms are unfit for data, for myriad reasons: organizational capability, lack of talent, and poor-quality data and collection processes, to name a few.


So what does it take to build a truly effective data science function?

From understanding what it means to be a “data-driven” organization, to conducting successful data science projects, I’ve compiled the guide below using 16 FAQs I often face when helping businesses work through their data challenges.

1. Why should data science be a priority?

As Tim Berners-Lee, inventor of the World Wide Web once said: “Data is a precious thing and will last longer than the systems themselves.”

In a nutshell, data science is the process and ability to turn raw data into information and insights to inform your business decisions. Without it, you are making decisions blind, or based on opinions and assumptions, rather than facts.

Data science can also be used to help identify opportunities, meaning you can find extra user growth, or revenue streams, by understanding your customers and markets more deeply. You can also use data science to help automate or reduce the overhead of certain processes, like evaluating and processing loan applications for a challenger bank, meaning you can cut costs and set the business up to scale.

This is largely the reason why companies are now pouring money into their data storage, analytics and science capabilities to improve operations and decision-making. It is no surprise that some of the biggest winners of the last decade were essentially data companies, like Google or Facebook, as well as less specialized examples like ASOS, who heavily optimize their shopping experience through data. Essentially, those that fail to invest in this area will quickly be left behind.

2. What are the foundations of a data-driven organization? 

“Without data you’re just another person with an opinion,” were the wise words of famous statistician W. Edwards Deming, which gets to the crux of what data-driven organizations are.

A data-driven organization is one that uses data to drive business decisions and processes, meaning they are informed when making choices, and decide things in a factual manner, rather than simply based on opinions and anecdotes.

For example, at my previous workplace — a leading data management consultancy — business decisions had to be backed up with data evidence, and projects were prioritized based on data about how much impact they would have. That approach was pivotal: we went into every piece of work far better informed.

Creating a data-driven organization requires two foundations:

  • A strong data culture — Unsurprisingly, the overriding foundation for a data-driven organization is a strong data culture across the company, where employees make and justify decisions based on data. To do this successfully requires staff to have access to the relevant data (the right permissioning structures, access to golden sources of truth), the tools (data engineering, BI, visualization and insight-sharing tools) and the training necessary to unearth insight. 
  • “Golden Sources of Truth” — The other foundation is creating and maintaining golden sources of truth, which all values and figures get reported from. This is vital for ensuring consistency in results, which builds trust in the data being shown to stakeholders, and is the first key step in enabling data-driven decision-making.
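To make the “golden source of truth” idea concrete, here is a minimal illustrative sketch in Python/pandas. The table path, column names and metric definition are hypothetical assumptions for the example, not taken from any particular company; the point is that one canonical function defines the metric, and every report calls it rather than re-deriving the figure.

```python
import pandas as pd

# Hypothetical golden source: one agreed-upon table and one agreed-upon metric definition.
ORDERS_GOLDEN_PATH = "warehouse/orders_golden.parquet"  # assumed canonical table location

def load_orders_golden() -> pd.DataFrame:
    """Load the single agreed orders table (the golden source of truth)."""
    return pd.read_parquet(ORDERS_GOLDEN_PATH)

def monthly_revenue(orders: pd.DataFrame) -> pd.Series:
    """Canonical definition of monthly revenue: completed orders only, net of refunds."""
    completed = orders[orders["status"] == "completed"]
    net = completed["gross_amount"] - completed["refund_amount"]
    return net.groupby(completed["order_date"].dt.to_period("M")).sum()

# Every dashboard, report or ad-hoc analysis calls monthly_revenue() instead of
# re-deriving the figure, so all stakeholders see the same number.
```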

A major factor underlying these foundations is consistent vocabulary, terminology and semantics across the organization, along with a clear message about why good data is vital for this to work — so that employees collect and store data properly rather than treating it as just another chore on their to-do list.

3. How can businesses align their data science function with high-level organizational goals?

This is pivotal to the success of a data department within any organization. There are a few steps I take within my department to ensure this happens:

  • Define critical business KPIs to target — When defining what’s important to the business, it’s vital to define how to measure/track progress on these goals through clear KPIs (think conversion metrics at a certain stage of the funnel, or monthly revenue).
  • Agree on which areas of the business the data team should focus on — Just like you define the scope for a project, it’s important to define which areas of the business or departments to focus attention on. This helps to stop the team from being stretched too thin and going in all directions. As the team grows in size and maturity, this scope can be expanded or altered accordingly.
  • Prioritize projects based on targeted KPIs — Judge the proposed impact of a project based on the KPIs you agreed with the business to improve (a simple scoring sketch follows this list). This allows you to clearly focus on the projects and workstreams that give the best and most important return.
  • Create a roadmap with the business — Merge all of the above to help create a roadmap that’s agreed on with the business. Depending on how mature the objectives are, you could agree on actual projects or, more broadly, themes that will be tackled by the data team. Make sure these are regularly revisited and updated.
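As a rough illustration of that prioritization step — the projects, KPI names, weights and effort figures below are invented for the example — you can score each proposed project against the agreed KPIs and rank by expected return per unit of effort:

```python
# Hypothetical illustration: score proposed projects against agreed business KPIs.
kpi_weights = {"conversion_uplift": 0.5, "monthly_revenue": 0.3, "hours_saved": 0.2}

projects = [
    {"name": "Checkout funnel analysis", "effort_weeks": 3,
     "impact": {"conversion_uplift": 0.8, "monthly_revenue": 0.4, "hours_saved": 0.0}},
    {"name": "Loan-application triage model", "effort_weeks": 8,
     "impact": {"conversion_uplift": 0.1, "monthly_revenue": 0.5, "hours_saved": 0.9}},
]

def priority_score(project: dict) -> float:
    """Weighted expected impact per week of effort."""
    impact = sum(kpi_weights[kpi] * value for kpi, value in project["impact"].items())
    return impact / project["effort_weeks"]

for project in sorted(projects, key=priority_score, reverse=True):
    print(f"{project['name']}: {priority_score(project):.2f}")
```

In practice the weights and impact estimates come out of the KPI discussion with the business, and the scores are a starting point for the roadmap conversation rather than a mechanical ranking.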

4. What does good look like? Measuring the success of your data science team

A fundamental part of building an effective DS team is to set out how you’re going to measure success. This is where critical business KPIs come into play! It’s always important to make sure you measure the success of the data team directly in relation to business goals. For example, this could be the number of customers gained through data science projects or time saved through automation.

You could also measure the interaction of the business with the data outputs as a measure of success. For instance, how many people are using the dashboards and reports the team has built? What decisions are being made off the back of them?

Typically, part of the project-definition process is defining success criteria. When these are hit, a project can be seen as achieving its targets; hence using these as KPIs can also be helpful.

5. “A good DS project is one that produces the best quality product in the least amount of time and continues to yield sustainable results.” Is this true?

In many aspects, this statement makes a lot of sense. However, a good data science project to me is one that produces the biggest impact on the business, in the shortest amount of time, and continues to drive business impact moving forward.

Working with various businesses, I’m always most concerned with the impact a project has, rather than the accuracy, quality or performance of the model in a project.

I’d also like to caveat that with the fact that fastest is not always best. Taking slightly longer with a project to future-proof or productionize more efficiently can pay off more in the longer term.

6. What questions should I ask before starting a successful DS project? 

As companies collect ever more data about their customers and their product usage behaviors, a rising challenge facing many businesses is how to analyze this data to derive useful insights.

Before undertaking any project, I always start with the questions below to inform planning and objectives:

  • Why are you doing the project — i.e., what value does the project bring and how does it contribute to the wider data science team and business goals?
  • Who are the main stakeholders of the project?
  • How will the project be used?
  • What are the success criteria for this project?
  • What is the current solution to the problem?
  • Is there a simple and effective solution to the problem that can be performed quickly?
  • Have you made an effort to involve the right people with enough notice and information?
  • How will you make sure that the project can be easily understood and handed over to someone else?
  • How will you deploy your solution?
  • How will you validate your work in production?
  • How will you gather feedback for the solution once implemented?

7. Businesses often involve ever-changing teams and projects that are unfamiliar with data science. Why is it important to establish a shared data science vocabulary?

I cannot overstate the importance of this! When I work with startups, one of my first tasks is aligning on terminology, but it should be established for any team for the following reasons:

  • Develop understanding — Often this will be a two-way process, helping me to better understand the business, the terminology used within and how certain metrics are defined. On the flip side, it allows me to clarify and explain to businesses key data science terms and their importance, and educate founders and their teams on how to view and interpret them.
  • Help understand and measure key metrics — A common vocabulary is key to helping define metrics and KPIs more quickly and is vital in helping the business understand and appreciate the performance of the models built.
  • Enable transparency — A lot of companies and teams view data science as a “black box” environment, so creating a shared vocabulary that everyone understands helps teams appreciate and understand how data science works, building up trust and credibility in the whole process.

8. Do you have a typical workflow you’d recommend for teams to use when approaching data science projects?

A well-defined workflow for data science applications is a great way to ensure that various teams in the organization remain in sync, which helps to avoid potential delays, financial loss, and especially projects going sideways without conclusive success or failure. 

There are several suggested workflows currently in circulation, many of them building on existing frameworks from other data fields, such as data mining. There is no one-size-fits-all solution — the components often depend on the company and team objectives — but in my experience there are certain steps that should be ubiquitous across all data science teams, accompanied by common approaches. These include:

  • Understand — Develop an understanding of the business problem or question, using this as an opportunity to gather requirements and define scope. Define and reach out to the stakeholders and SMEs that you need for this project.
  • Acquire — Most methodologies define this as the step to get hold of the data required.
  • Clean & explore data — This stage involves understanding what the data shows and its limits, along with cleaning the data and handling outliers, unclear business logic, etc. Typically, I’m heavily involved with the SMEs at this point, and often have to iterate between steps 1-3 for a while.
  • Model — This is where the actual analysis happens, which can be mathematical modeling, graphing analysis, ML model creation, etc.
  • Evaluate — How well do your models perform? Evaluation can take different forms depending on the business, ranging from ML model performance testing, to A/B testing uplift.
  • Deploy — Now that you’ve tested your analysis/model, place it into production such that it can be used by the business to drive decisions. This delivery can take different forms, the most common being an ML model API, dashboard, regular email, etc.
  • Debrief — As a team, communicate the results and impact, and disseminate what went well and didn’t. Use this as a teaching opportunity for members of the team who weren’t involved, and as a way to constantly fine-tune and improve processes.
  • Monitor — Build the required maintenance parts of the project. How do you update the model? How do you keep track of actions or outputs? How do you collect feedback from the business?
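A minimal skeleton of that workflow might look like the sketch below. The function names, the churn example and the stage boundaries are illustrative assumptions rather than a prescribed implementation; the value is in making each stage an explicit, reviewable step instead of an ad-hoc notebook.

```python
# Illustrative skeleton only: stage names mirror the workflow above, bodies are stubs.
def understand(brief: str) -> dict:
    """Capture the business question, scope, stakeholders and success criteria."""
    return {"question": brief, "success_criteria": "example: reduce churn by 5%"}

def acquire(requirements: dict):
    """Pull the raw data the project needs (warehouse extract, API, spreadsheets)."""

def clean_and_explore(raw):
    """Handle outliers, missing values and unclear business logic; iterate with SMEs."""

def model(data):
    """Run the analysis: statistical modeling, ML training or graph analysis."""

def evaluate(fitted):
    """Test performance offline and, where possible, measure uplift via an A/B test."""

def deploy(fitted):
    """Expose the result as an API, dashboard or scheduled report."""

def debrief_and_monitor(deployment):
    """Share results and lessons with the team, then track drift, feedback and retraining needs."""
```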

10. What are some of the ethical design challenges organizations face when building data products?

Data science and related fields of AI and machine learning are challenging assumptions upon which societies are built. The more data a business collects, the more powerful the organization is relative to the individuals. As a result, this presents a number of ethical challenges to be aware of when building data products, which include:

  • Correct data usage & privacy — This requires ensuring that data is not only fairly collected, but fairly used.
  • Interconnectedness of data — A good example of this is travel data, which not only discloses travel patterns but potentially housing and work locations.
  • Dynamic nature of data — Data evolves and accumulates over time, which means it could in the future enable discoveries that were neither permitted nor designed for when it was originally collected.
  • Discriminatory bias — Models or products trained could inadvertently discriminate against a set or group of people, based on the data it is trained on.
  • Limited context — There may be no limits of space, time or social context on the scope of the data. For instance, the data may be used regardless of where, when and for what purpose it was initially collected.
  • Decision transparency — This is linked to discriminatory bias: you should design a process where you can trace why outcomes were reached and how the model makes its decisions.

For further reading, it’s worth checking out Google’s numerous blogs on fairness.

11. Is it ever permissible to collect personally-identifiable data about people?

This really depends on the use case, but the majority of the time, no. Data for insights is only useful in sensible aggregation, not at a personal level. Usually a middle ground is reached where only the PII agreed to be useful (such as address) is collected, and nothing more.

12. How should I manage the tradeoff between democratizing access to all data (for insights) and securing trust with customers by limiting access to their personal (sensitive) information?

First and foremost, you should store the sensitive data securely and separately, and limit access to it through correct permissioning and request processes. The remaining informative data can be open, with identifying fields anonymized (using a random user_id, for example). You should also impose transparency about what the data is being used for, ensuring it is only used for the reasons stated by stakeholders or the business.
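As a hedged illustration of that separation — the column names and the in-memory example are assumptions for the sketch, and a real setup would use an encrypted store and persistent keys — the idea is to split a raw table into a locked-down PII store and an open analytics table keyed on a random identifier:

```python
import uuid
import pandas as pd

def split_pii(raw: pd.DataFrame, pii_columns: list) -> tuple:
    """Replace direct identifiers with a random user_id and separate out the PII."""
    raw = raw.copy()
    raw["user_id"] = [str(uuid.uuid4()) for _ in range(len(raw))]
    pii_store = raw[["user_id", *pii_columns]]    # restricted access, encrypted at rest
    analytics = raw.drop(columns=pii_columns)     # open to analysts for insight work
    return pii_store, analytics

# "email" and "full_name" stay in the restricted store; behavioral columns remain
# available for aggregated insight, keyed only by the random user_id.
pii_store, analytics = split_pii(
    pd.DataFrame({"email": ["a@example.com"], "full_name": ["Ada"], "sessions": [12]}),
    pii_columns=["email", "full_name"],
)
```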

Other things you can do include policies to limit accessibility, by setting minimum granularity on dashboards, for example. You can revisit these policies regularly as the business grows.

13. What considerations are important when scaling a data science function?

Scaling a data science team effectively is more than just hiring great people. In my experience, there are multiple areas you need to consider and perhaps alter, including:

  • How to increase impact along with bandwidth — Some teams judge size as a measure of success, rather than impact on the business. A successfully scaled data science team is one that can take on more projects while delivering deeper insights on each one. What will more people allow you to tackle? Are there any workstreams that can now be unlocked?
  • Having the right ability & skills mix within the team — As you scale, the distribution of skills required will change. How much engineering ability do you need versus hard statistics? How do you structure the team, reporting lines and management? Are there skills you previously haven’t had in the team, and how do you embed them?
  • Infrastructure & tooling — Do the tools that you use scale appropriately? Does your central codebase fit a larger team-working style? What collaboration tools do you bring in?
  • Working style & process — What processes do you introduce/remove? Do you change the structure of standups, retros, etc?
  • Maintaining team culture — As the old saying goes “people quit their boss, not their job.” How do you develop and maintain a culture across the team? How do you make sure that it doesn’t get imbalanced as you grow?
  • Efficient onboarding — First impressions matter. How do you bring in team members efficiently and effectively, so that it doesn’t hinder your existing team too much but gets new joiners making an impact as quickly as possible?
  • Documentation — This is vital. How do you adapt your documentation so that the whole team has quick access to the knowledge they need? This is especially important when multiple projects are running at once, to avoid duplication of work and to ensure efficient sharing of ideas.
  • Appropriate data access, storage & permissioning — These really depend on your business, but some questions to think about include: Do you democratize data for everyone? Do you split people into data streams? Do your data storage solutions change?
  • Collaboration & cross working — Do you change the way the team works? Do you assign different project sizes? How do you ensure efficient collaboration?
  • Mentoring, development & knowledge sharing — A growing team needs to be a developing team. As teams grow, people become more specialized. How do you share knowledge across the team? How do you ensure junior members of staff are upskilled? And how do you train your more senior members? How do you allocate individual contributor paths and management paths?

14. When building a data science team, what are the most important skills and behavioral traits to consider?

When thinking about building a team, it’s vitally important to consider the overall skillset of the team, rather than simply what each member brings individually. There are multiple methods and approaches you can use to define what the team needs to look like, but that’s a whole other guide! So what common skills and traits do I look for in any team member?

  • Passion & hunger to learn and improve — A good data scientist is continuously looking to improve, especially in an area where ideas and techniques develop rapidly.
  • Communication skills — Being able to communicate clearly within a team and to stakeholders is a core skill for any data scientist, whether it’s gathering requirements well or effectively explaining the results and methodology of a project they’re working on.
  • Problem-solving mindset — Ultimately, data science teams solve business problems through data, so you need people with an innate ability to solve problems: breaking them down into smaller chunks, defining them clearly, and assessing the different solutions to arrive at the most efficient approach.
  • Adaptability — Problems change, teams change; it’s important to have an adaptable skillset and approach within the team, to flex the team along with the changing requirements of the business and the ever-evolving technology world.
  • Team working — An obvious skill, but you need your team to be able to communicate and work well with each other.

Some others to consider also include:

  • Programming skills
  • Statistics, maths & probability
  • Curiosity
  • Machine Learning
  • Entrepreneurial mindset
  • Data engineering
  • Data visualization
  • Analytical mindset
  • Critical thinking

15. When recruiting data scientists, how can I assess core competencies like organizational fit, technical depth, and communication skills?

Organizational fit

Especially in a smaller business, you will spend a large amount of time with any new hire, so it’s important to understand whether that individual will fit in with the rest of the team, and also whether they will enjoy working there. I usually assess this in the form of two chats — one at the start of the recruiting process and one at the end.

The reason for splitting this into two is that I want to see how the candidate behaves around new people, and then how they perform in front of someone they have become more comfortable with. Does their attitude change? Now that they are more at ease at the end of the process, it’s a chance to see whether they are naturally more introverted or extroverted. Does their professionalism change?

My questions also revolve around previous experience — how did they act with previous colleagues? What do they say about previous employers? What did they enjoy? What did they not enjoy?

I also use this as an opportunity to understand more about their aspirations — where do they want to be? What do they want to develop? What do they look for in a role?

For culture fit, I try to involve at least one other member of the team to see how they get on. An important point here is that you need to find someone right for the team: an introvert in an extroverted team won’t work well, and vice versa.

Technical depth

Typically, I’ll split this into two parts:

  1. Take-home task/case study — I set up a take-home technical exercise in the form of a mini-project. This will usually be a real-life question or problem we recently faced in the business, and always time-boxed, so they’d need to complete it within 4-6 hours.

Here, I’m looking at how they approach a problem. A time-limited exercise means they cannot create the most complex solution, so they will have to make decisions about what to simplify. How do they assess these trade-offs? How do they communicate them? Do they identify and communicate caveats? How do they link the problem to the business? Do they try to understand the impact of the outcomes?

If I need to drill further into technical ability, I use this as an opportunity to discuss what they would have done if they had more time. What do they know about a specific topic? How in-depth is their knowledge?

  2. Project deep dive — For this, I ask the candidate to take me through a project they’ve worked on. How do they describe the problem? Do they try to describe the business impact? How clearly do they walk me through their approach and findings? This should be in a skill or topic they are very comfortable with, so I can dig deep to understand how skilled they are.

Communication skills

I am assessing this throughout the whole interview process, especially through the take-home task stage. How do they present their work? What medium do they use? Do they cover all aspects of a project or a problem? Can they describe complex concepts clearly? In a non-technical way? Do they listen intently to my questions? Do they take time to think about an answer? Do they try to clarify questions?

I usually also reserve a few questions about how they got on with their previous teams and presentations: How did they build rapport with the business? How much contact did they have? I’ll ask them to talk me through a good presentation they gave.

Another aspect to pay close attention to is cues in their emails. How are they worded? Short? Long? Full of grammar/spelling mistakes? How formal?

16. As retaining data science talent becomes harder than ever in this competitive talent market, how can businesses help their data scientists navigate, grow and develop their careers?

This is a complex one, and will vary massively from one individual to the next, but managers still have a huge role to play in keeping staff happy. This is especially important in an area like data science, where employee churn is high, and roles are always available for superstar individuals. From my experience, there are a few areas I think about in terms of team retention:

  • Motivation — What motivates them? Money? Title? Interesting work? Work-life balance? These may be massive generalizations, and people usually want a mix of all four, but it’s about identifying those factors and knowing how to give them to your talent.
  • Development — Have an honest, regular chat with team members about their development. What do they want from their career? How can they get there? How can you help as a manager? Do they want to develop more in a particular coding area? How can they continually develop their skillset?

    Data science is a fast-moving field, and many data scientists feel “left behind” at work if they are not continuously developing and learning. Set aside regular time for the team to discuss and pursue development opportunities; it can be as simple as dedicating a slot every Friday for members to pursue something extracurricular.
  • Training — As mentioned in the growth opportunities above, provide the right training and tools to take those opportunities. Are they weak at presentations? Spend some time with them, getting them to present to you or team members. A weakness on a specific topic? Get them a course, or a stronger team member to mentor them.

    One critical thing I have experienced is that a lot of teams have training budgets to allow for courses but do not set aside time for the team members to train in those learned skills. Allow your team time to hone these skills, in addition to paying for attending courses.
  • Feedback — It’s important to give members constructive feedback so they can improve, but it’s about understanding how each person reacts to feedback and the best mechanism for them. Do they prefer a quick chat? Written feedback so they can digest it over time? A soft approach, or a firm style?

    Also, feedback is a two-way street. Allow your team to give you feedback too, so they can tell you how best to manage them and get the best out of them. The one point I never change, however, is where I give this feedback: it’s always in private, and it’s always constructive.
  • Praise & value — If someone has done well, shout about it! Let that team member know how well they have done, and make sure to do it in front of everyone. Make sure they are shown they are valued regularly. How often you need to do this, and in what format, varies from person to person, but you should do it regardless of the individual.
  • Build mutual trust — Make sure they can talk to you openly and honestly, and show them that you trust them. Give them honest advice, and allow them to see that you are there for them when they need advice.
  • Show growth opportunities — Make sure you give your team opportunities for growth and reward; don’t withhold promotions. Get them presenting in front of senior members, allow them to show independence, bring them to interviews, let them help define processes, give them management opportunities if that’s what they want to do.

Investing in a powerful data engine

As data science becomes an increasingly integral part of any business, navigating the evolving complexities of creating a powerful data engine has never been harder. Yet, shining a light on the common challenges faced by many firms shows that “good data science” requires a laser-sharp focus on fundamental data principles and ethics, and building a data-driven culture. Those businesses willing to invest the time and resources to become a truly “data-driven” organization will be positioning themselves for success in the years ahead. 

Ali Kokaz is a data scientist at Founders Factory.
