7 Pieces of Advice for Young S&C Coaches wanting to work in Football

I haven’t written a blog post for a long time, but I have decided to make it more of a regular thing and to discuss some of the questions I get asked most often.

I thought a good way to start these blogs would be to give what I feel are the seven best bits of advice I have ever been given. I have been really fortunate to have the support of some excellent individuals from within the Sports Science/S&C profession in football, and that alone has helped me forge not only my own philosophy but also some pointers that have stayed with me throughout my career. Here are some of those tips (in no particular order!).

1) Believe in your own ability to succeed


Football can be a high-pressure environment, especially in S&C/Sports Science. Performance is everything, and lots of individuals will have opinions on how things should be done, and probably on how you do things. If you don’t have belief in your own ability, then it will be hard for anyone else to believe in you. This goes for everything from sending an email in the hope of getting an internship or work experience to coaching a group of senior players. Have belief in yourself, and keep things real. You need to create opportunity for yourself in this game, so learn all you can about coaching – do a Level 1 coaching badge if need be – it will help you become confident with groups!


2) Not all that glitters is gold.


If your sole aim is to work for a big-four club and be the guy warming up the players during a Champions League game at the Nou Camp, then so be it (nothing like being ambitious!!). However, don’t expect to walk into a club with a bit of paper in your hand and find yourself working with players in the San Siro. Be prepared to put in everything you have to earn the right in the game. You will need to work long hours and do many thankless tasks to earn those golden moments, and they are definitely to be savoured! In the beginning you will probably find yourself doing an awful lot of work for free, but you reap what you sow. Professional football is a very small community: do the best job you can, and be willing to give your all, from making shakes to sweeping the gym, whatever it takes – and who knows, maybe one day you will make your BT Sport Champions League debut!


3) Knowledge is power, the use of knowledge is powerful.


There are tonnes of emerging research into virtually everything in football, from nutrition to small-sided games and sleep patterns. Applying most of this theory can be challenging, and it can often have the opposite effect if the application isn’t suitable for your players. Trying to put novice players through advanced techniques is suicide. Learn the basics, and apply them. By all means use the research to guide your decisions, but by trying to re-invent the wheel you may end up doing more harm than good, or simply wasting your time and the players’ time! Contact time can be low in the game, so bang for buck is the key!


4) Be a sponge.


Get to conferences, get to know the people who are applying the science in the game, listen to podcasts, get yourself known and be a sponge. Absorb what you can from whoever you can. The internet and social media have opened up access to those within the game who are applying knowledge every day. Read the books, and push yourself to learn your trade every day. Make it your craft, and be relentless in your own personal development.


5) Enjoy the process, be realistic, and learn from your mistakes.


Working with top-level players can be extremely rewarding, but it can involve long hours, and lots of situations will arise in which people require your expertise. Don’t be fooled into thinking that a few hours a day is all it takes and then you are home with your feet up watching The Chase with a cuppa. You might get the odd day like that, but those days can be rare. Manage your time well, though, as your health and wellbeing is just as important as that of the players and staff you work with. Be realistic. Walking into a top-level job with little or no experience is highly unlikely, but if you put the hours in and get as much experience as possible, whether with your local club or as an intern, the process needs to be enjoyable. You will make mistakes, but that’s the best way to learn. Keep pushing yourself to attain high standards and eventually you will get to where you want to be in the game!!

6) We coach humans, not spreadsheets.

Our data drives the decisions we make, in some respects, and we collect it to help us make informed decisions that assist the head coaches & management. However, we still need to be able to coach people, and that is where I have seen others fall down or lose the players. This can be fatal. I personally love the research and academic side of sports science, and it’s a pleasure to be involved in research projects, looking for areas where we can improve players through our own data collection and that of others in the research field – but we still have to be able to coach & communicate with our subjects. The best coaches understand this: they know the value of getting their messages across clearly and concisely while avoiding jargon – players don’t want to be confused, they just want to perform.

7) Players are human too, take time to understand human nature.

Football is a highly pressured environment with so much going on at once that it can be very intense, to say the least. Players are human beings though, and they have the same wants, needs, desires, flaws, goals, ambitions, hopes and dreams as any other person you will meet. Not everyone in life will want to be open with you about their life, or even like you – that’s just life itself – but players are human too, so do your best to understand what makes them tick (all individuals are different!), what their interests are away from football, and so on. There may be a time in your career when you have to have an awkward conversation with a player (maybe arising from your data collection!), but this can be made a lot easier when you have taken the time to understand the player – that’s what the best coaches do: work with people, not against them.


FIFA WWC: Why judge a path you haven’t walked on?

On Friday evening, before the England vs Argentina game, I saw this BBC interview with Carly Telford. Attending her third World Cup as part of the England squad, Friday night was her first World Cup start.

However, over the past ten days, I’ve also seen some negativity towards women’s football on social media, much of which goes well beyond what is acceptable. “It’s not the same game” one avid social media user tweeted me. Complaints about the use of female commentators (when a male was co-commentating). The mockery of a South Korea player hitting the side netting from a corner. The memes of commentators with irons instead of microphones. Is that the society we actually live in? Or is it the work of a few individuals who prefer the nonsensical validation achieved through social media likes? A recent Twitter post by a rather vocal Dutch coach was crass and uncalled for. I’ve challenged a few of these keyboard warriors, but it seems to me that it falls on deaf ears.

Having worked in women’s football, and thoroughly enjoyed every minute of it, I think the Carly Telford interview speaks volumes about the sacrifices made by the players to get to, and participate in, a tournament that we’ve all dreamed of playing in at some stage. The biggest footballing stage of all – the World Cup.

The majority of teams in the World Cup do not have full-time professional players or leagues. With the exception of England (WSL 1), the USA (NWSL), Germany and France, the majority of domestic leagues are part time. The huge dedication of the players (and staff, in many cases) committing to play for club and country opened my eyes. Most players have full-time jobs; football doesn’t pay their bills. The current prize package for winning the WSL is nothing. Clubs are given money in the form of grants from the FA. However, this is set to change in the 2019/20 season with a new sponsorship deal between Barclays and the WSL, with a prize fund becoming available for distribution across the league.

The women’s game in England has been professionalised over the last couple of years and still has a long, long way to go. Even players classed as full time may need additional income to live. Furthermore, in 2018, injury pay was reduced for women players in the WSL.

However, attendances in women’s football across Europe have increased, with over 60,000 fans attending a recent Atletico vs Barcelona women’s game in Madrid. In England, 2.2 million people tuned in to watch the FA Cup Final at Wembley between Manchester City and West Ham – a small increase on the 2018 final between Arsenal and Chelsea. An incredible 6.1 million people viewed the recent England vs Scotland game on June 9th.

Thus, the video sums up a lot for me: players who have given up so much to live their dreams. Sacrifices beyond belief. I feel incredibly lucky to have worked (albeit on a very small scale!) with some of the players at this World Cup. I’ve seen what they have had to do, what they have had to give up, and in some cases how they have struggled to represent their country on the biggest stage of all. These players have a desire to be successful despite many of the challenges, social and economic, that they may face daily. I’ve seen it first hand. It makes me proud to see these players achieve their dreams.

I’d urge anyone who feels they need to be negative about the women’s World Cup to ask themselves what they would say if their daughter or niece wanted to be a professional footballer.

Just stop and think for one second. Would you deny someone the opportunity to achieve their dream through your own bias or lack of regard for women’s football?

I’m pretty sure you wouldn’t.


Player Monitoring & the Four Pillars of Confidence

The idea for this blog came from a presentation at the Kitman Labs Performance Summit in London (March, 2019) by my good friend, Dr. Robin Thorpe. Robin, a sports scientist, is an expert in recovery and regeneration physiology having spent nearly 10 years at Manchester United, before moving to Altis as Director of Performance and Innovation.

As Sport Scientists we collect data, we analyse it and we feedback to coaches. However simple that may seem, it can be easy to fall into the ‘Data Collection for Data’s Sake’ trap. If you don’t know why you are collecting it, then what are you collecting it for? I’m not ashamed to admit that I have previously fallen into that trap (yes, it can be trial and error!). In our support for players and coaches, it is important that the information fed back is accurate. It’s quite easy to make an assumption based on our data collection which may be inaccurate. In turn, this leads to incorrect inferences being made that may effectively reduce the training time and prescription for our players. We should never lose sight that our role should involve the use of scientific principles to improve and enhance our players, and not wrap them in cotton wool. We must look at maximising our players training and playing time. The greater the squad availability, the higher the probability of success (Carling et al., 2015).

In Dr. Thorpe’s presentation he spoke about the ‘Four Pillars of Confidence’ (Reliability, Validity, Sensitivity, Usability) when considering data metrics. A recent study by Starling and Lambert (2018) reported that, of the 55 coaches and support staff they interviewed, 96% viewed monitoring of both training load and the training load response as important. However, it was also noted that none of the protocols used to monitor players was simultaneously cost-effective, time-efficient and non-invasive to players. While this may be an issue in some respects, a further issue arises if the feedback to coaches is inaccurate or incorrect, in the worst case causing us to lose training time.

In sport science, statistics are probably one of our biggest assets, and one of the most important tools we have when making decisions based on data (Buchheit, 2016). However, if our statistical skills are less than proficient, we may end up making incorrect decisions, or sending confusing messages to our key stakeholders. This potentially gives the practitioner, coach and player little confidence in the data, the data collection process and/or sport science itself.

Whatever we monitor, we must ensure that the tools we use are repeatable (Reliability), measure what they are supposed to measure (Validity), sensitive enough to detect meaningful change in the player data (Sensitivity), and ultimately useful (Usability) for the coaches and/or players (depending on who you are feeding back to!).

Therefore, the aim of this blog is to provide an overview of each of the ‘Four Pillars of Confidence’ suggested by Dr. Thorpe, and provide example statistical methods that may allow us to make inferences based on our data to support coaches as opposed to snapshot decisions based on small data.


Reliability

Reliability may be considered one of the most important of the Four Pillars, as it directly affects the accuracy of our athlete monitoring (Atkinson and Nevill, 1998). For example, if we are to measure daily wellness in our players via a questionnaire, it is important to know what change on the scale would signify a meaningful change (McGuigan, 2017).

Whilst there are various methods to assess reliability, understanding the typical error of measurement (a method that directly measures the error within a test) allows us to calculate the variation in the monitoring tool. An effective expression of the typical error of measurement is the coefficient of variation (CV), which is expressed as a percentage. The CV gives us an indication of the spread of our data relative to the mean; thus, the lower the CV, the lower the random noise, and the higher the chance of detecting a real change in the data (Hopkins, 2000). This gives us confidence that any changes in our data are real and not down to chance, and that we are measuring what we say we are measuring! By calculating the CV [100 × (standard deviation / mean)] we can quantify the reliability of our monitoring tests, and subsequently have confidence in our reporting of data to coaches and/or players.
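As a simple illustration, here is a minimal Python sketch of the CV formula above, applied to hypothetical test-retest data (the players, jump heights and two-trial layout are invented for demonstration only):

```python
import numpy as np

# Hypothetical test-retest data: the same monitoring test performed on two
# occasions by each player (e.g., countermovement jump height in cm).
trials = np.array([
    [38.2, 37.5],
    [41.5, 42.1],
    [36.8, 37.4],
    [44.1, 43.2],
    [39.7, 40.3],
    [42.3, 41.8],
])

# CV per player using the formula above: 100 * (standard deviation / mean)
player_cv = 100 * trials.std(axis=1, ddof=1) / trials.mean(axis=1)

# The mean CV across players estimates the test's random variation ("noise");
# the lower it is, the smaller the change we can confidently call real.
print("Per-player CV (%):", np.round(player_cv, 1))
print(f"Mean CV: {player_cv.mean():.1f}%")
```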

This video by Dr. Anthony Turner (Middlesex University) gives an excellent insight into assessing the reliability of your data.

Validity

The term ‘validity’ refers to whether the monitoring tool we use does what it says it does. Does the tool we use assess what we want it to assess? As with reliability, there are various forms of validity (construct, ecological, face, content and criterion validity).

However, the types of validity of greatest importance to athlete monitoring, and for the purpose of this article, are construct and ecological validity (McGuigan, 2017). As a brief overview, construct validity refers to the extent to which a test measures what it was designed to measure (Baumgartner, 2007).

Ecological validity describes how the monitoring tool we select relates to the player’s performance and how well we can apply it in a real-world scenario (McGuigan, 2017). However, it is possible to have a tool with high reliability but little to no validity. Thus, when selecting our monitoring tools it is important that we have both high reliability and high validity. It is also important that, as practitioners, we reduce the ‘noise’ in a test and keep conditions as consistent as possible when administering any test or monitoring protocol.

Examples of test conditions that may affect the validity of our monitoring can be as simple as the number of observers, music, the preceding instructions on how to perform the test, and the volume/frequency of verbal encouragement from testers or peers (Halperin et al., 2015). To quantify the validity of a test, your measured (practical) values should be as close as possible to the true values, otherwise known as the “gold standard”; this is ‘criterion validity’. There are two parts to criterion validity: concurrent and predictive (McGuigan, 2017).

For example, if we used a correlation between a performance test and a criterion measure, we could investigate the relationship between a laboratory-based cycling time trial and a competition time trial (Currell and Jeukendrup, 2008). However, while this may seem logical, it is far more difficult to replicate the complex demands of a sport such as football in a performance test. An example of concurrent validity in football is the high correlation between the Yo-Yo Intermittent Recovery Test and high-intensity running during matches (Krustrup et al., 2003).

Predictive validity is the ability of a performance protocol to predict future performance. For example, Hawley and Noakes (1992) used a test of maximal oxygen uptake (V̇O2max) and peak power output (Wmax), and demonstrated that Wmax explained 94% of the variance in 20-km time-trial performance, whilst V̇O2max explained 82% of the variance.

Therefore, the nature of predictive validity, and its ability to deal with future performance, gives it good application in areas such as fatigue monitoring (McGuigan, 2017).

Thus, in the context of the wellness data used in this article, Thorpe et al. (2016) suggested that the validity of potential markers of fatigue (such as our wellness data) can be assessed by examining their sensitivity to changes in prescribed training load over periods of time.

[Figure: Thorpe et al. (2016)]

Sensitivity

When describing the sensitivity of a monitoring tool, we are referring to its ability to detect small but meaningful changes in performance and/or in another aspect such as fatigue. Thus, sensitivity is related to both the reliability and validity of our monitoring protocol (McGuigan, 2017). For the applied practitioner, any valid marker of fatigue needs to be sensitive to fluctuations in training load (Meeusen et al., 2013). Consequently, and for the purpose of this part of the article, the focus will be on subjective well-being measures.

Recent literature by Thorpe et al. (2015, 2017) demonstrated that self-report measures, and in particular self-perceived measures of fatigue, were sensitive to daily and short-term training load accumulation. Furthermore, a systematic review by Saw et al. (2016) suggested that subjective measures reflected changes in athlete wellbeing, appearing to be sensitive to both acute and chronic changes in training load. However, there may be challenges in collecting data and detecting change when using self-reported subjective questionnaires (compliance, familiarity, etc.). Thus, simply taking the mean of the team’s reported scores may not detect any meaningful change.

The mean is a measure of ‘central tendency’: if you take a data set and calculate the mean, it represents the centre or middle of that data set. The challenge with our monitoring is that the mean is influenced by outliers; the larger the outlier in the data, the bigger the pull on the mean. Moreover, the mean is purely descriptive, and doesn’t really allow us to make inferences about our data.

In summary, if the total (rather than the typical value) is what you are interested in, then the mean is useful. For example, if you want to know which players are covering the highest total distances, calculating the mean gives you a reference point. Remember, though, that in this example you are only interested in the top runners, so those below the mean are somewhat irrelevant to your analysis. Could you be missing vital information by looking only at a mean value?

If we take the daily wellbeing data below as an example, we can see the mean average for the group along the top of the table:

[Table: daily wellness scores (each metric scored 1-5) with group mean averages]

With each metric scored on a 1-5 scale (giving a total out of 20), the average for each metric is as follows: Soreness 3.6/5, Energy 3.8/5, Stress 3.8/5 and Sleep 3.8/5. The mean average total for the group is 14.96/20. This looks fine, and doesn’t show any abnormalities or causes for concern in our athletes – or does it? If we look closely, we can see some values (e.g., Player 8 and Player 19) are low. However, the mean average scores tell us there is no cause for concern. So, is this a true reflection of our data/athletes?

However, what we could do is express the data in a different way that is sensitive to these changes in the group’s wellbeing. The example below shows the same data, with the addition of a z-score.

[Table: the same wellness data with a z-score added for each player]

In essence, the z-score allows us to determine the number of standard deviations a data point lies away from the mean, i.e. how usual or unusual that data point is. As it is a standardised score, it allows us to make inferences based on our data (i.e., positive or negative), and thus provides more information than the raw scores alone (Turner et al., 2015).


We can calculate a simple z-score with the following equation:

Z-score = (player’s score [in this case, the total] – group mean score) / standard deviation of the group

When the raw data are converted to z-scores, the distribution will have a mean of 0 and a standard deviation of 1, with the z-scores typically ranging from roughly +3 to -3. Thus, a z-score allows the practitioner and coach to see how many standard deviations from the mean, either below (negative) or above (positive), a player’s scores are.

A further advantage of z-scores is that they can be easily charted and presented in graphs, allowing the practitioner to compare data and/or modify a session or programme if necessary (McGuigan, 2017). Whilst practitioners can set their own thresholds to determine what is significant, it has been suggested that a threshold of more than 1.5 standard deviations (in this case, a negative score) may be effective in identifying risk (Coutts and Cormack, 2014).
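To make this concrete, here is a minimal Python/pandas sketch of the calculation using hypothetical wellness totals (the players and scores are invented; the -1.5 threshold follows the suggestion above):

```python
import pandas as pd

# Hypothetical daily wellness totals (out of 20) for one squad on one day.
wellness = pd.DataFrame({
    "player": [f"Player {i}" for i in range(1, 11)],
    "total":  [16, 15, 17, 14, 15, 16, 18, 9, 15, 10],
})

# Z-score = (player's total - group mean) / group standard deviation
group_mean = wellness["total"].mean()
group_sd = wellness["total"].std(ddof=1)
wellness["z_score"] = (wellness["total"] - group_mean) / group_sd

# Flag anyone more than 1.5 standard deviations below the group mean
# (Coutts and Cormack, 2014).
wellness["flag"] = wellness["z_score"] < -1.5

print(wellness.round(2))
print("Flagged:", wellness.loc[wellness["flag"], "player"].tolist())
```

In this made-up example the group mean looks perfectly acceptable, yet the two lowest scorers fall more than 1.5 standard deviations below it and are flagged for follow-up, which is exactly the information the raw mean hides.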

The table below gives an example of a monitoring system that can be implemented at low cost to the practitioner, using statistical analysis methods that allow us to make inferences from our data (Clubb and McGuigan, 2018).

[Table: example of a low-cost monitoring system (Clubb and McGuigan, 2018)]

It is certainly worth noting that, while a z-score has been used for this article, it has been based on one day’s worth of data for demonstration purposes only. Although beyond the scope of this article, for further longitudinal analysis a modified z-score can be calculated from baseline data (e.g., preseason). The calculation is as follows (Clubb and McGuigan, 2018):

Modified z-score = (player score – baseline score) / standard deviation of baseline

Furthermore, this excellent free resource by Adam Sullivan will help you build a rolling 28-day z-score in Excel to create a daily wellness dashboard for your team.
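If you would rather script it than build it in Excel, here is a minimal pandas sketch of the same rolling idea (the data are randomly generated, and the 28-day window and minimum-period settings are assumptions for illustration only):

```python
import numpy as np
import pandas as pd

# Hypothetical daily wellness totals (out of 20) for one player over a season.
rng = np.random.default_rng(42)
dates = pd.date_range("2019-07-01", periods=120, freq="D")
wellness = pd.DataFrame({
    "date": dates,
    "total": rng.normal(15, 2, size=len(dates)).round().clip(5, 20),
})

# Rolling 28-day z-score: compare today's score with the player's own
# previous 28 days (shifted by one so today is excluded from its baseline).
baseline_mean = wellness["total"].shift(1).rolling(28, min_periods=14).mean()
baseline_sd = wellness["total"].shift(1).rolling(28, min_periods=14).std(ddof=1)
wellness["z_28d"] = (wellness["total"] - baseline_mean) / baseline_sd

# Flag days more than 1.5 standard deviations below the rolling baseline.
wellness["flag"] = wellness["z_28d"] < -1.5

print(wellness.tail())
```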

Usability

Arguably the most important pillar for the applied practitioner – how useful is this data for the coaching and playing staff? This goes back to the point at the start of this article: why collect data for data’s sake? What is important information, and what is not? The latter can be a difficult question for the applied practitioner to ask, but it’s vital that we ask it. Critical to our success as sport scientists is our ability to feed back to coaches and players; communicating our data with clarity and precision may prove challenging, and will depend on those you are working with daily.

As we gain confidence in our data, using the various statistical tools at our disposal, we must translate this information to inform practice (McCall et al., 2016). However, as sports scientists, having any kind of impact on the training programme and/or practice is often far from easy (Buchheit, 2016). Personally, I believe this comes down to the fourth and final pillar – usability.

Currently, no single marker within the literature allows us to become totally informed on an athlete’s wellbeing, and no single test performed in isolation is capable of giving us the full picture of athlete wellbeing (Starling and Lambert, 2018). Thus, it is imperative that the data we collect is meaningful and usable for coaching staff.

In the fast-paced daily environment of elite football, we must filter the data to ensure usability, and translate it for those who require it most. The key decision makers in the applied environment may have many plates to spin (technical, tactical, business, etc.) on a daily basis. Thus, more often than not, they are concerned with simple and concise answers to their questions, e.g. is this player available to train/play? (McCall et al., 2016).

As practitioners, it is our role to simplify the data for our key stakeholders (players, coaches, physios, medical staff). We must be able to report, with confidence, that the inferences made from our data are Reliable, Valid, Sensitive (to change) and Usable. Therefore, our ability to translate and communicate the data with practical meaning is absolutely paramount (McCall et al., 2016).

Below is a chart I have created for deciding what we should look for in a monitoring tool when feeding back to our key stakeholders.

[Chart: what to look for in a monitoring tool when feeding back to key stakeholders]

Adapted from Starling and Lambert (2018) and Buchheit (2016).

Further recommended resources:

For free downloads and creating athlete monitoring tools:

Adam Sullivan

Excel Tricks for Sports

Statistics:

Sportsci.org

Special thanks for the help, support and guidance during the writing of this article:

Dr. Jamie Pugh, Postdoctoral Researcher, LJMU (J.Pugh@ljmu.ac.uk)

References

Atkinson, G. and Nevill, A. (1998). Statistical Methods For Assessing Measurement Error (Reliability) in Variables Relevant to Sports Medicine. Sports Medicine, 26(4), pp.217-238.

Baumgartner, T. (2007). Measurement for evaluation in physical education and exercise science. Boston: McGraw-Hill.

Carling, C., Le Gall, F., McCall, A., Nédélec, M. and Dupont, G. (2015). Squad management, injury and match performance in a professional soccer team over a championship-winning season. European Journal of Sport Science, 15(7), pp.573-582.

Clubb, J. and McGuigan, M. (2018). Developing Cost-Effective, Evidence-Based Load Monitoring Systems in Strength and Conditioning Practice. Strength and Conditioning Journal, 40(6), pp.75-81.

Coutts, A. and Cormack, S. (2014). High-Performance Training for Sports, pp.85-96.

Currell, K. and Jeukendrup, A. (2008). Validity, Reliability and Sensitivity of Measures of Sporting Performance. Sports Medicine, 38(4), pp.297-316.

Halperin, I., Pyne, D. and Martin, D. (2015). Threats to Internal Validity in Exercise Science: A Review of Overlooked Confounding Variables. International Journal of Sports Physiology and Performance, 10(7), pp.823-829.

Hawley, J. and Noakes, T. (1992). Peak power output predicts maximal oxygen uptake and performance time in trained cyclists. European Journal of Applied Physiology and Occupational Physiology, 65(1), pp.79-83.

Krustrup, P., Mohr, M., Amstrup, T., Rysgaard, T., Johansen, J., Steensberg, A., Pedersen, P. and Bangsbo, J. (2003). The Yo-Yo Intermittent Recovery Test: Physiological Response, Reliability, and Validity. Medicine & Science in Sports & Exercise, 35(4), pp.697-705.

Buchheit, M. (2016). Chasing the 0.2. [online] Available at: https://martin-buchheit.net/2016/05/16/chasing-the-0-2/ [Accessed 6 May 2019].

McCall, A., Davison, M., Carling, C., Buckthorpe, M., Coutts, A. and Dupont, G. (2016). Can off-field ‘brains’ provide a competitive advantage in professional football?. British Journal of Sports Medicine, 50(12), pp.710-712.

Meeusen, R., Duclos, M., Foster, C., Fry, A., Gleeson, M., Nieman, D., Raglin, J., Rietjens, G., Steinacker, J. and Urhausen, A. (2013). Prevention, diagnosis and treatment of the overtraining syndrome: Joint consensus statement of the European College of Sport Science (ECSS) and the American College of Sports Medicine (ACSM). European Journal of Sport Science, 13(1), pp.1-24.

McGuigan, M. (2017). Monitoring Training and Performance in Athletes. Champaign, IL: Human Kinetics.

Starling, L. and Lambert, M. (2018). Monitoring Rugby Players for Fitness and Fatigue: What Do Coaches Want?. International Journal of Sports Physiology and Performance, 13(6), pp.777-782.

Turner, A., Brazier, J., Bishop, C., Chavda, S., Cree, J. and Read, P. (2015). Data Analysis for Strength and Conditioning Coaches. Strength and Conditioning Journal, 37(1), pp.76-83.

Wallace, L., Slattery, K., Impellizzeri, F. and Coutts, A. (2014). Establishing the Criterion Validity and Reliability of Common Methods for Quantifying Training Load. Journal of Strength and Conditioning Research, 28(8), pp.2330-2337.

 

Cost-free cultures.

Culture, especially in high-performance sport, has become a buzzword. From books to symposiums, everybody wants to understand what makes the elite successful. There is nothing wrong with that whatsoever. It’s good to want to learn the why, but often context is bypassed. Suddenly, sheds are swept and a no-d*&$head policy implemented. That’s great – if you are an All Black steeped in exceptional tradition and cultural values that span generations of rugby.

However, in an industry predominantly based on our relationships with players and coaches, there are simple, daily practices that should become the norm, as opposed to token gestures, when building a culture. A strong culture must be built on taking care daily of what truly matters – people. Taking time to talk about family, opening doors, having a coffee with a colleague with no work-related discussion. Personally, I think the more we understand about someone away from work, the clearer our relationships become.

In his brilliant book, The Culture Code, author Daniel Coyle identifies 3 skills at the heart of how humans function within successful teams:

  1. Build safety – a sense of belonging creates a comfortable working environment.
  2. Share vulnerability – no one needs to be perfect.
  3. Establish purpose through a common goal and define a clear pathway to get there.

Now, while opening a door, saying good morning to everybody, or taking the time to learn about others may seem like small token gestures, or simply be assumed to be the norm, there is a neural hardwiring that gives us a sense of belonging. These small, and perhaps to some trivial, interactions are part of that hardwiring. We want to belong. We want to feel like we belong, and when it comes to belonging our brains either feel it or they don’t.

For example, a recent study by Roghanizad and Bohns (2017) found that you are 34 times more likely to receive a positive response to a request made in person than via email. Small interactions matter on a large scale! In face-to-face situations our hardwired brains have far more cues to deal with. Are we safe? Do I belong? Are we sharing risk? Are we working towards the same common goal for the benefit of the team? These questions are impossible to answer via WhatsApp / text / email. We need to communicate in person.

In other words, culture is not a set of traits — it’s a signaling contest. Improve your signals, improve your culture.

Daniel Coyle

Coyle uses the example of San Antonio Spurs coach Gregg Popovich as a master of building the belonging culture within his team. Popovich mastered the concept of using three belonging cues (“you are part of this group,” “this group is special; we have high standards,” and “I believe you can reach those standards”) to create his highly successful basketball team.

After a defeat to the Oklahoma City Thunder, Coyle observed something extraordinary at the next Spurs practice session when the legendary coach arrived:

“Popovich wasn’t yelling now. He was walking around, wearing a misshapen T-shirt from Jordan’s Snack Bar in Ellsworth, Maine, and shorts a couple sizes too big. His hair was spare and frizzy, and he was carrying a paper plate with fruit and a plastic fork, his face set in a lopsided grin. He looked less like a commanding general than a friendly uncle at a picnic. Then he set down his plate and began to move around the gym, talking to players. He touched them on the elbow, the shoulder, the arm. He chatted in several languages. (The Spurs include players from five countries.) He laughed. His eyes were bright, knowing, active.

When Popovich wanted to connect with a player, he moved in tight enough that their noses nearly touched. As warm-ups continued, he kept roving, connecting. A former player walked up, and Popovich beamed, his face lighting up in a toothy grin. They talked for five minutes, catching up on life, kids and teammates. “Love you, brother,” Popovich said as they parted”

Coyle continued that, shortly after practice, the Spurs team assembled in the video analysis room expecting to go over the defeat to their arch rivals, Oklahoma. But this wasn’t the case. Popovich showed the team a documentary about the 50th anniversary of the Voting Rights Act. Afterwards, the coach asked the team to discuss the film, with questions intended to draw a connection between the historical events and his individual players. What did you think of it? What would you have done in that situation?

Gregg Popovich continually sent out the three belonging cues (“you are part of this group,” “this group is special; we have high standards,” and “I believe you can reach those standards”) to his players. These actions from the legendary coach draw parallels with a study by a team of psychologists. The researchers had middle-school teachers assign an essay-writing assignment to their students, after which the students were given different types of teacher feedback. The researchers were amazed to discover that one particular type of feedback improved student effort and performance so much that they deemed it “magical.”

What was the magical feedback?

Just 19 words:

“I’m giving you these comments because I have very high expectations and I know that you can reach them”

That’s it. Just 19 words. But they’re powerful because they are not really feedback. They’re a signal that creates something more powerful: a sense of belonging and connection.

Looking closer, the phrase contains several distinct signals that mirror those of Gregg Popovich: you are part of this group; this group is special, we have higher standards here; I believe you can reach those standards.

Build a culture of trust, respect and hard work, but don’t lose the humanistic element of working with people. Ultimately, the small daily interactions we have amount to big things. We do not need to start at the top of the mountain to begin to build culture; instead we must focus on the small pebbles and continually build.

“Culture isn’t magic. It’s about tuning into a series of small moments that send powerful signals: You are safe. We share risk here. We are headed this direction.”

References

 

Roghanizad, M. and Bohns, V. (2017). Ask in person: You’re less persuasive than you think over email. Journal of Experimental Social Psychology.

Coyle, D. (2018). The Culture Code: The Secrets of Highly Successful Groups.

Soccerology Podcast #2 – Ross Bennett (Head of Academy Sports Science, QPR)

The second Soccerology podcast is now live!

In this episode we discuss:

– Ross’s roles with Chelsea FC, Aspire, QPR and London GAA.
– His philosophy in Sports Science and how he has developed through experience.
– GPS, RPE, Heart Rate and monitoring on a budget.
– Working with full-time and part-time athletes.
– The external challenges youth players may face during their development.
– Translating the science for youth development coaches.
– The challenges of part-time players and full-time workers.
– People first, athletes second.

 

Soccerology Podcast #1 featuring @Fergus_Connolly

Welcome to the first episode of the Soccerology Podcast with Dr. Fergus Connolly.

Dr. Fergus Connolly is a performance consultant who has worked with a variety of sports teams including Bolton Wanderers, Liverpool FC, the San Francisco 49ers and the University of Michigan.

In this episode Fergus discusses:

– His recent experiences in sports science across multiple team sports

– Fergus’s books 59 Lessons and Game Changer.

– Advice for young sport scientists and coaches.

– Learning from Prof. Vitor Frade, Brendan Rodgers, Sam Allardyce, Bobby Robson, Ruud Van Nistelrooy, Charlie Francis and many others.

– Buy-in and the art and science of coaching.

– Individualization in team sports.

– The Policeman, The Drunk and The Priest…

To listen on iTunes, click here.

BASES 2018 Presentation

A copy of my presentation, ‘The effect of sleep on high speed running during a weekly micro-cycle in elite female soccer players’, at the BASES Student Conference, Newcastle (2018).

 
