This post was originally published on LinkedIn on February 23, 2016
This is part 2/2 of our segment on social media screens. Last week, Ben Mones and I discussed the environment in which these social checks have proliferated. Now, we will reveal a few different ways to use this information responsibly.
One reason it is so important to protect the safety, rights, and privacy of the individual when looking at social media is that the data can be highly subjective. The data found on social media represents patterns of behavior that, when strung together, create a holistic online identity of a person. In some cases, however, the reviewer may not see the full picture. Someone who tweets crude things, for example, might be a comedian whose content is part of an act. The information needs both context and balance.
Here are our recommendations for best practices on making social media data come alive, whether within a business or government context:
INVOLVE THE CANDIDATE. Let people know that their public-facing social media profiles will be examined, with proper disclosure and authorization. Yes, that means no more cyberstalking, no more incognito searches – let’s be honest and upfront.
RESEARCH WITH COMPASSION. Don’t treat social media as automatic grounds for termination, or as a definitive reason not to hire someone. Think of it as additional, empirical data available about a person, and use that data to start a conversation with the candidate in question. The individual will either have a reasonable explanation for what’s been found, or they won’t. Either way, people should always have the option to present contrasting evidence.
TAKE A STANDARDIZED APPROACH. Consistent practices ensure that everyone is examined under the same criteria, and that everyone has a fair chance regardless of race, age, creed, station, etc. Further, the point of using social signal data is to understand consistent patterns of behavior in order to make smarter, better, and faster decisions. Ideally, companies will come to rely on this data less as they develop an intuitive sense of what a “good” candidate looks like online. Two rapidly evolving technologies can support this intuition: artificial intelligence and machine learning.
GET EXECUTIVE BUY-IN. If an organization lets individuals manually research patterns of behavior, the results will always be subject to the prejudices and proclivities of a single person. That’s problematic. So take a top-down approach: have leaders agree on what to look for and how to act on it, then crystallize that agreement into internal policy.
DON’T RELY ON SOCIAL DATA ALONE. There’s no silver bullet when it comes to defining a person. We’re the most magical and diverse creatures walking this earth, and we know it. Use social media data as a companion, not the ultimate arbiter of who a person is.
RECOGNIZE THE LIMITS OF DATA ANALYSIS AND THE DEEP COMPLEXITY OF HUMAN BEHAVIOR. Even a perfect model for picking up hints of terrorism in past behavior won't eliminate all future risks, because humans change behavior as often as they change clothes. Even the best screen can't keep every mosquito from getting through. The best that can be hoped for is to bring risks down to acceptable and largely predictable levels.
FINALLY, COMPLIANCE. If you don’t know the laws, find someone who does. If you’re doing social checks, a variety of important laws are in place to ensure that all participating parties are protected.
Mike Edelhart is the managing partner of Social Starts, one of the most active moment-of-inception venture funds in the US. A pioneering media and Internet startup executive, Mike became widely known in tech circles as the original Executive Editor of PC Magazine.
Ben Mones is the CEO and Co-Founder of Fama Technologies Inc., which offers cloud-based software that utilizes the publicly available, online record to help businesses hire the right people, with a major focus on trust and safety.