2015-05-09, braindump (uncorrected)

Here are some legitimate reasons persons and organizations may have to hide their identity, geolocation and/or behavior from certain others at certain times:

* anyone
  - prevent, mitigate or redress abusive profiling-based decision-making in the public sector (govt) and the private sector (data brokers, ISPs, advertisers)
* journalists
  - protect sources (for the sake of both source and journalist)
  - hide current interests (prevent interference/countering)
* whistleblowers
  - protect job
  - protect family
  - protect freedom of action
  - not be imprisoned
  - not be intimidated, threatened, harassed, abused
* dissidents, activists
  - same as above
* battered wives, victims of sexual crime, persons fearing honor killings
  - protect body & mind from (further) injury
  - repair freedom of action
  - not be intimidated, threatened, harassed, abused
* government
  - protect sources
  - prevent targets of investigation (military, police, etc.) from knowing they are currently being investigated (sting ops, OSINT gathering)
* companies
  - prevent competitors from learning plans, budgets, interests and activities
  - learn the plans, budgets, interests and activities of competitors without the competitors knowing
* parents
  - prevent bad actors from learning about the existence, location and schedules of their children

Depending on the adversary (objectives, capabilities, intent), protection may need to take into account internet/phone content and metadata, geolocation, and (other) emissions from software and hardware, in conjunction with physical measures (disguise, change of address, change of behavior, etc.).

If a person has nothing to hide, it follows that they agree that we access their entire medical history (including psychological), financial records (salary, debts, inheritance), school and work performance, consumer behavior, web browsing behavior, text messages, and phone and email communications in real time, and sell it all for our own benefit, or publish it online as RSS feeds. They also agree that we get a photocopy of their passport and of their credit card + PIN. And repeat the process for all of their loved ones, who they presumably also believe have nothing to hide.

Carefully consider whether there's anything in the following list that may at some point in the future, perhaps taken out of context or linked to other data, be used to unjustly disadvantage, discriminate against, stigmatize, harass, intimidate, blackmail, extort or silence a person --- perhaps you, or someone you know (or can imagine) who is in a different, perhaps less fortunate position. Here is the list, which is a quote from Facebook's lawyers [0]:

"People use Facebook to share information about themselves, much of it personal.
This information often includes:

* The person's age, religion, location, city of birth, educational affiliations, employment, family members, children, grandchildren, partner, friends, places visited, favorite music, favorite movies, favorite television shows, favorite books, favorite quotes, things 'Liked,' events to attend, affiliated Groups, fitness, sexual orientation, relationship status, political views;
* The person's thoughts about: religion, sexual orientation, relationship status, political views, future aspirations, values, ethics, ideology, current events, fashion, friends, public figures, celebrity, lifestyle, celebrations, grief, frustrations, infidelity, social interactions, or intimate behavior;
* The person's photographs and videos of: him- or herself, children/family, friends, third parties, ultrasounds, medical experiences, food, lifestyle, pets/animals, travel/vacations, celebrations, music, art, humor, entertainment;
* The person's private hardships meant to be shared only with friends; and
* The person's intimate diary entries, including reflections, criticisms, and stories about daily life."

In various contexts, bad behavior can emerge from a government, a big data company, the workplace, insurance companies, banks, but also among social relations. Not all of us will end up in such a context. Not all who do will be confronted with bad behavior. And not all feel that the risk needs mitigation, perhaps because of failing to grasp risk that is abstract: if we haven't seen it ourselves, and don't know stories of anyone who has, then we perceive a low probability and an uncertain impact. In addition, we may estimate that contexts in which the risk could manifest will take place at some undetermined point in the future, if at all. Yet for many real persons, the risk *has* already manifested and/or *will* manifest in the future.

[0] https://www.eff.org/files/2014/06/26/fbopening_brief_in_re_381_search_warrants.pdf