Crowdsourcing Biological Specimen Identification


    Joseph R. Carvalko

    Cara Morris

     

     

    Abstract

Existing technology may offer a solution to the intractable problem of unmet demand in rural areas for the identification of biological specimens. Images of biological specimens, revealed through low-cost microscopy, would be transmitted to a server and there viewed and analyzed by autonomous members of a crowdsource community; subject to a system of quality control, populations that lack access to medical diagnostic facilities might thereby gain the full advantages of medical analysis. One such system is suggested in which a health care worker acquires an image of a biological specimen through a low-cost microscope interfaced to a smartphone and transmits the image to a cloud server, where it is analyzed by crowdsource volunteers of disparate expertise; the system collects the opinions of the volunteers, weights them according to qualifications (judged by the proximity of each identification to the majority peer identification), and forms a consensus biological classification or diagnosis that is transmitted back to the health care worker.

     

Keywords- healthcare innovations, global healthcare, diagnosis, therapeutics, emergency care, interventional protocols, point-of-care (POC) technologies, indigenous health, consensus diagnosis, wireless communications, cloud computing, crowdsourced medicine, critical care

     

    Introduction

     

The analysis of biological specimens for the diagnosis of disease in impoverished rural areas has proven intractable, hampered mainly by the cost of instruments and qualified staffing. However, an ultra-inexpensive microscope used with a smartphone has given rise to a proposed system in which a health care worker transmits the image of a specimen to a cloud server for examination by crowdsource volunteers, who, through consensus, form a diagnosis or specimen identification that is communicated to the originator.[1]

     

Technology that has linked people around the world for security, social, and economic reasons has yet to reach its potential for providing medical diagnosis to thousands of communities that lack basic healthcare. Over 1.5 billion people have access to some rudimentary health care, yet over 4 billion others have none.[2] With the advent of smartphones and crowdsourcing, it may be possible to diagnose and then treat a range of critical illnesses in places currently lacking adequate medical facilities.

     

According to UNICEF, malaria affects 350-500 million people each year, killing upwards of 700,000 children in Africa. Many die because routine identification of the disease proves too costly for small rural villages. Medical diagnosis accounts for about 10 percent of all medical costs, or approximately $250 billion per year in the United States alone.[3] Beyond economic costs, delays and often inaccurate diagnoses contribute to preventable human suffering and death.

     

Diseases observable through a simple microscope include foodborne diseases (e.g., worms, fungi (molds)), parasitic diseases (including helminth eggs and larvae), waterborne diseases (e.g., Schistosoma mansoni), blood-borne diseases (HIV; Plasmodium falciparum), as well as emerging diseases (e.g., Methicillin-resistant Staphylococcus aureus (MRSA)). The detection of cytological dysfunction through microscopy, as, for example, in red or white blood cells, often serves as a vital step in the diagnosis of diseases such as sickle cell anemia and leukemia, or in the identification of microbial invasion.

     

Medical laboratories, even those with rudimentary equipment such as a microscope, prove costly, especially when factoring in capital, training, and staffing. Reduced-cost alternatives exist for the detection of specific diseases. A case in point: a malaria test kit offers a less costly option than microscopy. But many rural communities cannot afford either malaria test kits or microscopes. Overall, microscopes, although too expensive for poor communities, are more versatile than disease-specific kits, and remain the “gold standard test for the diagnosis of malaria.”[4]

     

A recently developed paper-based, origami-like microscope, the size of a bookmark and virtually indestructible, will soon be widely available at a production cost well under one dollar. One print-and-fold microscope, named Foldscope, can be assembled from a flat sheet of paper.[5] Reportedly it provides over 2,000X magnification with sub-micron resolution (800 nm), weighs less than 8.8 g, and measures 70 × 20 × 2 mm, fitting in a pants pocket. A design scalable to application-specific projects, rather than a general-purpose instrument, makes it suitable for field-based citizen science.

     

Microscopes alone do not solve the problem of diagnosing illness in communities too poor to employ doctors or clinicians with pathology backgrounds. A link through which others might assist in diagnosis would help, provided those resources were available in a timely manner.

     

Smartphones communicate image and text information, both essential for a remote medical diagnostic system, i.e., one requiring an image of the specimen embodying the disease together with patient information. A special lens can turn a smartphone into a portable handheld microscope (the current minimum camera requirement is 5 megapixels): for example, a Foldscope-type microscope, or alternatively, as one device maker claims, a soft lens that sticks directly onto the camera lens on the back of the phone and allows a user to zoom to 15X magnification (a 150X lens will shortly become available).[6] Although still in the research stage, fluorescent microscopes that use a physical attachment to an ordinary cell phone will shortly become available for identifying and tracking diseases such as tuberculosis and malaria.[7] Following the upload of an image of a microscopic specimen, there remains the task of diagnosis, which, as suggested below, may be assisted through crowdsourcing.
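
To make the transmission step concrete, the following is a minimal sketch, in Python, of the smartphone-side upload. The endpoint URL, field names, and response schema are invented here for illustration; the proposal does not specify a transport protocol.

```python
# A minimal sketch of the smartphone-side upload, assuming a hypothetical
# HTTPS endpoint; the URL, field names, and response schema are not part
# of the proposal and are illustrative assumptions.
import requests

def upload_specimen(image_path: str, worker_id: str, notes: str) -> str:
    """Post a microscope image plus case notes; return the server's case ID."""
    with open(image_path, "rb") as f:
        response = requests.post(
            "https://example.org/api/specimens",  # hypothetical cloud endpoint
            files={"image": f},                   # photo taken through the clip-on lens
            data={"worker_id": worker_id, "notes": notes},
            timeout=30,
        )
    response.raise_for_status()
    return response.json()["case_id"]             # assumed response field
```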

     

    The Dawn of Crowdsourced Diagnosis

     

Wired magazine coined the term “crowdsourcing” in 2006 to describe the process of seeking a solution to a problem from a large community. One of the first medical crowdsourcing companies to offer medical diagnosis services is CrowdMed, a San Francisco based healthcare startup launched during the TEDMED 2013 conference held in Washington DC. It claims to diagnose medical cases more quickly than one’s physician, and to offer individuals and insurance providers reduced healthcare costs.[8] Its website reports that it has registered over 200 active medical “detectives” who work in, or study, medicine. The detectives suggest diagnoses and “collectively vote” on the most likely ones, using a patented prediction technology that aggregates the information and assigns a consensus-based probability to reach a determination.[9]

     

Crowdsourcing might be thought of as a form of social computing, a term that refers to supporting “computations” carried out by groups, where the group has the potential to exhibit judgment exceeding that of any single member.[10] Surowiecki postulates that four criteria must be satisfied: (1) each person must have private information about the facts at issue; (2) a crowd member’s opinion must not be determined by the opinions of others in the crowd; (3) a crowd member must be permitted to specialize and draw on local knowledge; and (4) a mechanism must exist for turning private judgments into collective judgments.[11]

     

In an ideal ensemble of medically related decision makers, we might admit only experts into the activity. But a crowdsource limited to trained pathologists is not feasible, especially where financial remuneration does not exist. In crowdsourcing, volunteers self-select, or at best provide largely unverified qualifications for their inclusion in the group. Optimistically, a mission might draw upon reasonably qualified volunteers from disparate biomedical occupations, perhaps including trained pathologists. What the Surowiecki criteria do not include is the ability to assess a member’s lack of knowledge, or, more candidly, ineptitude in forming an opinion; essentially, accounting for what a member may not know, which may bear on the integrity of the member’s judgment.

     

Crowdsourcing a biological specimen constrains volunteers to decide a classification not in some absolute, reductionist fashion, but on the basis of probability, often subjective: that, after considering all the evidence, the artifact under observation likely belongs in one class rather than another. And as indicated, errors in classification may originate in a lack of full appreciation of the significance of the evidence.

     

In a 1975 experiment in subjective probability, E.C. Capen polled over 1,200 people who had been invited to estimate dates, values, and quantities with which they had some passing acquaintance but of which they were not certain.[12] Each subject was asked a series of 10 questions, ranging from how many cars were registered in California in 1972 to the driving distance between Los Angeles and New Orleans. Individuals were asked to give ranges of estimates that supposedly included the true value; for example, to put a 90% confidence range on the year St. Augustine was settled by the Spaniards. A response might set the range as 1500 to 1550, implying only a 10% chance the city was settled before or after those dates.

     

Capen found: (1) over 350 participants had no idea how to describe uncertainty; (2) subjects who were uncertain about answers were unaware of the degree of their uncertainty; (3) most could not distinguish between a 30% and a 98% confidence interval; (4) the more knowledge a subject had about a topic, the larger the chosen confidence interval; and (5) a universal tendency existed to understate the interval (i.e., to overestimate the precision of one’s knowledge). Any medical diagnosing system that relies on crowdsourcing needs to consider Capen’s findings to minimize levels of false positives and false negatives.
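
To illustrate how a system might monitor Capen-style overconfidence, the sketch below measures the empirical coverage of volunteers’ stated 90% confidence intervals. The interval data are invented for illustration, not drawn from Capen’s study; the true settlement year of St. Augustine, 1565, is used as the ground truth.

```python
# A minimal sketch of a Capen-style calibration check: given each stated
# 90% confidence interval and the corresponding true value, measure how
# often the intervals actually contain the truth. Well-calibrated
# responders should cover roughly 90%; Capen's subjects covered far less.
def coverage(intervals, truths):
    """Fraction of (low, high) intervals that contain the true value."""
    hits = sum(low <= t <= high for (low, high), t in zip(intervals, truths))
    return hits / len(truths)

stated_90 = [(1500, 1550), (1520, 1560), (1400, 1700)]  # three volunteers' 90% ranges
truths = [1565, 1565, 1565]                             # St. Augustine was settled in 1565
print(f"empirical coverage: {coverage(stated_90, truths):.0%}")  # 33%, far below the stated 90%
```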

     

Although no systems currently utilize crowdsourcing methods for the identification of biological specimens, analogous solutions for improving the effectiveness of crowd-based diagnosis of illness fall into three categories: (a) testing and qualifying volunteers before accepting them into the crowdsource community, (b) creating a database against which to compare patient symptoms, and (c) establishing rules to determine the quality of an identification of a biological specimen.

     

As to solution (a), one may ask whether testing might discourage participation. This is an open question, and one that should not be dismissed out of hand. However, the integrity of decisions by members of a crowdsourced community might be quality-controlled in real time, as suggested by solutions (b) and (c), provided a system had measurement criteria that reflected an acceptable performance level.

     

A search of the U.S. Patent and Trademark Office site turned up three patent applications dealing with crowdsourced decisions that fall into solution category (b). Each takes a different approach, but collectively they reveal a practical shortcoming in qualifying a decision through rule setting.

     

Zziwa, U.S. Pat. Application 20130253940, discloses collecting and storing electronic data for a user seeking to obtain a medical diagnosis by applying stored rules.[13] Experts create the rules, which are assigned a trust factor dependent on the opinions of the users (presumably the individual or health care worker searching for a cure). Explicit rules may not lead to the effective identification of biological specimens, in part because classification depends on a complicated mix of morphology, color, texture, and other cytological features, making categorical rule sets impracticable in what remains a largely heuristic practice, learned through education, training, and experience. In fact, Zziwa-type solutions appear to run counter to Surowiecki’s advice that a crowd member’s opinion should be independent of others’, which would logically extend to users’ opinions.

     

At least one patent disclosure assigns different weights to responses received from peers, as opposed to a trust factor tied to a diagnosis, while another assigns a trust score to each member of the participating group, the score based on the completion of crowdsourcing activities that establish a level of earned trust. Neither application bases a volunteer’s performance on conformity to a peer consensus.[14]

     

    A Proposal for Improved Credentialing

     

We propose crowdsourcing medical biological identification by comparing each identification to a classification norm established by others in the peer group. In this method, a processor assigns a weight to the identification based on the prior observational accuracy of the volunteer.[15] The system determines the weight as a function of the volunteer’s qualifications, which include two components: (1) education, training, experience, and years in the biological or medical arts, and (2) the number of times the individual, on prior occasions, selected an identification that agreed with the majority.
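
A minimal sketch of the proposed weighting follows, assuming a simple additive combination of the two components. The proposal states only that the weight is a function of these components, so the form of the function and the coefficients below are illustrative assumptions.

```python
# Illustrative weighting of a volunteer's next identification, combining
# (1) a qualification score (education, training, experience, years in
# the biological or medical arts) and (2) the count of prior
# identifications that agreed with the majority. The additive form and
# coefficients are assumptions, not values from the proposal.
from dataclasses import dataclass

@dataclass
class Volunteer:
    name: str
    qualification: float  # component (1): education/training/experience score
    agreements: int       # component (2): prior majority-agreeing identifications

def weight(v: Volunteer, q_coeff: float = 1.0, a_coeff: float = 1.0) -> float:
    """Weight applied to this volunteer's next identification."""
    return q_coeff * v.qualification + a_coeff * v.agreements
```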

     

The system rank-orders biological classifications by frequency of occurrence. It then compares each crowdsource volunteer’s choice to the majority choice. If the volunteer aligns with the majority, the system increments the volunteer’s credential, applying the increase to a subsequent biological classification or diagnosis. If a volunteer were previously credited a weight of 10, the weight might be stepped up to 11, applied the next time the volunteer engages in an analysis. Likewise, if the volunteer does not align with the most frequent diagnosis, the system decreases the volunteer’s credit, as sketched below.
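
The following sketch works through one consensus round under this scheme, using the weighted frequency of a classification (as described in the summary below) to rank candidates: the top-ranked choice is taken as the consensus, and each volunteer’s credit is stepped up or down by one, mirroring the 10-to-11 example above. The names and data structures are illustrative assumptions.

```python
# One consensus round: tally classifications by credit-weighted frequency,
# take the top-ranked label as the consensus, then step each volunteer's
# credit up or down by one depending on agreement with the consensus.
from collections import Counter

def consensus_round(votes: dict[str, str], credits: dict[str, float]) -> str:
    """votes maps volunteer -> chosen classification; credits maps volunteer -> weight."""
    tally = Counter()
    for volunteer, label in votes.items():
        tally[label] += credits.get(volunteer, 1.0)  # weighted frequency of each label
    consensus, _ = tally.most_common(1)[0]           # majority (highest-weight) choice
    for volunteer, label in votes.items():           # adjust credentials for next round
        credits[volunteer] += 1 if label == consensus else -1
    return consensus

credits = {"ana": 10.0, "ben": 7.0, "eva": 4.0}
print(consensus_round({"ana": "P. falciparum", "ben": "P. falciparum", "eva": "artifact"}, credits))
# -> "P. falciparum"; ana's credit steps from 10 to 11, eva's drops from 4 to 3
```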

     

In summary, crowdsourcing judgment under the proposed method would draw upon a population of reasonably qualified volunteers from related fields (from highly skilled pathologists, to college biology students, to nurses and retired medical service personnel); collect the opinions of the crowdsource volunteers and weight them based on qualifications and on the proximity of their biological classification or diagnosis to the mean peer assessment; form a biological classification or diagnosis based on a statistical parameter, such as the weighted frequency of a diagnosis (the mean peer diagnosis) among the crowdsource volunteers; and then transmit the identification or diagnosis of the specimen to the caregiver, who may prescribe a drug, therapy, or medical test.

     

    Conclusion

     

As we apply technology to improve health and extend lifespans, disparities in delivery persist, splitting the world into “haves” and “have-nots.” Undoubtedly, large populations without access to even the most rudimentary laboratory technology will exist for the foreseeable future; nevertheless, technologically sophisticated societies have a responsibility to deploy tools that might offer a quantum improvement in the well-being of underserved communities. Low-cost microscopy in combination with crowdsourced identification of biological specimens may prove a step in this direction.

    REFERENCES

    [1] “Smartphones have considerable computing power to perform image analysis, and built-in cell phone cameras can be adapted to function as microscope objectives. Publications . . . show the potential of cell phones to become efficient front-ends for driving sophisticated diagnostics into the rural medical system.” Micro diagnostic technologies set to make a macro impact on African health, Dec. 19, 2011; http://blogs.rsc.org/lc/2011/12/19/micro-diagnostic-technologies-set-to-make-a-macro-impact-on-african-health/.

     

[2] Stephen Oesterle, Medtronic’s VP of medicine and technology, remarks at the 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Aug. 26-30, 2014.

[3] Lynn Feldman, Managed Care, http://www.managedcaremag.com/archives/0905/0905.diagnosis.html (Last visited 06/20/2014).

[4] Rosenthal, Philip J., How Do We Best Diagnose Malaria in Africa? Am J Trop Med Hyg 2012, vol. 86, no. 2, 192-193; Wongsrichanalai C, Barcus MJ, Muth S, Sutamihardja A, Wernsdorfer WH, 2007. A review of malaria diagnostic tools: microscopy and rapid diagnostic test (RDT). Am J Trop Med Hyg 77: 119-127.

[5] http://www.foldscope.com/; Foldscope is a trademark of The Board of Trustees of the Leland Stanford Junior University (AKA Stanford University).

[6] Turn Your Smartphone Into a Microscope With Single Lens; http://mashable.com/2013/09/13/microscope-phone-lens/ (Last visited 9/8/2014).

[7] A Cell-Phone Microscope for Disease Detection: a cheap smart-phone microscope could bring fluorescent medical imaging to areas with limited access to health care; http://www.technologyreview.com/news/414460/a-cell-phone-microscope-for-disease-detection/ (Last visited 06/23/2014).

[8] The company received over $2.4 million from some of Silicon Valley’s top venture capital firms, such as NEA, Andreessen Horowitz, Greylock Partners, SV Angel, Khosla Ventures, and Y Combinator. https://www.crowdmed.com/ (Last visited 06/23/2014).

    [9] U.S. Pat. 8,285,632. The theory rests on the assumption that when a sufficiently large population bets real or fake money on predictions their bets will reflect their confidence and, in aggregate, create a robust predictive scheme.

[10] In mid-2011, a U.S.-government-sponsored forecasting effort for improving collective intelligence was launched. Under the auspices of this initiative, The Good Judgment Team, based at the University of Pennsylvania and the University of California, Berkeley, recruits individuals from a broad set of backgrounds to reach its goal of accurately predicting trends and events throughout the world. See https://www.goodjudgmentproject.com/ (Last visited 9/8/2014).

    [11] Surowiecki, James. The Wisdom of Crowds. Anchor Books (2005).

[12] E.C. Capen, “The Difficulty of Assessing Uncertainty,” Journal of Petroleum Technology, 28(8): 843-850, http://www.spe.org/ejournals/spe/EEM/2010/04/SPEEM_Apr10_SecondLook.pdf (Last visited 01/02/2014).

[13] Zziwa, U.S. Pat. Application 20130253940, discloses collecting and storing electronic data from a search user seeking to obtain a diagnosis, and applying stored diagnosis rules to the electronic data to identify possible diagnoses.

[14] Halterman, U.S. Pat. Application 20120245952, describes a doctor using a smartphone to contact other doctors (e.g., the “crowd”) on smartphones, with information on the patient, to obtain feedback on how to diagnose the patient, whereby future patient cases can leverage the feedback to present a doctor with information based on that feedback; Marins et al., U.S. Pat. Application 20120284090, discloses a system and method for verifying that individual participants complete one or more crowdsourcing activities in a manner that is accurate and/or correct, and granting incentives to participating users based on accurately and/or correctly completed activities.

[15] Carvalko et al., U.S. Pat. Application No. 14322006, System and Method for Crowdsourcing Biological Specimen Identification.
