Saturday, November 4, 2017

Mesic habitat: Stanford - "Surfacing Human Signals: The Convergence of Engineering and Health" in a Google Streetview / Maps / Earth with TIME SLIDER married conceptually with OpenSim/Second Life for group build-ability, but realistic, with species' AVATAR BOTS - e.g. for genomic engineering and robotics, etc. - both realistic and fantastic/imaginary/creative (and at the cell, atom and subatomic levels too) - and in all 7,099 living languages for STEM research, and as a classroom, as well as online teaching hospitals for World University and School's CC MIT OCW-centric online Medical Schools (planned in ~200 countries' official languages) - and hopefully with Stanford Medicine OpenCourseWare in all these languages for online MD medical degrees, Clinical Trials online in 7,099 living languages, Modeling brains, Actual/Virtual https://twitter.com/HarbinBook ethnographic project, Visit the Harbin Hot Springs' gatehouse in Streetview here - https://twitter.com/HarbinBook and http://bit.ly/HarbinBook - and "walk" down the road to Middletown CA ~4 miles away, Developing a REALISTIC VIRTUAL HARBIN will be great FOR further SURFACING HUMAN SIGNALS (re Naked Harbin Ethnography: Hippies, Warm Pools, Counterculture, Clothing-Optionality & Virtual Harbin)


Dear Jessica (and Bob), 

Thanks for your fascinating Project Baseline talk yesterday at Stanford Medicine - http://events.stanford.edu/events/719/71905/ ...

Stanford's 19th Annual Fogarty Lecture: Jessica Mega, Verily Life Sciences 11.3.17

https://youtu.be/oSxawCSpqKI

The Stanford Medicine - 19th Annual Thomas J. Fogarty MD Lecture: Focus on Innovation featuring Jessica L. Mega, MD, MPH, Verily Life Sciences (Google) - "Surfacing Human Signals: The Convergence of Engineering and Health."

Very nice to meet and talk with you afterward, as well, Jessica. 

It was great to hear about your ginormous Verily Google-centric project - "Surfacing Human Signals: The Convergence of Engineering and Health" - http://events.stanford.edu/events/719/71905/ - and that you know Bob Harrington, MD, cardiologist and chair of Stanford Medicine (whom I'm including in this email) - see, e.g., https://www.forbes.com/sites/larryhusten/2015/05/11/prominent-harvard-cardiologist-moves-to-google-x-to-head-large-study/#b60bb9c2c660 - and, furthermore, that there's a Duke University role in the project you're laying out.

In asking you afterward about digitally organizing your work on the convergence of engineering and health with a realistic virtual earth, I wonder about the role of Google Streetview / Maps / Earth with a TIME SLIDER, married conceptually with OpenSim/Second Life for group build-ability, but realistic, with species' AVATAR BOTS - e.g. for genomic engineering and robotics, etc. - both realistic and fantastic/imaginary/creative (and at the cell, atom and subatomic levels too) - and in all 7,099 living languages for STEM research and as a classroom, as well as for online teaching hospitals for World University and School's CC MIT OCW-centric online Medical Schools (planned in ~200 countries' official languages) - and hopefully with Stanford Medicine OpenCourseWare in all these languages for online MD medical degrees. (A small sketch of the time-slider idea follows just below.)
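
To make the time-slider idea a bit more concrete, here is a minimal sketch, in Python, of one way snapshots of a place could be indexed by capture time so an avatar bot can be dropped into the scene nearest a requested slider position. All of the names here (SceneSnapshot, GeoCell, the mesh URIs) and the sample coordinates are illustrative assumptions, not an existing Google Streetview/Earth or OpenSim API.

```python
# Hypothetical sketch (not an existing Streetview/Earth/OpenSim API): index
# scene snapshots of one geographic cell by capture time, and look up the
# snapshot nearest a "time slider" position so avatar bots can be placed in it.
from bisect import bisect_right
from dataclasses import dataclass, field
from typing import List

@dataclass
class SceneSnapshot:
    timestamp: float                                       # capture time, seconds since epoch
    mesh_uri: str                                          # pointer to the 3D reconstruction
    avatar_bots: List[str] = field(default_factory=list)   # bot IDs present in this scene

@dataclass
class GeoCell:
    lat: float
    lon: float
    snapshots: List[SceneSnapshot] = field(default_factory=list)  # kept sorted by time

    def add(self, snap: SceneSnapshot) -> None:
        self.snapshots.append(snap)
        self.snapshots.sort(key=lambda s: s.timestamp)

    def at_time(self, slider_time: float) -> SceneSnapshot:
        """Return the latest snapshot captured at or before the slider position
        (clamped to the earliest capture if the slider is set before it)."""
        times = [s.timestamp for s in self.snapshots]
        i = bisect_right(times, slider_time)
        return self.snapshots[max(0, i - 1)]

# Illustrative example: a cell near the Harbin gatehouse with two capture dates.
gatehouse = GeoCell(lat=38.79, lon=-122.66)
gatehouse.add(SceneSnapshot(timestamp=1.0e9, mesh_uri="meshes/gatehouse_2001.glb",
                            avatar_bots=["guide_bot"]))
gatehouse.add(SceneSnapshot(timestamp=1.5e9, mesh_uri="meshes/gatehouse_2017.glb"))
print(gatehouse.at_time(1.2e9).mesh_uri)  # -> meshes/gatehouse_2001.glb
```

The same kind of time-keyed index could, in principle, be nested at the cellular, atomic and subatomic scales imagined above.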

Here's a recent Tweet with Bob Harrington about WUaS's plans to facilitate online clinical trials with people in all 7,099 living languages - https://twitter.com/WorldUnivAndSch/status/926635543923798016 - and the beginning "Clinical_Trials_at_WUaS_(for_all_languages)" wiki subject at World University and School - http://worlduniversity.wikia.com/wiki/Clinical_Trials_at_WUaS_(for_all_languages) - before WUaS moves into our new wiki - https://wiki.worlduniversityandschool.org/wiki/Nation_States - where each country will become a major online wiki CC MIT OCW-centric university in that country's main official languages. 

With regards to your talk's diabetic retinopathy and Project Baseline examples, a realistic virtual earth, and machine learning, I'd like to hypothesize that we could model the eyeballs of every human who came into the physical clinic you mentioned (and later do this from home via people's smartphones?) as avatar bot bodies/eyes in a realistic virtual earth with time slider, conceptually and virtually, and then apply machine learning to these 3D images, to recognize and learn from them for subsequent research, diagnosis and clinical therapeutic interventions (e.g. for virtual glucose monitors - https://www.medscape.com/viewarticle/887737) - and even eventually engineer individuals' eyes virtually, for developing therapies from what the machine learning and MDs learn. 
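
As one very small illustration of the machine-learning step in this hypothesis, here is a minimal sketch using TensorFlow/Keras (TensorFlow comes up again later in this post): a toy convolutional classifier over retinal fundus photographs, grading diabetic retinopathy severity. The directory layout, image size and 5-grade labeling scheme are assumptions for illustration only - not Verily's or Stanford's actual pipeline - and a real system would of course need far more data, validation and clinical oversight.

```python
# Toy sketch: grade retinal fundus photographs for diabetic retinopathy severity.
# Assumes a hypothetical folder "fundus_images/train" with one subfolder per grade.
import tensorflow as tf

NUM_GRADES = 5  # assumed grades: none, mild, moderate, severe, proliferative

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(224, 224, 3)),  # RGB fundus photo
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_GRADES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Integer labels are inferred from the subfolder names of the (hypothetical) dataset.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "fundus_images/train", image_size=(224, 224), batch_size=32)
model.fit(train_ds, epochs=5)
```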

With regards to the Project Baseline data you're collecting, database-wise, Jessica, all of this could correspond to virtual avatar bot bodyminds in this realistic virtual earth with time slider - as a kind of patient health charting / data, even. And from these healthy baselines, further clinical protocols for wellness (e.g. improving diabetic retinopathy) could emerge. These could also connect to a blockchain ledger (and even to a cryptocurrency - all Google-Stanford-Harvard-centric). Here's more about how the blockchain ledger could work for health care - https://scott-macleod.blogspot.com/2017/10/gloriosa-genus-stanford-medicine-grand.html - as well as my email to Bob H after our meeting in September at Stanford Medicine; a toy sketch of the ledger idea follows below. (I'm also including Julian Dumitrascu, our new emerging startup WUaS Corporation CEO, in Romania, in this email; here are WUaS's 14 planned revenue streams for both A) our World University and School 501(c)(3) non-profit wing and B) our new for-profit general stock WUaS Corporation wing - https://worlduniversityandschool.blogspot.com/2016/01/14-planned-wuas-revenue-streams.html?view=classic - both planned in all ~200 countries' official languages and in all 7,099 living languages as academic markets.)
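
Here is that toy, hash-chained ledger sketch in Python - a simplified illustration of the general blockchain idea, not any production health-care system. Only a digest of each (made-up) baseline record goes on the chain, so the raw health data stays off-ledger, and tampering with an earlier entry breaks every later link.

```python
# Toy hash-chained ledger: each entry commits to a record digest plus the
# previous entry's hash, making earlier entries tamper-evident.
import hashlib
import json
import time

def _digest(obj: dict) -> str:
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

def append_record(chain: list, record: dict) -> None:
    entry = {
        "timestamp": time.time(),
        "record_digest": _digest(record),                   # raw health data stays off-chain
        "prev_hash": chain[-1]["hash"] if chain else "0" * 64,
    }
    entry["hash"] = _digest(entry)
    chain.append(entry)

def verify(chain: list) -> bool:
    for i, entry in enumerate(chain):
        body = {k: v for k, v in entry.items() if k != "hash"}
        if entry["hash"] != _digest(body):
            return False
        if i > 0 and entry["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

# Illustrative, made-up baseline entries.
ledger = []
append_record(ledger, {"participant": "baseline-0001", "hba1c": 5.4, "visit": "2017-11-03"})
append_record(ledger, {"participant": "baseline-0001", "hba1c": 5.6, "visit": "2018-05-03"})
print(verify(ledger))  # True; altering an earlier record would break verification
```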

In terms of actually modeling brains - by way of comparison with modeling eyeballs, and in a Google-centric platform - here's Google's/Stanford's Thomas Dean's talk "Automatically Inferring Meso-scale Models of Neural Computation" - https://www.youtube.com/watch?v=HazJ7LHihG8 (Stanford Neuroscience conference, Oct 2016) - accessible here too, with related further ideas about how the above realistic virtual earth for STEM research could work - https://scott-macleod.blogspot.com/2017/04/red-rainbowfish-mit-bachelors-degree-in.html and https://scott-macleod.blogspot.com/2017/03/molecule-stanford-googles-tom-dean.html .

WUaS's plans to create a realistic virtual earth with TIME SLIDER at the street view / cellular / atomic (and subatomic) levels emerge out of my actual / virtual Harbin ethnographic book project. To see the beginnings, conceptually, of this Streetview-centric virtual earth for STEM research, visit the Harbin Hot Springs' gatehouse in Streetview here - https://twitter.com/HarbinBook and http://bit.ly/HarbinBook - and "walk" down the road to Middletown CA, ~4 miles away. That we can all add photos (I added the photo of the gatehouse on the left here, from 2001) and videos, for example, will allow researchers to further develop this realistic virtual earth together. Google's TensorFlow AI / machine learning software and Tom Dean's modeling of the fly brain will all further come together in this realistic virtual earth/universe/cosmos conceiving, I think. I think, too, that film and video at especially the atomic (e.g. electron microscopy) and cellular as well as street view levels will become convertible into a 3D interactive virtual world for research (and into this Streetview-centric, conceptually, realistic virtual earth with avatar bots). 

Since you went to Stanford as an undergraduate in the 1990s, Jessica, did you ever happen to visit Harbin? 

Developing a REALISTIC VIRTUAL HARBIN will be great FOR further SURFACING HUMAN SIGNALS (re Naked Harbin Ethnography: Hippies, Warm Pools, Counterculture, Clothing-Optionality & Virtual Harbin) 

I just found this Nov 2, 2017 video online with you and Medscape's Eric Topol - https://www.medscape.com/viewarticle/887737 - and think all of the above could be the infrastructure you might be interested in (at about the 9-minute mark). Even tele-surgery could emerge out of this realistic virtual earth - say, from Stanford to a ship in the Pacific Ocean. (See, for example, the da Vinci surgical robot here - https://scott-macleod.blogspot.com/2017/05/frigatebird-saw-my-first-robotic.html and https://scott-macleod.blogspot.com/2017/10/whirlpool-galaxy-we-almost-gave-up-on.html .) 

Jessica and Bob, might we potentially meet or conference call about developing online Medical Schools (perhaps with Project ECHO - https://scott-macleod.blogspot.com/2017/06/bezoar-ibex-capra-aegagrus-stanford.html - re Stanford Grand Rounds earlier this year) with online Teaching Hospitals with robotic surgery in a realistic virtual earth at some point in the near future? 

Great connecting in person with you, Jessica. I'm looking forward to communicating further with you about this. 

Thank you. 

Best regards,
Scott



-- 
- Scott MacLeod - Founder & President  

- World University and School


- CC World University and School - like CC Wikipedia with best STEM-centric CC OpenCourseWare - is incorporated as a nonprofit university and school in California, and is a U.S. 501(c)(3) tax-exempt educational organization. 


IMPORTANT NOTICE: This transmission and any attachments are intended only for the use of the individual or entity to which they are addressed and may contain information that is privileged, confidential, or exempt from disclosure under applicable federal or state laws.  If the reader of this transmission is not the intended recipient, you are hereby notified that any use, dissemination, distribution, or copying of this communication is strictly prohibited.  If you have received this transmission in error, please notify me immediately by email or telephone.







*


*


*

...


