<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Epistemology, Cognition, and the Future of Technology]]></title><description><![CDATA[the future of technology based on the principles of cognition. ]]></description><link>https://blog.davidbramsay.com/</link><image><url>https://blog.davidbramsay.com/favicon.png</url><title>Epistemology, Cognition, and the Future of Technology</title><link>https://blog.davidbramsay.com/</link></image><generator>Ghost 3.20</generator><lastBuildDate>Tue, 03 Mar 2026 18:53:43 GMT</lastBuildDate><atom:link href="https://blog.davidbramsay.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Setting Network Priority Order in Sonoma]]></title><description><![CDATA[<p>I have a local network set up at my university in the makerspace (called <code>IoT_IRL</code>), but the rest of the university uses <code>eduroam</code>.  I want to auto-connect to both, but <code>IoT_IRL</code> should have priority when I'm in the makerspace, and Apple has made it such that priority is</p>]]></description><link>https://blog.davidbramsay.com/setting-network-priority-order-in-sonoma/</link><guid isPermaLink="false">66f1823a66088ca5264def7c</guid><dc:creator><![CDATA[David Ramsay]]></dc:creator><pubDate>Mon, 23 Sep 2024 15:10:23 GMT</pubDate><content:encoded><![CDATA[<p>I have a local network set up at my university in the makerspace (called <code>IoT_IRL</code>), but the rest of the university uses <code>eduroam</code>.  
I want to auto-connect to both, but <code>IoT_IRL</code> should have priority when I'm in the makerspace, and Apple has made it such that priority is no longer something you can set in their GUI (it will just connect to any 'preferred network' with the strongest signal, which is never the one I want).</p><p>To fix this, I had to:</p><p><strong>1.  Figure out the security type of the two networks.</strong></p><p>Connect to each network, then use the <code>airport</code> utility to get the security type, and note it down:</p><pre><code>/System/Library/PrivateFrameworks/Apple80211.framework/Versions/Current/Resources/airport -I
</code></pre><p>For me, <code>eduroam</code> lists <code>ft-wpa2</code> and <code>IoT_IRL</code> lists <code>wpa3-sae</code>.</p><p><strong>2.  Remove and re-add them to en0 at different priority levels.</strong></p><p>Now remove both networks and add them back at different priorities, using index 1 for the higher-priority network and 3 for the lower.  Substitute the security types and network names you noted above:</p><pre><code>networksetup -removepreferredwirelessnetwork en0 eduroam
networksetup -removepreferredwirelessnetwork en0 IoT_IRL
networksetup -addpreferredwirelessnetworkatindex en0 IoT_IRL 1 wpa3-sae
networksetup -addpreferredwirelessnetworkatindex en0 eduroam 3 ft-wpa2

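# optional sanity check: confirm the new order took effect
# (networks are listed highest-priority first)
networksetup -listpreferredwirelessnetworks en0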
</code></pre>]]></content:encoded></item><item><title><![CDATA[Transparency in Videos from After Effects on Mac for Wordpress]]></title><description><![CDATA[<p>I struggled to get transparent videos working on wordpress for both mobile Safari and Chrome.</p><p>I was finally able to get it to work by:</p><p>(1) exporting a full resolution Apple ProRes 4444 file from After Effects.</p><p>(2) Use the Rotato tool to convert it:</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://rotato.app/tools/converter"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Alpha channel tool: Create transparent</div></div></a></figure>]]></description><link>https://blog.davidbramsay.com/transparency-in-videos-on-wordpress/</link><guid isPermaLink="false">660c508b66088ca5264def00</guid><dc:creator><![CDATA[David Ramsay]]></dc:creator><pubDate>Tue, 02 Apr 2024 18:48:32 GMT</pubDate><content:encoded><![CDATA[<p>I struggled to get transparent videos working on wordpress for both mobile Safari and Chrome.</p><p>I was finally able to get it to work by:</p><p>(1) exporting a full resolution Apple ProRes 4444 file from After Effects.</p><p>(2) Use the Rotato tool to convert it:</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://rotato.app/tools/converter"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Alpha channel tool: Create transparent videos for all browsers | Rotato</div><div class="kg-bookmark-description">Transparent videos can be confusing, so we made a 1-click, privacy-first, free tool to create videos with transparency that work in all browsers.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://rotato.app/icon.png?3cb9e83555850b3e"><span class="kg-bookmark-publisher">Rotato</span></div></div><div class="kg-bookmark-thumbnail"><img 
src="https://rotato.app/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fconverter-hero.2d2b747d.png&amp;w=3840&amp;q=75"></div></a></figure><p>(3) Upload both files, add your video tag, and modify it via HTML (there is likely an <code>edit as html</code> button on your block element) so that it looks like the following:</p><pre><code>&lt;figure class="wp-block-video" style="margin-right:0;margin-left:0"&gt;
&lt;video autoplay loop muted playsinline preload="auto" id="customedited"&gt;

&lt;source src="http://principledinterfaces.com/wp-content/uploads/sites/5/2024/04/vid-safari.mp4" type='video/mp4; codecs="hvc1"'&gt;

&lt;source src="http://principledinterfaces.com/wp-content/uploads/sites/5/2024/03/vid-chrome.webm" type='video/webm'&gt;

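&lt;!-- source order matters: the browser plays the first source it supports, so keep the hvc1 mp4 (Safari) above the webm (Chrome) --&gt;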
&lt;/video&gt;&lt;/figure&gt;</code></pre><p>It requires all of the attributes <code>autoplay loop muted playsinline</code> to work.</p><figure class="kg-card kg-image-card"><img src="https://blog.davidbramsay.com/content/images/2024/04/image.png" class="kg-image" alt srcset="https://blog.davidbramsay.com/content/images/size/w600/2024/04/image.png 600w, https://blog.davidbramsay.com/content/images/size/w1000/2024/04/image.png 1000w, https://blog.davidbramsay.com/content/images/size/w1600/2024/04/image.png 1600w, https://blog.davidbramsay.com/content/images/size/w2400/2024/04/image.png 2400w"></figure><p>I believe it's also possible to export <code>webm</code> directly using Adobe Media Encoder.  Alternatively, you can use <code>Shutter Encoder</code> with a transparent file: set it to <code>h.265</code> and <code>mp4</code>, and choose GPU acceleration (which will then allow you to <code>encode Alpha channel</code> in the advanced options).  As long as your tags are right, this should also work if the Rotato tool is no longer available.</p>]]></content:encoded></item><item><title><![CDATA[Empatica E4 Teardown]]></title><description><![CDATA[<p>Before I attempt to open up devices I always like to take a look at some pictures so I know what to expect; for the Empatica E4, there weren't any online anywhere.  I went ahead and took some in case anyone else wants to replace the battery without shipping it</p>]]></description><link>https://blog.davidbramsay.com/e4-teardown/</link><guid isPermaLink="false">64644d374fae010d4f87ebe0</guid><dc:creator><![CDATA[David Ramsay]]></dc:creator><pubDate>Wed, 17 May 2023 05:09:39 GMT</pubDate><content:encoded><![CDATA[<p>Before I attempt to open up devices I always like to take a look at some pictures so I know what to expect; for the Empatica E4, there weren't any online anywhere.  
I went ahead and took some in case anyone else wants to replace the battery without shipping it to Italy.</p><p>The mechanical design is nice: I think the castellated addition for the optics is a great idea, and they have some nice spring-loaded connections to handle the geometry of their sensors.  There are two Microchip PIC24Fs as the main MCUs, and it uses the Silabs BLE112A for comms.  It'd be fun to spend a little more time digging into the design, but I'm strapped for time at the moment; maybe in the near future.</p><p>For me, the goal was to get my failing one back up and running after what appeared to be a simple failed lipo battery.  You can see in the pictures that the battery is bulging; you can also see they use the bare cell without the normal protection circuit that comes with many off-the-shelf lipos.  I ordered a replacement 300mAh battery and soldered on <a href="https://www.amazon.com/dp/B01DUC1I7S?psc=1&amp;ref=ppx_yo2ov_dt_b_product_details">this JST SH 1.0 Connector</a>, and it worked no problem after a little fiddling (gotta get the fit right; they had some uniquely thin 300mAh batteries– and that lack of a protection circuit helps with the size).</p><p>Anyway, here are some pics; sorry they're not top-notch, my microscope is packed up for my upcoming move.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.davidbramsay.com/content/images/2023/05/IMG_9629.JPG" class="kg-image" alt srcset="https://blog.davidbramsay.com/content/images/size/w600/2023/05/IMG_9629.JPG 600w, https://blog.davidbramsay.com/content/images/size/w1000/2023/05/IMG_9629.JPG 1000w, https://blog.davidbramsay.com/content/images/size/w1600/2023/05/IMG_9629.JPG 1600w, https://blog.davidbramsay.com/content/images/size/w2400/2023/05/IMG_9629.JPG 2400w"><figcaption>Three parts: the main enclosure, a sandwich of two boards with a lipo in between, and the outer wall. 
Be careful when unscrewing, the screws have tiny o-rings on them that are pretty important for a tight fit when you're putting it back together.</figcaption></figure><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.davidbramsay.com/content/images/2023/05/IMG_9626.JPG" class="kg-image" alt srcset="https://blog.davidbramsay.com/content/images/size/w600/2023/05/IMG_9626.JPG 600w, https://blog.davidbramsay.com/content/images/size/w1000/2023/05/IMG_9626.JPG 1000w, https://blog.davidbramsay.com/content/images/size/w1600/2023/05/IMG_9626.JPG 1600w, https://blog.davidbramsay.com/content/images/size/w2400/2023/05/IMG_9626.JPG 2400w"><figcaption>I believe these two traces, surrounded by ground, are injection molded into the bracelet, and actually wrap all the way around through the wristband to connect to the snap on electrodes on the underside of the wrist to measure EDA. Pushbutton on the far left of the picture, lightpipe in the middle.</figcaption></figure><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.davidbramsay.com/content/images/2023/05/IMG_9633.JPG" class="kg-image" alt srcset="https://blog.davidbramsay.com/content/images/size/w600/2023/05/IMG_9633.JPG 600w, https://blog.davidbramsay.com/content/images/size/w1000/2023/05/IMG_9633.JPG 1000w, https://blog.davidbramsay.com/content/images/size/w1600/2023/05/IMG_9633.JPG 1600w, https://blog.davidbramsay.com/content/images/size/w2400/2023/05/IMG_9633.JPG 2400w"><figcaption>Skin temperature secondary PCB and injection molded pogo pins that connect to the board just above the lipo.</figcaption></figure><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.davidbramsay.com/content/images/2023/05/IMG_9638.JPG" class="kg-image" alt srcset="https://blog.davidbramsay.com/content/images/size/w600/2023/05/IMG_9638.JPG 600w, https://blog.davidbramsay.com/content/images/size/w1000/2023/05/IMG_9638.JPG 1000w, 
https://blog.davidbramsay.com/content/images/size/w1600/2023/05/IMG_9638.JPG 1600w, https://blog.davidbramsay.com/content/images/size/w2400/2023/05/IMG_9638.JPG 2400w"><figcaption>A nice sandwich design with a ribbon cable connection and easy to swap 300mAh battery.</figcaption></figure><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.davidbramsay.com/content/images/2023/05/IMG_9639.JPG" class="kg-image" alt srcset="https://blog.davidbramsay.com/content/images/size/w600/2023/05/IMG_9639.JPG 600w, https://blog.davidbramsay.com/content/images/size/w1000/2023/05/IMG_9639.JPG 1000w, https://blog.davidbramsay.com/content/images/size/w1600/2023/05/IMG_9639.JPG 1600w, https://blog.davidbramsay.com/content/images/size/w2400/2023/05/IMG_9639.JPG 2400w"><figcaption>Without the battery.</figcaption></figure><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.davidbramsay.com/content/images/2023/05/IMG_9641.JPG" class="kg-image" alt srcset="https://blog.davidbramsay.com/content/images/size/w600/2023/05/IMG_9641.JPG 600w, https://blog.davidbramsay.com/content/images/size/w1000/2023/05/IMG_9641.JPG 1000w, https://blog.davidbramsay.com/content/images/size/w1600/2023/05/IMG_9641.JPG 1600w, https://blog.davidbramsay.com/content/images/size/w2400/2023/05/IMG_9641.JPG 2400w"><figcaption>We can see a PIC24F on the inside face along with the ribbon cable, not a lot else on the inside faces of the two PCBs besides that MCU, the pogo pins, and the battery connector. 
Some easy-to-access test points there too.</figcaption></figure><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.davidbramsay.com/content/images/2023/05/IMG_9650.JPG" class="kg-image" alt srcset="https://blog.davidbramsay.com/content/images/size/w600/2023/05/IMG_9650.JPG 600w, https://blog.davidbramsay.com/content/images/size/w1000/2023/05/IMG_9650.JPG 1000w, https://blog.davidbramsay.com/content/images/size/w1600/2023/05/IMG_9650.JPG 1600w, https://blog.davidbramsay.com/content/images/size/w2400/2023/05/IMG_9650.JPG 2400w"><figcaption>The top PCB, featuring the majority of the circuitry: Silabs BLE112A for comms, another PIC24F, SMD spring-loaded connections out to that EDA ribbon on the far left, button on the right, and assorted other supporting circuitry.</figcaption></figure><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.davidbramsay.com/content/images/2023/05/IMG_9634.JPG" class="kg-image" alt srcset="https://blog.davidbramsay.com/content/images/size/w600/2023/05/IMG_9634.JPG 600w, https://blog.davidbramsay.com/content/images/size/w1000/2023/05/IMG_9634.JPG 1000w, https://blog.davidbramsay.com/content/images/size/w1600/2023/05/IMG_9634.JPG 1600w, https://blog.davidbramsay.com/content/images/size/w2400/2023/05/IMG_9634.JPG 2400w"><figcaption>The bottom board with the nice castellated black sensor board, pogo connections peeking through from the second board below, and three more spring connectors to the tiny temperature-sensing board. On the black section we have the optics for PPG sensing.</figcaption></figure>]]></content:encoded></item><item><title><![CDATA[Handwriting OCR in Python]]></title><description><![CDATA[<p>I'm working on a project that requires handwriting recognition (sending texts by writing them), and I've been exploring off-the-shelf options to recognize my own writing.  
Let's take a look at the contenders:</p><h3 id="easyorc">EasyOCR</h3><p>EasyOCR doesn't seem to be targeted at handwriting, so I wasn't expecting this to do particularly well.</p>]]></description><link>https://blog.davidbramsay.com/handwriting-ocr-in-python/</link><guid isPermaLink="false">6421a4a44fae010d4f87e8e3</guid><dc:creator><![CDATA[David Ramsay]]></dc:creator><pubDate>Mon, 27 Mar 2023 18:43:07 GMT</pubDate><content:encoded><![CDATA[<p>I'm working on a project that requires handwriting recognition (sending texts by writing them), and I've been exploring off-the-shelf options to recognize my own writing.  Let's take a look at the contenders:</p><h3 id="easyorc">EasyOCR</h3><p>EasyOCR doesn't seem to be targeted at handwriting, so I wasn't expecting this to do particularly well.  It does target reading images of text in 80+ languages.</p><p><a href="https://github.com/JaidedAI/EasyOCR">Link to Github Repo.</a></p><h3 id="simplehtr">simpleHTR</h3><p>Harald Scheidl's PhD work, implemented as a handwriting recognition system.  I think the layout of my text might be difficult for simpleHTR to handle; let's give it a shot though!  For this I downloaded the line model (which should handle multiple lines of handwritten text) generated on May 26, 2021.</p><p><a href="https://github.com/githubharald/SimpleHTR">Link to Github Repo.</a></p><h3 id="tesseract">Tesseract</h3><p>An open-source library: the original version was developed at HP in the eighties and nineties and open-sourced in 2005.  Lead development was taken over by Google from then until 2018.  Seems like it could be a great contender!</p><p><a href="https://github.com/tesseract-ocr/tesseract">Link to Github Repo.</a></p><h3 id="google-api">Google API</h3><p>Ship it to Google and pay $1.50/1000 images instead of using something free and open source.  
Of course, no github repo available.</p><h2 id="simple-test-results">Simple Test Results</h2><p>Here's a script I wrote to compare and contrast these different systems:</p><pre><code>from pdf2image import convert_from_path
import cv2
import numpy as np 
import easyocr
import pytesseract
from autocorrect import Speller
from google.cloud import vision

#MUST 'brew install poppler'
#MUST 'brew install tesseract'

#to test simpleHTR, I created a PNG of the text
#and ran 'python main.py --img_file ../handwritingTestImage.png'
#in its src folder after importing the line model

#for google vision API, install google CLI using
#
#"curl https://sdk.cloud.google.com | bash"
#"gcloud init"
#"gcloud projects create dramsayocrtext" #this name must be unique to your project, of all projects ever created in gcloud
#"gcloud auth login"
#"gcloud config set project dramsayocrtext"
#"gcloud auth application-default login"
#"gcloud auth application-default set-quota-project dramsayocrtext"
#
#enable API in google cloud console
#enable billing in google cloud console ($1.50/1000 images)
#
#pip3 install google-cloud-vision

SHOW_IMAGES = False
SHARPEN = False
SPELLCHECK = False
OCR_ENGINE = 'GOOGLE' # 'GOOGLE', 'EASYOCR', 'TESSERACT'

def show(img):
    cv2.imshow("img", img)
    cv2.waitKey(0)
    cv2.destroyAllWindows()



# grab handwriting pdf with convert_from_path function
images = convert_from_path('8234567890.pdf', poppler_path="/usr/local/Cellar/poppler/23.03.0/bin")
img = np.array(images[0])

# Crop to just the part with the handwriting
img = img[2*395:2*1170,0:2*827]

# Greyscale and Sharpen
if SHARPEN:
    print('using greyscale and threshold to sharpen.')
    img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    if SHOW_IMAGES: show(img)

    sharpen_kernel = np.array([[-1,-1,-1], [-1,9,-1], [-1,-1,-1]])
    img = cv2.filter2D(img, -1, sharpen_kernel)
    img = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
    if SHOW_IMAGES: show(img)

# OCR
if OCR_ENGINE=='EASYOCR':
    print('using easyocr.')
    reader = easyocr.Reader(['en'],gpu = False)
    ocr_output = reader.readtext(img,paragraph=True)[0][1]

elif OCR_ENGINE=='TESSERACT':
    print('using tesseract.')
    img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    ocr_output = pytesseract.image_to_string(img_rgb)

elif OCR_ENGINE=='GOOGLE':
    print('using google cloud.')
    success, img_jpg = cv2.imencode('.jpg', img)
    byte_img = img_jpg.tobytes()
    google_img = vision.Image(content=byte_img)

    client = vision.ImageAnnotatorClient()
    resp =  client.text_detection(image=google_img)
    ocr_output = resp.text_annotations[0].description.replace('\n',' ')

else:
    ocr_output = ' ERROR: no ocr tool selected. Choose "GOOGLE","EASYOCR", or "TESSERACT"'

# Spellcheck
if SPELLCHECK:
    print('using autocorrect.')
    spell = Speller(only_replacements=True)
    ocr_output = spell(ocr_output)

ocr_output += ' (OCRed in python; forgive typos)'

print('-'*10)
print(ocr_output)
print('-'*10)
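# hypothetical post-processing sketch (not used in the comparison above):
# the Google response reports a bounding box for every word, so we could
# group words into lines ourselves.  `line_gap` is an assumed threshold
# based on roughly how tall my handwriting is in pixels.
def group_words_into_lines(words, line_gap=40):
    # words: list of (y, x, text) tuples, e.g. built from
    # resp.text_annotations[1:] via annotation.bounding_poly.vertices
    lines = []
    prev_y = None
    for y, x, text in sorted(words):   # top-to-bottom, then left-to-right
        if prev_y is None or y - prev_y > line_gap:
            lines.append([])           # big vertical jump: start a new line
        lines[-1].append((x, text))
        prev_y = y
    return '\n'.join(' '.join(t for _, t in sorted(line)) for line in lines)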
</code></pre><p>We're going 80/20 here, and I only actually care about how well these work on my handwriting.  This is my initial test image, which I wrote on a ReMarkable tablet, filled with the kinds of phrases I might actually use:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.davidbramsay.com/content/images/2023/03/image.png" class="kg-image" alt srcset="https://blog.davidbramsay.com/content/images/size/w600/2023/03/image.png 600w, https://blog.davidbramsay.com/content/images/size/w1000/2023/03/image.png 1000w, https://blog.davidbramsay.com/content/images/size/w1600/2023/03/image.png 1600w, https://blog.davidbramsay.com/content/images/size/w1668/2023/03/image.png 1668w"><figcaption>"This is me writing a text message back to someone. I love you and miss you. Let us meet for ice cream at 4PM at Toscaninis. See you then! -David :)</figcaption></figure><h2 id="results">Results</h2><p><strong>EasyOCR: <em>"wnrivg 0 les+ Ths 1S Mq bec o Sme 6nQ . T MasSas? anj keb Iove miss Yu X , Wea} (c {C9 Crean ak US Toscnims . Sea 4PM 0t YX Hen ' _Davi&amp;"</em></strong></p><p><strong>EasyOCR+Sharpen: <em>"waring a Jez+ Ths 1S MQ Becl to Sme &amp;n9 1 MSSGS anj leb Iove YbU Wiss Xu , wes F Cac iC Cream at US Toscanims Sea 4pM at Y Hlen ' ~Davia"</em></strong></p><p><strong>Tesseract:</strong> <strong><em>"Tw aces iS we paring a texX snipe DAS ie Ss On. - Lob 0, {ove LEPM od Cor ice Cream ok Tos canims ae . ye Hye! ai =David ©"</em></strong></p><p><strong>Tesseract+Sharpen: <em>"Thm eae iS Wwe wary a ie cong Bee te &amp; : weons. L Lob 0 , {ove LEPM od Gor ice Cream ok Tos canims ee i Hye! ai =David ©"</em></strong></p><p><strong>simpleHTR:</strong> <strong><em>"c"</em></strong></p><p>simpleHTR doesn't seem to handle this multi-line image well.  
Just to see what would happen if I added a simple pre-processing step that extracts individual lines of text, I also tried simpleHTR with this image:</p><figure class="kg-card kg-image-card"><img src="https://blog.davidbramsay.com/content/images/2023/03/image-2.png" class="kg-image" alt srcset="https://blog.davidbramsay.com/content/images/size/w600/2023/03/image-2.png 600w, https://blog.davidbramsay.com/content/images/size/w1000/2023/03/image-2.png 1000w, https://blog.davidbramsay.com/content/images/size/w1600/2023/03/image-2.png 1600w, https://blog.davidbramsay.com/content/images/size/w2400/2023/03/image-2.png 2400w"></figure><p>simpleHTR: <strong><em>"This 1s e wniving Sert"</em></strong></p><p>None of the local options work well enough.  Luckily, we can just use Google's actual API.</p><p><strong>Google Vision API: <em>"me writing a text back to someone. I let This is message love you and miss you. us weet for ice cream at 4PM at Toscaninis. See you then! -David"</em></strong></p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.davidbramsay.com/content/images/2023/03/image-3.png" class="kg-image" alt srcset="https://blog.davidbramsay.com/content/images/size/w600/2023/03/image-3.png 600w, https://blog.davidbramsay.com/content/images/size/w1000/2023/03/image-3.png 1000w, https://blog.davidbramsay.com/content/images/size/w1056/2023/03/image-3.png 1056w"><figcaption>It gets all the words! Unfortunately, 'Let' seems to have been bumped up to a middle line.</figcaption></figure><p>It gets the words (mostly), but the line sensitivity is really high.  This happens with <em>both</em> <code>text_detection</code> and <code>document_text_detection</code>.</p><h2 id="final-solution">Final Solution</h2><p>We have two options: help the OCR algorithm or do the line segmentation ourselves.  
Since Google reports the coordinates of each word, it actually wouldn't be too hard to do the line segmentation ourselves (we know a priori roughly how large my handwriting tends to be).  It would be nice to give the algorithm a hint about minimum line height, but that isn't currently supported.  In the name of speed, let's simply try adding ruled lines to the image to give the OCR algorithm a hint:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.davidbramsay.com/content/images/2023/03/image-4.png" class="kg-image" alt srcset="https://blog.davidbramsay.com/content/images/size/w600/2023/03/image-4.png 600w, https://blog.davidbramsay.com/content/images/size/w1000/2023/03/image-4.png 1000w, https://blog.davidbramsay.com/content/images/size/w1600/2023/03/image-4.png 1600w, https://blog.davidbramsay.com/content/images/size/w2400/2023/03/image-4.png 2400w"><figcaption>Does this help?</figcaption></figure><p><strong>Google Vision API: <em>"This is me writing a text message back to someone. I love you and miss Let you. us weet for ice cream at 4PM at Toscaninis. See you then! -David ☺:)"</em></strong></p><p>It does seem to help; we have one out-of-place word (Let) and one wrong word (weet).  Passing this through the autocorrect actually made it worse ('weet' to 'went' instead of 'meet').  It also reveals how naturally slanty my writing can be without lines to write on.  
For now, I'm going to simply give myself some lines in my PDF template and we'll see how well it works!</p>]]></content:encoded></item><item><title><![CDATA[The Facebook Emotion Contagion Study]]></title><description><![CDATA[<p>I recently published an <a href="https://blog.davidbramsay.com/facebook-contagion-data/">essay to help contextualize the data</a> from the famous 2014 <a href="https://www.pnas.org/content/111/24/8788">Facebook Emotion Contagion study</a>– a study in which Facebook researchers removed between 10 and 90% of either positive or negative emotional content on user news feeds to see if it affected their emotions. </p><p>The study sparked</p>]]></description><link>https://blog.davidbramsay.com/the-facebook-emotion-contagion-study/</link><guid isPermaLink="false">60c8c0e4c66b5c391c160f79</guid><dc:creator><![CDATA[David Ramsay]]></dc:creator><pubDate>Thu, 01 Jul 2021 17:18:53 GMT</pubDate><content:encoded><![CDATA[<p>I recently published an <a href="https://blog.davidbramsay.com/facebook-contagion-data/">essay to help contextualize the data</a> from the famous 2014 <a href="https://www.pnas.org/content/111/24/8788">Facebook Emotion Contagion study</a>– a study in which Facebook researchers removed between 10 and 90% of either positive or negative emotional content on user news feeds to see if it affected their emotions. 
</p><p>The study sparked massive outrage and articles in <a href="https://www.theatlantic.com/technology/archive/2014/06/everything-we-know-about-facebooks-secret-mood-manipulation-experiment/373648/">The Atlantic</a>, <a href="https://www.wired.com/2014/06/everything-you-need-to-know-about-facebooks-manipulative-experiment/">Wired</a>, <a href="https://www.forbes.com/sites/kashmirhill/2014/06/28/facebook-manipulated-689003-users-emotions-for-science/?sh=20f2f7cd197c">Forbes</a>, <a href="https://www.nytimes.com/2014/06/30/technology/facebook-tinkers-with-users-emotions-in-news-feed-experiment-stirring-outcry.html">The NY Times</a>, <a href="https://www.npr.org/sections/alltechconsidered/2014/06/30/326929138/facebook-manipulates-our-moods-for-science-and-commerce-a-roundup">NPR</a>, <a href="https://www.bbc.com/news/technology-28051930">the BBC</a>, and just about every other major news organization.  It launched an ethical sub-genre about big-tech in books and papers (<a href="https://journals.sagepub.com/doi/full/10.1177/1747016115579531">1</a>, <a href="https://par.nsf.gov/servlets/purl/10184952">2</a>)– most notably, a team of <a href="https://www.nature.com/news/misjudgements-will-drive-social-trials-underground-1.15553">27 bio-ethicists wrote an op-ed in Nature defending the work</a>. It was the <a href="https://www.theguardian.com/technology/2014/dec/09/facebook-emotional-experiment-most-shared-academic-research">most shared academic research of 2014</a>.  It's been cited almost 3000 times.</p><p>As you'll see in <a href="https://blog.davidbramsay.com/facebook-contagion-data/">my earlier essay</a>, for quite a drastic intervention, the effect on user behavior is both incredibly small <em>and</em> completely unrelated to underlying affective state.  
Here I will address the ethics of the study, and how it has been (and continues to be) misused by the popular press.</p><h3 id="the-study-was-ethical">The Study was Ethical</h3><p>Prior to the Emotion Contagion Study, the prevailing theory about Facebook was one of <em>social reference– </em>most psychologists would have hypothesized that the overwhelmingly positive nature of Facebook posts was making people depressed and envious when considering their own lives in comparison to curated identities. Removing either positive or negative posts was expected to increase user well-being; it's only <em>in retrospect, </em>with the insights from the study, that we've come to believe that removing positive content might negatively affect users.</p><p>Given this theory, the right thing for Facebook researchers to do was to (1) check if it's true, and if so, (2) change how content is curated so as not to depress everyone.  This research answers a fundamental, extremely important question about how we should design social media.  Sharing it with the world was a generous thing to do.</p><p>The key for this kind of study is to implement it in such a way that there isn't a real risk of causing depression or significant emotional harm to users.  Such a risk– outside of the presumed, 'common-man' risk assumed by users of the platform– requires explicit consent.  </p><p>For this study, there is no evidence that any meaningful risk was incurred.  We can see that this intervention had <a href="https://blog.davidbramsay.com/facebook-contagion-data/"><em>no practical effect</em> on people's emotional states</a>; once again, leading psychologists would have predicted <em>positive effects </em>on user well-being if anything.  
In the end, no one even noticed that the intervention was taking place– that's not an indication of something powerful, mysterious, and surreptitious at work; it's an indication that the changes were <em>not a big deal</em>.</p><p>Few people care that Facebook A/B tests button color or layout– interventions with subtle psychological and behavioral implications.  Every design choice carries with it some risk; it's our job to make an a priori best guess at the possible implications, minimize uncertainty, and consent people when risks meaningfully exceed expectations.  We tacitly agree that testing UI design changes is okay, even though it might marginally affect user behavior.  We <em>should </em>accept continuous testing, because it lets the company improve the user experience.  </p><p>This is a brand of 'rule utilitarianism'– the value is high for this kind of continuous improvement, and consenting to every small change would make the experience terrible and degrade the quality of the service.  I should only be consented when the risk to me is real and unexpected.  There are presumed, reasonable, 'common-man' risks when consuming any media.</p><p>In using Facebook, you've already willingly subjected yourself to the psychological impact of a certain kind of media.  There is no evidence that this intervention<em> meaningfully</em> deviated from that core experience.  Consent in this case is like consenting people for making a specific movie scene slightly more or less violent when they're already an action movie fanatic; the implicit, presumed, accepted risk subsumes the effect of the intervention.</p><h3 id="improper-interpretations">Improper Interpretations</h3><p>This study is <em>still </em>frequently miscited in two egregious ways– (1) it's used as an example of the callous indifference of Big Tech to user well-being, and (2) it's used as a damning example of 'the power of AI to surreptitiously and powerfully manipulate people' (see, e.g., 
Shoshana Zuboff's famous 2019 'Surveillance Capitalism' book).  </p><p>I hope that we've adequately dismissed the first of these misconceptions above.  This was research for the common good, <em>not</em> something antagonistic to users.  <a href="https://www.nature.com/articles/511265a">Defining the ethical line for consent is a nuanced process</a>, but to suggest this research was done in bad faith with nefarious intentions is disingenuous.  Facebook had no incentive to publish this result publicly if they had truly poor intentions, and it's obvious that the researchers didn't see their work as unethical.  No one would've willingly subjected themselves to the mudslinging PR nightmare that ensued from this publication.</p><p>The second misconception– that this study is evidence of Big Tech's ability to powerfully and subliminally manipulate users– is also clearly wrong.  It follows from <a href="https://blog.davidbramsay.com/statistically-significant-does-not-mean-significant/">misconstruing 'a statistically significant effect' (which there is) with a 'powerful and important effect' (which there is clearly not)</a>, and making an invalid leap from a <a href="https://blog.davidbramsay.com/facebook-contagion-data/">behavioral measurement to an underlying emotional state</a>.  It's pushed along by a widely-held, blatantly wrong view that subtle design choices exert a powerful influence on human behavior– a view with roots in <a href="https://www.nature.com/articles/d41586-019-03755-2">discredited and inaccurate social priming research</a>. </p><p>To reiterate, this incredibly invasive intervention on emotional Facebook content <a href="https://blog.davidbramsay.com/facebook-contagion-data/">has nearly negligible effects on user behavior, and no known effect on user well-being or affect</a>.</p><p>This paper <em>is not </em>evidence of either a big-tech conspiracy or a new breed of sophisticated coercion.  Its proper interpretation makes the opposite point.  
With this research, Facebook took a step to understand, contextualize, and share the emotional impact they have on their users.  They demonstrated a <em>very tiny</em> behavioral effect with a dramatic intervention.  </p><p>It seems that if Facebook were to delete most of the positive posts on your newsfeed, you probably wouldn't notice, and you probably wouldn't care.</p>]]></content:encoded></item><item><title><![CDATA[Pop-Science Psychology Books are Untrustworthy]]></title><description><![CDATA[<p>I recently wrote a post outlining how the popular book <a href="https://blog.davidbramsay.com/glow-kids-and-the-crisis-in-pop-psychology/">Glow Kids undermined its credibility with fraudulent data</a>.  I also examined an <a href="https://blog.davidbramsay.com/do-clouds-make-you-buy-furniture/">overstatement by New York Times best-selling author Robert Cialdini in his most recent book</a>.  </p><p>Pop-psychology needs to be approached with a high degree of skepticism. Poor statistical</p>]]></description><link>https://blog.davidbramsay.com/pop-psychology/</link><guid isPermaLink="false">60db863fc66b5c391c16294c</guid><dc:creator><![CDATA[David Ramsay]]></dc:creator><pubDate>Wed, 30 Jun 2021 18:07:46 GMT</pubDate><content:encoded><![CDATA[<p>I recently wrote a post outlining how the popular book <a href="https://blog.davidbramsay.com/glow-kids-and-the-crisis-in-pop-psychology/">Glow Kids undermined its credibility with fraudulent data</a>.  I also examined an <a href="https://blog.davidbramsay.com/do-clouds-make-you-buy-furniture/">overstatement by New York Times best-selling author Robert Cialdini in his most recent book</a>.  </p><p>Pop-psychology needs to be approached with a high degree of skepticism. Poor statistical techniques, biased framing, and hyperbolic misinterpretation are endemic to psychology's popular science press.  
In this post, I've collected a few other famous examples of popular science gone wrong.</p><h2 id="why-we-sleep">Why We Sleep</h2><p>One of my MIT mentors suggested I read Dr. Matthew Walker's 'Why We Sleep' as part of my research.  It's a New York Times Bestseller; Bill Gates had this to say: </p><blockquote>Why We Sleep is an important and fascinating book…Walker taught me a lot about this basic activity that every person on Earth needs. I suspect his book will do the same for you.</blockquote><p>With this book, Matthew Walker launched himself into the limelight; he's appeared on 60 Minutes, Nova, BBC, NPR, and has glowing reviews across all the major news outlets.  He is single-handedly shaping the discourse on sleep, and shaping the behavior of thousands of readers as well.</p><figure class="kg-card kg-image-card"><img src="https://blog.davidbramsay.com/content/images/2021/06/image-10.png" class="kg-image" alt></figure><p>Why We Sleep was then <a href="https://guzey.com/books/why-we-sleep/">picked apart very thoroughly by Alexey Guzey</a>.  His post focuses on just<em> </em>fact-checking the first chapter, which he calls 'riddled with scientific and factual errors.'  The review is thorough, and I highly recommend reading it (<a href="https://statmodeling.stat.columbia.edu/2019/11/18/is-matthew-walkers-why-we-sleep-riddled-with-scientific-and-factual-errors/">Andrew Gelman recommends it too</a>).  He points to <a href="https://statmodeling.stat.columbia.edu/2019/12/27/why-we-sleep-data-manipulation-a-smoking-gun/">outright academic misconduct in the way data is presented</a> which <a href="https://yngve.hoiseth.net/articles/why-we-sleep-institutional-failure/">UC Berkeley has ignored</a>.</p><p>While Walker was forced <a href="https://sleepdiplomat.wordpress.com/2019/12/19/why-we-sleep-responses-to-questions-from-readers/">to issue some corrections</a>, this book is still out there.  
Moreover, a Google search of 'Why We Sleep' doesn't surface any criticism on the first page of results– only testimonials to the power of this work.</p><h2 id="thinking-fast-and-slow">Thinking Fast and Slow</h2><p>Another book– which I actually loved– is Daniel Kahneman's Thinking Fast and Slow.  Daniel Kahneman has done a lot of great scholarship in my opinion; unfortunately, he had to retract <a href="https://retractionwatch.com/2017/02/20/placed-much-faith-underpowered-studies-nobel-prize-winner-admits-mistakes/">the entire fourth chapter of his book</a>, on social priming, after publishing <a href="https://www.nature.com/news/polopoly_fs/7.6716.1349271308!/suppinfoFile/Kahneman%20Letter.pdf">an open letter in Nature warning of a looming 'train-wreck' for the field</a> when some of the core findings failed to replicate.</p><figure class="kg-card kg-image-card"><img src="https://blog.davidbramsay.com/content/images/2021/06/image-14.png" class="kg-image" alt></figure><p>This was largely in response to an analysis done by Ulrich Schimmack, who runs one of the best blogs on the replication crisis.  Just last year (2020) he published <a href="https://replicationindex.com/2020/12/30/a-meta-scientific-perspective-on-thinking-fast-and-slow/">a further critique of Thinking Fast and Slow</a>, in which he analyzed each chapter with his 'R-Index' (a score which assesses the likelihood of replication based on the power of each cited study, though it is highly variable when used to analyze just a small number of studies).  The results are pretty bad.  Out of the 13 chapters analyzed, the majority (seven chapters) fall below a 50% likelihood of replication (on average for the studies in that chapter); the other six vary widely, mostly hovering in the fifties and sixties.  Those odds are not great.  Schimmack summarizes:</p><blockquote>[Kahneman's] thoughts are based on a scientific literature with shaky foundations. 
Like everybody else in 2011, Kahneman trusted individual studies to be robust and replicable because they presented a statistically significant result. In hindsight it is clear that this is not the case. Narrative literature reviews of individual studies reflect scientists’ intuitions (Fast Thinking, System 1) as much or more than empirical findings. Readers of “Thinking: Fast and Slow” should read the book as a subjective account by an eminent psychologists, rather than an objective summary of scientific evidence. Moreover, ten years have passed and if Kahneman wrote a second edition, it would be very different from the first one. <strong>Chapters 3 and 4 would probably just be scrubbed from the book.</strong></blockquote><p>It is worth reading Kahneman's <a href="https://replicationindex.com/2017/02/02/reconstruction-of-a-train-wreck-how-priming-research-went-of-the-rails/comment-page-1/#comment-1454">direct response and acceptance</a> of Ulrich's reporting, left in the comments of the 2017 post '<a href="https://replicationindex.com/2017/02/02/reconstruction-of-a-train-wreck-how-priming-research-went-of-the-rails/comment-page-1/#comment-1454">Reconstruction of a Train Wreck: How Priming Research Went off the Rails.</a>'  This is, of course, <a href="https://slate.com/technology/2016/12/kahneman-and-tversky-researched-the-science-of-error-and-still-made-errors.html">slightly ironic</a>, as Kahneman specifically calls out underpowered research in his book as part of a discussion on 'the law of small numbers'; Kahneman acted admirably, though, in accepting and supporting the critique.</p><h2 id="before-you-know-it">Before You Know It</h2><figure class="kg-card kg-image-card"><img src="https://blog.davidbramsay.com/content/images/2021/07/image-15.png" class="kg-image" alt></figure><p>'Before You Know It: The Unconscious Reasons We Do What We Do' is written by John Bargh, the father of the notion of 'social priming', which has failed to replicate.  
(Two of his most famous studies– that warm beverages make you act with warmth, and that reading words associated with aging makes you walk slower– are now debunked.)  This book has also been covered by <a href="https://replicationindex.com/2017/11/28/bargh-book/">Ulrich Schimmack in an excruciatingly detailed, wonderful post</a>.  He estimates the replicability of the cited studies from each chapter based on their power:</p><blockquote>The more important question is how many studies would produce a statistically significant result again if all 400 studies were replicated exactly.  The estimated success rate in Figure 1 is less than half (41%). Although there is some uncertainty around this estimate, the 95% confidence interval just reaches 50%, suggesting that the true value is below 50%.  There is no clear criterion for inadequate replicability, but Tversky and Kahneman (1971) suggested a minimum of 50%.  Professors are also used to give students who scored below 50% on a test an F.  So, I decided to use the grading scheme at my university as a grading scheme for replicability scores.  So, the overall score for the replicability of studies cited by Bargh to support the ideas in his book is F.</blockquote><p>The best-performing chapters are chapter 10, where 62% of the studies should replicate if done exactly, and chapter 6, where this is true of 57% of the studies.  All other chapters have estimated replicability of &lt;50% (a low of 13% appears in Chapter 3).</p><p>Keep in mind this is just based on the statistical power of the study– we're estimating how likely it is that a 'significant' result would recur if the same study were repeated exactly.  This is different from whether <em>an effect is real and meaningful or not</em>.  Other studies can be done with much larger sample sizes (what we typically mean by <em>replication </em>in social science), and many of the concepts may well be disproven already as a result.  
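To see concretely why 'underpowered' translates into 'unlikely to replicate', here is a quick simulation– my own illustrative sketch, not Schimmack's R-Index code; the effect size (d = 0.4) and sample size (n = 35 per group) are hypothetical numbers chosen to land near his ~41% estimate:

```python
import random
from math import sqrt

# Toy illustration (my own sketch, NOT Schimmack's R-Index code): even when a
# small effect (d = 0.4) is real, an exact replication of a study with n = 35
# per group reaches p < .05 only about 40% of the time -- its statistical power.
random.seed(1)

def one_study(n=35, d=0.4):
    """Two-sample z-test on unit-variance normal data (known-variance case)."""
    g1 = [random.gauss(d, 1) for _ in range(n)]
    g2 = [random.gauss(0, 1) for _ in range(n)]
    diff = sum(g1) / n - sum(g2) / n
    se = sqrt(2 / n)              # standard error of the difference in means
    return abs(diff / se) > 1.96  # two-sided p < .05

trials = 2000
power = sum(one_study() for _ in range(trials)) / trials
print(power)  # hovers around 0.39: a real effect that 'replicates' less than half the time
```

In other words, a 41% replicability estimate means that even if every cited effect were real, fewer than half of exact reruns would come back significant.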
Marginal, meaningless effects can replicate 'significantly' if the study has a large enough sample size to capture them.  Schimmack's critique is really about methodology– the trust we can put in this specific set of papers, agnostic of other information.</p><h2 id="glow-kids">Glow Kids</h2><figure class="kg-card kg-image-card"><img src="https://blog.davidbramsay.com/content/images/2021/06/image-8.png" class="kg-image" alt></figure><p>I've written <a href="https://blog.davidbramsay.com/glow-kids-and-the-crisis-in-pop-psychology/">a more extensive essay on this particular book</a>; Kardaras mistakenly cites some purely fabricated research within it (the study's author is the subject of a criminal complaint for fraud). It's unfortunate that this 'research' was featured in <em>Glow Kids– </em>a book whose premise I really appreciate. </p><h2 id="behave">Behave</h2><figure class="kg-card kg-image-card"><img src="https://blog.davidbramsay.com/content/images/2021/07/image-18.png" class="kg-image" alt></figure><p>Sapolsky was prominently featured on <a href="https://whyevolutionistrue.com/2017/07/08/a-great-radiolab-show-robert-sapolsky-on-why-we-dont-have-free-will/">a Radiolab episode questioning free will</a>; in it, he cited <a href="https://jasoncollins.blog/the-effect-is-too-large-heuristic/">a famously incorrect study on judges</a>.  This study also features prominently in his book, <em>Behave</em>.  The study suggests that hunger drives favorable parole verdicts down from <em>65% </em>after a break to <em>0%</em> just before one.  Sapolsky interprets:</p><blockquote><em>What's interesting about that? Number one, the biology makes perfect sense. What are you doing there when you are a judge trying to judge somebody from a completely different world from you to reach a point of deciding. There's mitigating fact. You're trying to take their perspective. You're trying to think about the indirect ways that let -- you're using your frontal cortex. 
And when you're hungry and your frontal cortex isn't working as well, it's easier to make a snap emotional judgment: this person's rotten. The second amazing thing which exactly addresses this issue is, you get that judge two seconds after they made that decision, you sit him down at that point and say "Hey, so why did you make that decision?" And they're gonna quote, I don't know, Immanuel Kant or Alan Dershowitz at you. They're going to post-hoc come up with an explanation that has all the pseudo-trappings of free will and volition, and in reality it's just rationalization. It's totally biological.</em> </blockquote><p>Of course, it turns out <a href="https://nautil.us/blog/impossibly-hungry-judges">the cases were <em>actually ordered</em> by severity of offense</a> and this idea that hunger drives a judge's decision has no basis in reality.  <a href="https://nautil.us/blog/impossibly-hungry-judges">Daniel Lakens points out the absurdity of this finding– </a>"<em>if hunger had an effect on our mental resources of this magnitude,</em>" he writes, "<em>our society would fall into minor chaos every day at 11:45</em>".</p><h2 id="positivity">Positivity</h2><figure class="kg-card kg-image-card"><img src="https://blog.davidbramsay.com/content/images/2021/07/image-10.png" class="kg-image" alt></figure><p>Positivity is a famous book of the positive psychology movement, written by Barbara Fredrickson.  
She has faced significant criticism for her views on positivity– her famous suggestion that a <a href="https://en.wikipedia.org/wiki/Critical_positivity_ratio">golden 2.9013 to 1 ratio of positive to negative emotions</a> separates those that flourish from those that languish was publicly debunked, and forced a retraction of a chapter of this book; despite that, Fredrickson continues to defend the premise that there <em>is </em>a tipping-point ratio of positive to negative emotions that tips people between the two outcomes (and <a href="https://arxiv.org/abs/1409.4837">continues to be criticized for it</a>).</p><p>Humanistic psychologists have been skeptical of the oversimplifications of positive psychology, and Fredrickson's conception of fulfillment is one major example.  More positive feelings don't seem to directly lead to fulfillment.  Many deep thinkers would actually suggest that voluntary, efficacious self-sacrifice is quite important instead.  </p><p>It is quite unlikely that all humans, across all stages in life and all contexts, will obey a single rule.  It also directly contradicts pretty well-established notions of hedonic adaptation– that we <em>don't </em>spiral when confronted with a temporary state of bad or good emotions. People are actually quite robustly adaptive.</p><p>These kinds of simplifications– without empirical data to support them– push positive psychology further from its intended purpose, and do more harm than good for people who are in search of meaning in their lives.  </p><h2 id="stumbling-on-happiness">Stumbling on Happiness</h2><figure class="kg-card kg-image-card"><img src="https://blog.davidbramsay.com/content/images/2021/07/image-11.png" class="kg-image" alt></figure><p>I really enjoyed Dan Gilbert's <em>Stumbling on Happiness; </em>Gilbert is a talented and well-read author, and I think his work stands out as an interesting synthesis of many ideas.  
Unfortunately, Gilbert falls into many of the same traps as the others; his reporting of underlying research seems fraught with mistakes and difficult to trust.  Gilbert incorrectly cites the debunked social priming research, and particularly mischaracterizes research on agency and alexithymia in ways that are pretty detrimental to his argumentation.  <a href="https://blog.davidbramsay.com/dan-gilbert-and-happiness/">I've written more extensively about the issues I have with some of his characterizations of the underlying research here</a> if you're interested to learn more.  </p><h2 id="counterclockwise">Counterclockwise</h2><figure class="kg-card kg-image-card"><img src="https://blog.davidbramsay.com/content/images/2021/07/image-16.png" class="kg-image" alt></figure><p>Harvard's Ellen Langer features in Dan Gilbert's work as proof that <em>agency</em> matters for longevity, though her research doesn't seem to offer empirical support for that conclusion (as discussed in <a href="https://blog.davidbramsay.com/dan-gilbert-and-happiness/">my post on Stumbling on Happiness</a>).  Some of Langer's work has been called out in quite a bit of detail by <a href="https://sciencebasedmedicine.org/eminent-harvard-psychologist-mother-of-positive-psychology-new-age-quack/">James Coyne</a> and <a href="https://statmodeling.stat.columbia.edu/2019/08/17/what-can-be-learned-from-this-study/">Andrew Gelman</a>.  Langer's book also suffers from several small references to debunked social priming research.</p><p>Instead of rehashing the mistakes shared with other examples, we'll take a look at the thrust of the book, which focuses on its namesake– the Counterclockwise Study.</p><p>In 1979 Harvard's Ellen Langer took sixteen men in their 70s and 80s, and had them go on a retreat where the environment and media were designed to exactly replicate life 20 years earlier. 
Half of the men were told to live <em>as if it was the current day– </em>they discussed events from two decades prior as if they were unfolding and could not reference anything in their lives after the events.  The other half were told to reminisce with each other about the earlier era they were re-experiencing.  </p><p>She reports that both groups had hearing, memory, height, weight, gait, posture, and grip strength improvements; they both also 'looked younger' as judged by blinded raters.  The experimental group "<em>showed greater improvement on joint flexibility, finger length (their arthritis diminished and they were able to straighten their fingers more), and manual dexterity. On intelligence tests, 63 percent of the experimental group improved their scores, compared to only 44 percent of the control group.</em>"  This study led to a four-hour BBC mini-series in 2010 <a href="https://www.bbc.co.uk/programmes/b00tq4d3">called <em>The Young Ones</em></a>, where they repeated the same experiment with six older celebrities.</p><p>Unfortunately, it's hard to trust these results beyond what intuition confirms.  The testing suite appears extensive, the sample size is terribly small, and the participants are certainly aware of the desired results (<em>and</em> appreciative of the researchers for their experience).  Given the number of tests, the high variability of small samples, and the demand characteristics, it seems very likely at least a handful of results would come back 'significant' regardless of real underlying causality.</p><p>Even setting that aside, the only quantitative difference– 63% vs 44% of participants improving in intelligence– merely <em>sounds</em> impressive.<em>  </em>This is a manipulative framing of the underlying data; it's better described as <em>5 of 8</em> in one group vs. <em>4 of 9</em> in the other. (<em>I assume; earlier Langer states 8 people are in both groups, but 44% of 8 gives 3.5 people.  
One of the numbers must be incorrect– another red flag.</em>)   A difference<em> </em>of one person for groups of this size is not meaningful.  Over the course of the (at least) twelve different tests administered, several will show a difference of this size.</p><p>There are no peer-reviewed publications associated with this study, either; for all the effort, it is described only briefly in her book, bracketed with very few interpretive statements.  It reads like a cautious resignation to the minimal empirical value of the study's quantitative data.  However, on page 167, she makes one overstatement: </p><blockquote><em>The most dramatic example of language acting as placebo can be found in the counterclockwise study. The study used language to prime the participants, asking the elderly men at the retreat to speak about the past in the present tense. With language placing the experimental groups’ minds in a healthier place, their bodies followed suit.</em></blockquote><p>To be clear, that isn't to say that a meaningful difference between groups doesn't exist, only that the statistics and anecdotes derived from such a severely underpowered study couldn't possibly have captured it.</p><p>The question remains– does <em>acting </em>like your younger self make a difference compared to <em>reminiscing</em>?  Luckily, we can expect some rigor to end this story.  Langer pre-registered <a href="https://clinicaltrials.gov/ct2/show/NCT03552042">a large-scale replication</a> to be carried out in 2020 in collaboration with an Italian team.  Unfortunately, 'gathering a hundred 80-year-olds in Italy' is the fastest way to get your research shut down during a pandemic– hopefully we'll see some results once things return to normal.</p><p>Perhaps the biggest questions are the ones left unasked.  It seems quite obvious that bringing a group of dependent, isolated 70-80 year olds together on a week-long nostalgia trip will revitalize them.  Even with a small sample we can make that inference. 
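To make the 5-of-8 vs. 4-of-9 point above concrete, a quick Fisher's exact test shows just how unremarkable that split is– this is my own back-of-the-envelope check using only the standard library (the helper name is mine), not an analysis from Langer's book:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Sums the probabilities of every table with the same margins that is no
    more likely than the observed one.
    """
    row1 = a + b          # size of group 1
    col1 = a + c          # total number of 'improvers'
    n = a + b + c + d
    total = comb(n, row1)

    def p_table(k):
        # hypergeometric probability that group 1 holds k of the col1 improvers
        return comb(col1, k) * comb(n - col1, row1 - k) / total

    p_obs = p_table(a)
    lo, hi = max(0, row1 + col1 - n), min(row1, col1)
    return sum(p_table(k) for k in range(lo, hi + 1) if p_table(k) <= p_obs + 1e-12)

# Langer's reported intelligence-test improvements: 5 of 8 vs. 4 of 9
p = fisher_exact_two_sided(5, 3, 4, 5)
print(round(p, 3))  # 0.637 -- nowhere near any conventional significance threshold
```

With a p-value above 0.6, the '63% vs 44%' comparison carries no statistical weight at all.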
The <a href="https://www.nytimes.com/2014/10/26/magazine/what-if-age-is-nothing-but-a-mind-set.html">New York Times</a> described the BBC participants as "<em>apparently rejuvenated... [they] walked taller and indeed seemed to look younger. They had been pulled out of mothballs and made to feel important again.</em>"</p><p>How much of that is because they have <em>structured social time with peers</em>?  How much because they are breaking their routine and <em>experiencing something novel together</em>?  How much because the environment and media are all <em>visceral, nostalgic primes</em> that reawaken latent, more vital and youthful self-concepts?  How much because they are the <em>subject of an important and meaningful scientific study</em>?  How much did it matter at all?</p><p>Aging is both a psychological and biological shift.  It's certainly true that the psychological component is powerful, but it seems unlikely that the benefits here come from a re-conception of oneself as explicitly younger or a change of verbiage from past to present tense.  In reality, most of the benefit will come from a renewed, socially-reinforced feeling of purpose, agency, and novelty.</p><h2 id="pre-suasion">Pre-suasion</h2><figure class="kg-card kg-image-card"><img src="https://blog.davidbramsay.com/content/images/2021/08/image-1.png" class="kg-image" alt></figure><p>I've discussed some of Cialdini's <a href="https://blog.davidbramsay.com/do-clouds-make-you-buy-furniture/">misinterpretations before</a>, and have a detailed description of some of the mistakes in <a href="https://blog.davidbramsay.com/cialdinis-presuasion/">this particular book here</a>.  Conceptually, Cialdini broadly argues in this text that very subtle (sometimes subconscious) interventions have a large effect on our decision making.  This idea sits at the core of the replication crisis in social psychology.  
While there are some good insights in the book, they stand alongside several examples of weak empirical data.</p><h2 id="misbehaving">Misbehaving</h2><figure class="kg-card kg-image-card"><img src="https://blog.davidbramsay.com/content/images/2021/06/image-16.png" class="kg-image" alt></figure><p>One of the most popular examples of the 'power of defaults' in behavioral economics comes from <a href="http://www.dangoldstein.com/papers/DefaultsScience.pdf">Johnson and Goldstein's 2003 Science paper 'Do Defaults Save Lives?'</a>, which is frequently used in a misleading way to suggest that for complex decisions, we get overwhelmed and will simply choose the default<em>.  </em>This is a misinterpretation of the data– it turns out that many places have presumed consent, or don't take the consent seriously.  Meta-analyses have not shown the large effects typically attributed to the phenomenon.</p><p>Thaler gives a full and reasonable treatment to the topic of opt-in and opt-out in his book, clearly aware of the problems I listed above.  He settles on a policy suggestion of 'mandated choice' for organ donation, not at all<em> </em>relying on a 'default effect'.  </p><p>Despite the careful thinking, he states that "<em>[t]he findings of Johnson and Goldstein’s paper showed how powerful default options can be</em>”, suggesting that these misleading results prove that the default choice effect is <em>still</em> <em>powerful and important</em>, just not <em>binding</em> for this specific case. </p><p>While this is a relatively minor problem, it's a misattribution worthy of note.  It perpetuates a conception of human decision making that I believe is flawed<em>– </em>that very subtle changes drive important and complex decisions.  The default effect may be real, but <em>not when meaningful outcomes are on the line</em>.  
This concept applies only when a decision is low-stakes and preferences are weak.</p><h2 id="generalizing-the-evidence">Generalizing the Evidence</h2><p>These are some of the most famous examples of popular science books, written by very reputable and trustworthy academics (two of whom have Nobel Prizes and whom I admire greatly).  In general, their mistakes are evidence of a lack of rigor in analyzing their sources.  (To be fair, <a href="https://www.nicebread.de/whats-the-probability-that-a-significant-p-value-indicates-a-true-effect/">sources are hard to trust given the state of the replication crisis in the academic literature</a>.)  Mistakes range from relatively minor and sporadic (for Thaler) to quite major (entire books for Kahneman and Walker).  </p><p>Unfortunately, any error<em>– </em>whether sporadic or frequent– undermines a text's trustworthiness.  We're forced to fact check works of popular science instead of taking them at face value.  </p><p>It seems the available evidence points heavily towards a presumption of guilt in popular science accounting, at least as far as social psychology is concerned.  Until you've vetted an author, these books are better conceptualized as pointers to a collection of potentially interesting primary sources.  Pulling signal from the noise requires statistical fluency, patience, and rigor.</p>]]></content:encoded></item><item><title><![CDATA['Glow Kids' Cites Bad Research]]></title><description><![CDATA[<p>I recently finished the pop-science book 'Glow Kids', which is a reasonably compelling look at the damage screen culture is having on children.  Unfortunately, the author was fooled by some fraudulent research.  
</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.davidbramsay.com/content/images/2021/06/image-8.png" class="kg-image" alt><figcaption>The Cover of 'Glow Kids'</figcaption></figure><h3 id="glow-kids-and-screens-themselves">Glow Kids and Screens Themselves</h3><p>One thing I think about quite a bit is the</p>]]></description><link>https://blog.davidbramsay.com/glow-kids-and-the-crisis-in-pop-psychology/</link><guid isPermaLink="false">60db4284c66b5c391c16258c</guid><dc:creator><![CDATA[David Ramsay]]></dc:creator><pubDate>Tue, 29 Jun 2021 17:13:31 GMT</pubDate><content:encoded><![CDATA[<p>I recently finished the pop-science book 'Glow Kids', which is a reasonably compelling look at the damage screen culture is having on children.  Unfortunately, the author was fooled by some fraudulent research.  </p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.davidbramsay.com/content/images/2021/06/image-8.png" class="kg-image" alt><figcaption>The Cover of 'Glow Kids'</figcaption></figure><h3 id="glow-kids-and-screens-themselves">Glow Kids and Screens Themselves</h3><p>One thing I think about quite a bit is the separation of hardware and software. Most research really focuses on the software level– app design and content.  It's definitely true that <a href="https://www.darkpatterns.org/">dark patterns</a> and design hacks are making the software landscape an addictive, cognitive landmine for all of us. </p><p>However, from the pioneering work of Michael Posner (the world's leading authority on human attention), we know that <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4345129/">changes in motion and luminance drive the reorientation of our attention at a neurological level</a>.  Screens are an interface that embodies these two principles; by nature, they reorient and distract us from anything else that we might be focusing on.  
There are real problems with <em>the screen itself</em>, and I was excited to read a book that seemingly focused on this issue.</p><p>The book doesn't really focus on screens in themselves, but rather 'screen culture'.  Within it, Dr. Kardaras includes interesting points that are worthy of further exploration.  He makes a compelling case that committing realistic, simulated violent acts might have negative repercussions (suggesting that <a href="https://en.wikipedia.org/wiki/Christopher_Ferguson_(psychologist)">Dr. Chris Ferguson</a>, the main proponent of the view that video games don't influence violence, fails to account for mediating variables in his epidemiological research).  He dovetails this point nicely with his description of his video game addiction counselling work.  The stories of dissociation and anger that accompany severe video game addiction are quite sobering.  </p><p>He also weaves a compelling story about the infiltration of screens into the classroom despite <em>all available evidence </em>pointing to the fact that it is detrimental to student outcomes.</p><h3 id="slightly-questionable-reporting">Slightly Questionable Reporting</h3><p>Unfortunately it's hard to take the book completely at face value.  He treats the possible health effects of electromagnetic radiation (EMF) with a dramatic flair that I believe is unwarranted.  We don't have much strong data on this subject since ubiquitous EMF is so recent, and Dr. Kardaras is correct that <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5952570/">we know that cell phones warm the tissue in the brain</a> and that some evidence shows they <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5417432/">increase the risk of brain glioma</a>.  
This finding is controversial though; while there is <a href="https://www.cnn.com/2018/05/02/health/brain-tumors-cell-phones-study/index.html">some epidemiological support for it</a>, there have been large studies that <a href="https://www.mayoclinic.org/healthy-lifestyle/adult-health/expert-answers/cell-phones-and-cancer/faq-20057798">show no effects</a>, and the jury is still very much out.  </p><p>As Kardaras reports, a team of scientists working on behalf of the WHO <a href="https://www.iarc.who.int/wp-content/uploads/2018/07/pr208_E.pdf">did label EMF a Group 2B 'possible carcinogen' in 2011</a>.  But this label is best understood as 'can't be definitively ruled out<em> </em>for causing cancer'.  For a long time, <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3142790/">coffee also had the 2B designation</a> (it was recently downgraded to Group 3); pickles and aloe vera still reside there.  Red meat has a worse designation– 2A; processed meat falls in category 1.  We don't have clear data either way, but that ambiguity means that even if a cancer risk is real, we can be confident it will be small.  As far as we know, EMF is as dangerous as pickles.</p><p>Kardaras also correctly mentions that there are researchers associated with Harvard MGH <a href="https://pubmed.ncbi.nlm.nih.gov/24113318/">investigating the link between EMF and autism</a>.  The primary author, <a href="https://en.wikipedia.org/wiki/Martha_Herbert">Dr. Martha Herbert</a>, is a neurologist and an assistant professor at Harvard Medical School; it seems fair to call her 'controversial', in that her views on causal forces driving Autism are not mainstream.  
She has a very <a href="https://www.pbs.org/newshour/show/autism-now-dr-martha-herbert-extended-interview">holistic interpretation of the environmental drivers</a> of autism; while she's not an anti-vaxxer, she does leave the door open for a causal pathway from vaccine to autism expression in <a href="https://www.pbs.org/newshour/show/autism-now-dr-martha-herbert-extended-interview">her PBS NewsHour interview</a>.  Even with these things in mind, she herself would not lay autism at the feet of EMF.</p><p>I don't think these opinions are fairly contextualized or nuanced in 'Glow Kids', which is a shame.  I expect it will stir an urgent sense of fear in its audience over a risk that is very, very mild.  </p><h2 id="a-bad-citation">A Bad Citation</h2><p>The most interesting study that Dr. Kardaras cites in his book implies that our perceptual acuity has been measurably decreasing over decades— we can distinguish fewer shades of color, fewer sounds, etc– due to screen culture.  He writes:</p><blockquote>According to longitudinal research conducted by the German Psychological Association (GPA) in association with the University of Tubingen over a 20-year period, we are shockingly losing sensory awareness at a rate of 1 percent a year.</blockquote><blockquote>This research began in the 1960s after teachers working at the university noticed that, after the proliferation of television viewing in the 1950s, students seemed to suffer from a severe reduction in their sensory awareness; they appeared less alert than previous generations to information from their surrounding environment, which, in turn, was adversely affecting their ability to learn. The university then partnered with the GPA in order to quantify this phenomenon.</blockquote><blockquote>The researchers conducted sensory tests on 400 undergraduates per year over that 20-year period—a total of 8,000 subjects. 
The results shocked even the researchers; each successive cohort was slightly less sensitized than the prior cohort: “Our sensitivity to stimuli is decreasing at a rate of about one percent a year,” their report stated. </blockquote><blockquote>According to pioneering, visionary educator Joseph Chilton Pearce, author of Magical Child (1992), who wrote extensively about the study in his 2002 book, The Biology of the Transcendence: “Fifteen years ago people could distinguish 300,000 sounds; today many children can’t go beyond 100,000 . . . Twenty years ago the average subject could detect 350 different shades of a particular color. Today the number is 130.”</blockquote><p>This section goes on for a while longer.  It's an incredible claim, and the most exciting one in his book.  The citation comes from <a href="http://www.waldorflibrary.org/images/stories/Journal_Articles/RB2206.pdf">this article by Michael Kneissle</a> of the <a href="https://www.waldorfresearchinstitute.org/">Waldorf Research Institute</a>.  </p><p>The linked paper quotes the study behind these numbers– a study conducted by psychologist Henner Ertel.  I searched for any other<em> </em>research that would back up this claim and came up empty-handed; all roads lead back to this one article featuring Henner Ertel.</p><p>Unfortunately, there is no trace of Ertel in the peer-reviewed literature.  His name does appear in two other places, though. It first appears in an article called ‘<a href="https://www.zeit.de/zeit-wissen/2008/04/Institut-fuer-Volksverdummung/seite-2">Sold for Stupid</a>’, an article about Ertel's GRP– a fake, pseudoscientific research institute with two employees, known for mass producing fraudulent, sensational headlines on a wide range of research topics.  
The other result is <a href="https://www.psiram.com/de/index.php/Institut_f%C3%BCr_Rationelle_Psychologie">Ertel's entry on a pseudoscience watchdog page</a>.</p><p>Ertel <a href="https://www.zeit.de/zeit-wissen/2008/06/editorial-newsletter">has been the subject of a criminal complaint for his academic fraud</a>.  It further appears this sham psychologist <a href="https://www.zeit.de/online/2008/30/zeitwissen-replik">threatened ‘Zeit Online’ with a defamation suit</a>, and has a history of threatening scientists who question his validity with lawsuits of their own.  </p><p>Ertel's fake organization is called the Rational Psychology Association (Gesellschaft für Rationelle Psychologie, GRP). His 'relationship' with the University of Tuebingen comes from a direct quote by Ertel himself in the linked article above. As for the other author Dr. Kardaras cites directly– Joseph Pearce– his book contains the same paragraph almost verbatim, but with no citations.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.davidbramsay.com/content/images/2021/06/image-9.png" class="kg-image" alt><figcaption>The other book cited for this point. On page 146 (end of Chapter 5) is the exact quote from 'Glow Kids' with scant more context. It unfortunately comes with no citation.</figcaption></figure><p>Zeit Online reports that the main target for Ertel's fraud has been Men's Health magazine.  It seems Ertel managed to fool the Waldorf Institute, Joseph Chilton Pearce, and Dr. Kardaras as well.  </p><h2 id="fin">Fin</h2><p>This citation is pretty unfortunate– when I clicked on the link for the article I was immediately suspicious.  Dr. Kardaras fell prey to motivated research.  It's too bad, because I <em>agree</em> with much of the thrust of the book, and Dr. Kardaras could have preserved most of his points without these bad citations.  
Instead, his readers are forced to check the sources and approach the text with a healthy dose of skepticism.</p>]]></content:encoded></item><item><title><![CDATA[Never Say 'Statistically Significant' Again]]></title><description><![CDATA[<p>2019 was the year of revolt against 'statistical significance', with major critiques signed by hundreds of statisticians appearing across many major journals.  We're witnessing a fundamental change in how we evaluate and condense academic literature; a tectonic shift away from misapplied frequentist statistical techniques that have led to decades of</p>]]></description><link>https://blog.davidbramsay.com/statistically-significant-does-not-mean-significant/</link><guid isPermaLink="false">60cb8b08c66b5c391c161606</guid><dc:creator><![CDATA[David Ramsay]]></dc:creator><pubDate>Mon, 28 Jun 2021 20:00:51 GMT</pubDate><content:encoded><![CDATA[<p>2019 was the year of revolt against 'statistical significance', with major critiques signed by hundreds of statisticians appearing across many major journals.  We're witnessing a fundamental change in how we evaluate and condense academic literature; a tectonic shift away from misapplied frequentist statistical techniques that have led to decades of bad and poorly communicated science.  The concept of statistical significance has held back science for the last several decades and it's time to move forward. </p><h3 id="the-basics">The Basics</h3><p>When we evaluate a causal claim, we split our data into a control group and an intervention group– let's test whether <a href="https://www.statnews.com/2019/07/22/study-millions-should-stop-using-aspirin-for-heart-health/">aspirin prevents heart attacks</a> (p&lt;0.0001).  Our control group doesn't take aspirin, and our intervention group takes it daily; now we look at each individual's rate of cardiovascular events per year, by group (around 0.5% for both).  
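</p><p>As a minimal sketch of this setup (the group size, event rates, and random seed below are illustrative assumptions, not data from the aspirin study):</p><pre><code>import numpy as np

rng = np.random.default_rng(0)
n = 10_000                                     # participants per group (assumed)
control = rng.binomial(1, 0.005, size=n)       # 1 = cardiovascular event this year
intervention = rng.binomial(1, 0.005, size=n)  # aspirin group, same ~0.5% base rate

print(control.mean(), intervention.mean())     # per-group yearly event rates</code></pre><p>Everything that follows– effect size, p-value, confidence interval– is a function of two arrays like these.</p><p>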
We assume both the intervention and control data are normally distributed, and we are looking to see whether data from the intervention group has a different mean than the control.  </p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.davidbramsay.com/content/images/2021/06/image-5.png" class="kg-image" alt srcset="https://blog.davidbramsay.com/content/images/size/w600/2021/06/image-5.png 600w, https://blog.davidbramsay.com/content/images/size/w1000/2021/06/image-5.png 1000w, https://blog.davidbramsay.com/content/images/size/w1208/2021/06/image-5.png 1208w"><figcaption>Individual rates of cardiovascular events are plotted along the x-axis; we then fit a Gaussian to each group. We can ask all kinds of common sense questions– i.e. what is the difference in means (effect size)? Can that difference be easily explained by variance of the distributions (Cohen's d)? How likely are we to observe yellow intervention data if we're drawing from the green control distribution (p-value)? How confident can we be about our estimate of each mean, given its variance and the sample size (standard error/confidence interval)?</figcaption></figure><p>Treating the two groups as Gaussians we are sampling from, we can then try to answer some common sense questions.  Given the data in the control group, we can estimate a mean and variance– how likely are we to draw the data we see in the intervention group from the control group distribution?  How confident can we be in our estimates of means and variances given a certain number of observations? </p><p>P-values quantify the likelihood that some observed intervention data is simply the result of randomness in sampling from the control distribution.  A <em>statistically</em> <em>significant </em>effect is a categorical distinction on top of that p-value– i.e.
we call an effect 'statistically significant' when it would be unusual to observe the intervention data simply by drawing randomly from the control distribution, where 'unusual' is an arbitrary standard.</p><p>The term 'statistically significant' leads people to conflate <em>measurable</em> effects with <em>significant</em> effects.  'Statistical significance' is meant to convey that the data is suggestive of a true underlying relationship, <em>not</em> that it's a large enough effect to matter practically.  'Statistically significant' relationships are not necessarily important ones.</p><p>But the issues with 'statistically significant' don't end with its semantics.  Even as a way to understand whether data is giving insight into a real underlying relationship, the concept fails without proper context and understanding. Categorical distinctions like this are not useful in the literature and are harmful to scientific progress.  The term is confusing and we should stop using it– and I'm not the only one who thinks so.</p><h3 id="what-people-are-saying">What People Are Saying</h3><p>Andrew Gelman– one of the best living statisticians– famously published a 2006 paper entitled "<a href="http://www.stat.columbia.edu/~gelman/research/published/signif4.pdf">The Difference Between “Significant” and “Not Significant” is not Itself Statistically Significant</a>" which showed "that even large changes in significance levels can correspond to small, nonsignificant changes in the underlying quantities."  In '<a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3444174/">Using Effect Size– or Why the P Value is Not Enough</a>', authors Sullivan and Feinn start their piece by quoting two of the most influential statisticians of the modern era:</p><blockquote><em>Statistical significance is the least interesting thing about the results.
You should describe the results in terms of measures of magnitude– not just, does a treatment affect people, but how much does it affect them.</em></blockquote><p><em>-<em>Gene V. Glass</em></em></p><blockquote><em>The primary product of a research inquiry is one or more measures of effect size, not P values.</em></blockquote><p><em>-<em>Jacob Cohen</em></em></p><p>The above statisticians are not alone; in 2016, the American Statistical Association officially published <a href="https://amstat.tandfonline.com/doi/full/10.1080/00031305.2016.1154108#.YNKiJpNKgcj">a public statement</a> urging caution around the use of p-values.  Among many points about the lack of usefulness of p-values, one important point is that a <strong>"<em>p</em>-value, or statistical significance, does not measure the size of an effect or the importance of a result."</strong>  They followed it with a 2019 special issue entitled "<a href="https://www.amstat.org/ASA/Publications/Q-and-As/TAS-Special-Issue-Call-for-Papers.aspx">Statistical Inference in the 21st century: A World Beyond p&lt;0.05</a>". The headline article has a section labeled "<strong><a href="https://www.tandfonline.com/doi/full/10.1080/00031305.2019.1583913#_i2">Don't Say Statistically Significant</a></strong>":</p><blockquote>The <em>ASA Statement on P-Values and Statistical Significance</em> stopped just short of recommending that declarations of “statistical significance” be abandoned. We take that step here. We conclude, based on our review of the articles in this special issue and the broader literature, that it is time to stop using the term “statistically significant” entirely.
Nor should variants such as “significantly different,” “<em>p</em> &lt; 0.05,” and “nonsignificant” survive, whether expressed in words, by asterisks in a table, or in some other way.</blockquote><p>A <a href="https://www.nature.com/articles/d41586-019-00857-9">2019 editorial in Nature</a> followed this issue, summarizing the movement against 'statistical significance' in plain language.  With more than 800 signatories in the field, they "call for the entire concept of statistical significance to be abandoned."</p><p>When evaluating papers, it's much more important to understand the effect size and power, and put that effect in context of what it means with respect to a broader understanding of the subject.  Making sense of the literature in a world that statisticians have admitted is fundamentally flawed is a non-trivial exercise for those of us without a stats degree.</p><h3 id="what-statistical-significance-really-means">What Statistical Significance Really Means</h3><p>'Statistically significant' is based on some arbitrary, probabilistic standard– i.e. the data suggests a measurement is unlikely to be the result of random chance.  Frequently we set this arbitrary point at 0.05– so if the p-value is less than 0.05, we label a result as 'statistically significant'.  That means, if there is no real effect– if we assume our data was the result of random sampling– the odds of seeing this outcome are &lt;5%.  Every 20 times we run a test with the criterion p&lt;0.05, if there is <em>no underlying relationship</em>, we'd expect to hit this threshold once.  </p><p>Of course, this kind of thinking doesn't scale very well across many studies and many researchers.  If this were our only standard, we'd <em>expect </em>that for every 20 times someone tests a bogus theory, one result will 'confirm' (p&lt;0.05) a real effect when there isn't one.  
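</p><p>A quick simulation makes that base rate concrete (the sample sizes and seed are arbitrary choices for this sketch): draw both 'groups' from the <em>same</em> distribution– no real effect– and count how often a t-test crosses the threshold anyway.</p><pre><code>import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
trials = 5_000
hits = 0
for _ in range(trials):
    control = rng.normal(0.0, 1.0, size=50)       # both groups drawn from
    intervention = rng.normal(0.0, 1.0, size=50)  # the same null distribution
    if stats.ttest_ind(control, intervention).pvalue < 0.05:
        hits += 1

print(hits / trials)  # hovers around 0.05: one 'hit' per twenty null tests</code></pre><p>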
If every scientist is focusing their effort on implausible theories, the literature will be dominated by false positives–  we'd especially expect <em>popular misconceptions</em> to have statistically significant results.  </p><p>We need data from <em>all of our attempts to study a hypothesis</em>, or else we can't contextualize any single attempt.  Since authors tend to publish their positive results and not their negative results (the so-called <em>file-drawer effect</em>), and publications reinforce this bias in the review process (<em>publication bias</em>), we're left with overwhelmingly spurious results in the literature using this simple pass/fail technique on an arbitrary p-value threshold.  </p><p>It's also very easy to inadvertently <em>p-hack</em>– if we test 20 theories on one set of data, we'd <em>expect </em>1 to be 'statistically significant' with p&lt;=0.05 even if no true relationship exists.  This kind of poor methodology was common in the social sciences for a long time (i.e. test 20 theories on the data you collect, but only publish the one that 'hits' with p&lt;=0.05).  It's very natural, when an initial hypothesis fails, to consider other explanations that the data could reveal.  Fighting this urge, and rigorously testing only your a priori hypotheses, can be very unintuitive to people with poor statistical literacy.  In my opinion, we still do not train people against this rigorously enough.  Selectively publishing the details of this kind of approach has dire consequences for the overall trustworthiness of scientific literature.   </p><p>Beyond the pitfalls of publishing fair and accurate p-values, it's still common to misunderstand them conceptually.  It's a subtle but <em>very important </em>distinction between concluding 'repeating this experiment, you'd expect this or larger divergences 5% of the time simply due to randomness' (the correct interpretation) and 'the odds that my hypothesis is wrong is 5%' (terrible).  
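</p><p>As an aside, the multiple-comparisons arithmetic behind the p-hacking discussion above is worth making explicit– with twenty independent tests on null data, a spurious 'hit' is more likely than not:</p><pre><code># chance that at least one of k independent null tests clears alpha
alpha = 0.05
k = 20
p_any_hit = 1 - (1 - alpha) ** k
print(round(p_any_hit, 3))  # 0.642, i.e. a ~64% family-wise false positive rate</code></pre><p>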
The American Statistical Association reiterates: '<strong>By itself, a <em>p</em>-value does not provide a good measure of evidence regarding a model or hypothesis</strong>.'  They are very clear that the existence of an effect or association should never be assessed on p-values alone.  If that seems unintuitive to you, don't worry, <a href="https://library.mpib-berlin.mpg.de/ft/gg/GG_Mindless_2004.pdf">86% of statistics teachers also get this wrong</a>.  (The linked paper, 'Mindless Statistics' by Gerd Gigerenzer, is an excellent and sobering indictment of the modern practice of statistical techniques.)  Gigerenzer has also experimentally shown that <a href="https://journals.sagepub.com/doi/full/10.1177/2515245918771329">40% of psychology professors gravely misunderstand what statistical significance means</a>.</p><p>In a vacuum, moving from a p-value to an estimate of the odds of a hypothesis being 'correct' will depend on the effect size and your prior (how likely we believe the hypothesis is to be true).  In the broader scientific context, this kind of result would suggest further exploration– the odds of a correct hypothesis would then be revised by considering several studies, each with a different p-value that captures some underlying notion of uncertainty. </p><p>If we want to comment on the odds that a hypothesis is wrong given a p-value, we need to calculate the positive predictive value (PPV) as <a href="https://www.nicebread.de/whats-the-probability-that-a-significant-p-value-indicates-a-true-effect/">explained (excellently) here by Felix Schonbrodt</a>.  Based on meta-analysis of these disciplines, we know roughly the likelihood that a study is true given a 'significant' result (26-39% for social science, based on statistical analysis and further corroborated by replication studies).  
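</p><p>Schonbrodt's PPV calculation is simple enough to sketch directly (the power and alpha below are conventional defaults, and the priors are purely illustrative):</p><pre><code>def ppv(prior, power=0.8, alpha=0.05):
    """Fraction of 'significant' results that reflect a true effect."""
    true_hits = prior * power          # true effects that reach significance
    false_hits = (1 - prior) * alpha   # nulls that cross alpha by chance
    return true_hits / (true_hits + false_hits)

print(round(ppv(prior=0.1), 2))  # long-shot hypotheses: 0.64
print(round(ppv(prior=0.5), 2))  # coin-flip prior:      0.94</code></pre><p>Even with decent power, a field that mostly tests long-shot hypotheses will see a large share of its 'significant' results turn out to be false.</p><p>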
Incorporating this kind of information allows us to guess the likelihood of truth conditional on the information we have available in a Bayesian way.</p><p>All of this makes the p-value a remarkably difficult thing to interpret.  How can we tell whether a p-value is meaningful when there is so much bad practice centered around it?</p><h3 id="the-heroes-we-need">The Heroes we Need</h3><p>Fortunately, there are ways to understand whether p-values actually mean something.  Whenever I have a statistical question, I turn to my favorite sources on the subject– <a href="https://statmodeling.stat.columbia.edu/">Professor Andrew Gelman's Blog</a> and Professor <a href="https://replicationindex.com/">Ulrich Schimmack's Blog</a>.  </p><p>As we saw above, it's possible to incorporate some knowledge of the meta-statistics of a discipline to make a quick assessment of the predictive value of a given p-value without further information.  But there is a lot more to consider– the same p-value across different contexts can have very different meanings.</p><p>Professor Ulrich Schimmack is the best person to look towards for detailed advice on this subject.  In his excellent blog post "<a href="https://replicationindex.com/2021/01/15/men-are-created-equal-p-values-are-not/">Men Are Created Equal, P-Values Are Not</a>", he discusses how you can analyze a researcher's p-value distribution and estimate their 'file-drawer rate', the probability of p-hacking, and the likelihood of replication.  These kinds of analyses take into account the power of a researcher's studies (which we'll talk about in a moment).  It's also possible to look at meta-studies and p-score distributions within publications to assess publication bias.  </p><p>Professor Schimmack takes it one step further with his <a href="https://replicationindex.com/2021/01/19/personalized-p-values/">"Personalized P-Value"</a> post.  
Beyond bad statistical practices (like file-drawer/p-hacking), different researchers can also have different practices– i.e., they can analyze and prove obviously correct hypotheses with a high hit rate, or they can look for relatively obscure and surprising insights with a low hit rate.  </p><p>Imagine two researchers– one that investigates the size of obvious effects, and one that's looking for counterintuitive insights.  They both perform 20 experiments; the first researcher finds 19 'statistically significant' results with p&lt;0.05, while the second finds only one. If we look at any one of the 'statistically significant' results, can we make a guess about whether one is more likely to be true based on the researcher? </p><p>While both approaches are valid and valuable, the same p-value conveys a <em>very different</em> 'likelihood of revealing a real underlying effect'.  Luckily, both of these research strategies show up in their p-value distributions. </p><h2 id="how-to-read-social-science-papers">How to Read (Social Science) Papers</h2><p>What does this mean for how we read a paper?  I'm going to focus on social science (my area of interest), but the same applies for other fields that struggle epistemologically (medicine, neuroscience, and machine learning all come to mind– take a look at the very enlightening '<a href="https://arxiv.org/abs/1711.10337">Are All GANs Created Equal?</a>' for some insight on how this applies to machine learning research).  </p><p>Sadly, when it comes to making claims about social psychology, at the moment <a href="https://fantasticanachronism.com/2020/09/11/whats-wrong-with-social-science-and-how-to-fix-it/">you're better off guessing</a> (or asking your friends) than trusting the literature.  
<a href="https://www.theatlantic.com/science/archive/2018/08/scientists-can-collectively-sense-which-psychology-studies-are-weak/568630/">Prediction markets do a great job predicting replication</a>; check out this sobering chart from <em>Nature Human Behavior</em> on social science articles published in the top impact journals:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.davidbramsay.com/content/images/2021/06/image-6.png" class="kg-image" alt srcset="https://blog.davidbramsay.com/content/images/size/w600/2021/06/image-6.png 600w, https://blog.davidbramsay.com/content/images/size/w1000/2021/06/image-6.png 1000w, https://blog.davidbramsay.com/content/images/size/w1600/2021/06/image-6.png 1600w, https://blog.davidbramsay.com/content/images/size/w2400/2021/06/image-6.png 2400w"><figcaption>Fig 4. from <a href="https://www.nature.com/articles/s41562-018-0399-z">Evaluating the replicability of social science experiments in <i>Nature</i> and <i>Science</i> between 2010 and 2015</a>. Yellow studies failed to replicate, while blue did; users were polled and a betting market was used to determine consensus from average people. A major caveat, of course, is that we're still using a notion of 'statistically significant' here to define 'successful replication', which I've spent this article arguing is completely untrustworthy (and it is), and a more detailed analysis is necessary beyond this chart. Despite the hard cringe, its point has a lot of evidence to back it up.</figcaption></figure><p>If you can find a scientific prediction market, that's actually a great place to check your instinct about a paper.  <a href="https://www.socialsciencespace.com/2020/08/using-prediction-markets-to-forecast-replication-across-fields/">DARPA is actually funding such a project</a>.</p><p>It turns out we all spend quite a bit of our time observing human behavior.  
You should have strong priors in social science, and in the absence of a consensus mechanism, you should trust your intuition.  Moreover, we established earlier that less than 40% of 'significant' studies replicate, <a href="https://fantasticanachronism.com/2020/09/11/whats-wrong-with-social-science-and-how-to-fix-it/">regardless of journal</a> (and that number <a href="https://www.theatlantic.com/science/archive/2018/08/scientists-can-collectively-sense-which-psychology-studies-are-weak/568630/">is closer to 26% for social psychology</a>)– so your prior for the truth of a paper should be quite low (a 'significant' result in a paper should bias you towards believing it's <em>more likely false </em>than if you hadn't encountered it in the literature).  This trend applies regardless of the year (but hopefully that will change!) <strong>*</strong></p><p>Furthermore, inaccurate studies are <a href="https://www.sciencemag.org/news/2021/05/unreliable-social-science-research-gets-more-attention-solid-studies">cited significantly more on average, and retractions don't seem to change citation behavior</a>.  <a href="https://www.nature.com/articles/s41562-019-0787-z">Meta-analyses suffer from similar problems</a>.  Science journalism is very <a href="https://pubmed.ncbi.nlm.nih.gov/28222122/">biased</a> and <a href="https://medium.com/i-data/misleading-with-statistics-c63780efa928">misleading</a> in what it reports.  As anti-scientific (and horribly depressing) as it is, our simplest and best first-order heuristic is our gut.  </p><p>But all is not lost.  There is <em>real and useful </em>information in the literature if we work hard to extract it.  
To do so requires a search for replication attempts and meta-analyses (<a href="http://www.socialsciencesreplicationproject.com/">1</a>, <a href="https://science.sciencemag.org/content/349/6251/910">2</a>, <a href="https://retractionwatch.com/">3</a>) supplemented with general background research.  Many big ideas <em>have </em>been scrutinized, and there are great blogs and journals that focus on this topic– I highly recommend checking <a href="https://statmodeling.stat.columbia.edu/">Andrew Gelman's blog</a>, <a href="https://replicationindex.com/">Ulrich Schimmack's blog</a>, <a href="http://steamtraen.blogspot.com/">Nick Brown's blog</a>, <a href="https://retractionwatch.com/">Retraction Watch</a>, <a href="http://datacolada.org/">Data Colada</a>, <a href="https://www.bitss.org/">Initiative for Transparency in the Social Sciences</a>, <a href="https://osf.io/ezcuj/wiki/home/">Brian Nosek's Center for Open Science (OSF) and their Reproducibility Project</a>, and the <a href="https://replicationnetwork.com/tag/retraction-watch/">Replication Network</a>.  You might also skim the writings of <a href="http://daniellakens.blogspot.com/">Daniel Lakens</a>, <a href="https://www.chronicle.com/article/positive-psychology-goes-to-war">Jesse Singal</a>, <a href="https://www.gleech.org/psych?fbclid=IwAR0Rxj8WUBGJUDqINGrx77M8mYn901U4TP0C-wHBurtiN5agMCJFDHsOjLM">Gavin Leech</a>, <a href="https://fantasticanachronism.com/2020/09/11/whats-wrong-with-social-science-and-how-to-fix-it/">Alvaro de Menard</a>, or <a href="https://www.nicebread.de/p-curving-journals/">Felix Schönbrodt</a>.</p><p>There are also several specific concepts that have been discredited.  Simple background searches are imperative.  
It's important to check the <a href="http://retractiondatabase.org/">Retraction Database</a> to see any notes; if a paper is modern and you can't find any red flags, statisticians like Andrew Gelman <a href="https://statmodeling.stat.columbia.edu/2014/02/24/edlins-rule-routinely-scaling-published-estimates/">recommend scaling all effect sizes down by an 'Edlin factor'–  between 1/2 and 1/100 of what is reported–</a> conditional on the methodology.</p><p>For a more in-depth analysis of an individual paper, we can also check Schimmack's <a href="https://replicationindex.com/2021/01/19/personalized-p-values/">rankings of the 400 most prominent social scientists</a> and <a href="https://replicationindex.com/?s=journals">rankings of the top 120 social science journals</a> to see how trustworthy an author/journal combination truly is.  His blog is also a gold mine of useful context.</p><p>If you really believe a topic is important and no one has done any fact-checking, you can also do it yourself.  Nick Brown offers a <a href="http://steamtraen.blogspot.com/2018/05/a-step-by-step-introduction-to-how.html">simple forensic tool for statistics called SPRITE</a> that attempts to back out underlying distributions from summary statistics.  Don't be afraid to reverse engineer the analysis.</p><p>Taking into account <em>all of this secondary information</em>, it's possible to critically examine the literature with a well-informed prior, and to fairly contextualize the power and effect size of a given study.  <em>Most of our work for reading a paper in social science must be done <strong>at the level above</strong> an individual paper.  </em></p><p>Useful information is there for us with proper skepticism and scrutiny.  
Great claims require great evidence.</p><h2 id="what-does-this-mean-for-our-research">What does this Mean for Our Research?</h2><p>So by now we've done our best to destroy the concept of 'statistical significance' and to rethink our analysis of the existing literature; now how do we apply this to <em>our research</em>?  Below are a few best practices to apply to our work.</p><h3 id="report-several-measures-correctly-">Report Several Measures (Correctly)</h3><p>In terms of reporting results, we should report an effect size that has been normalized by the standard deviation (Cohen's d or an odds ratio), a p-value (<em>without</em> any additional commentary), and a confidence interval (which normalizes the variance by the sample size to give us an estimate of the standard error of our mean).  We should be careful and thorough in interpreting our results– keep the PPV in mind, and <em>do not</em> make the mistake of equating a p-value of 0.05 with a 95% chance that your hypothesis is correct <a href="https://www.nature.com/articles/d41586-019-00857-9">as more than half of studies do</a>.  </p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.davidbramsay.com/content/images/2021/06/image-7.png" class="kg-image" alt srcset="https://blog.davidbramsay.com/content/images/size/w600/2021/06/image-7.png 600w, https://blog.davidbramsay.com/content/images/size/w800/2021/06/image-7.png 800w"><figcaption>Source: <a href="https://www.nature.com/articles/d41586-019-00857-9">V. Amrhein <i>et al.</i> 'Scientists Rise Up Against Statistical Significance.' 
Nature 2019.</a></figcaption></figure><h3 id="create-an-explicit-a-priori-hypothesis-and-pre-register">Create an Explicit A Priori Hypothesis and Pre-Register</h3><p>Most importantly, we should be very explicit about exactly what hypothesis we will be testing <em>before </em>we start our study design and only test that hypothesis; and to make sure we're not wasting time with an underpowered study, we need to set out with an appropriate sample size.  You should then pre-register your study publicly, at a place like <a href="https://aspredicted.org/">Wharton's Credibility Lab</a> or <a href="https://www.cos.io/initiatives/prereg">The Center for Open Science</a>.</p><h3 id="conduct-an-a-priori-power-analysis-to-choose-your-n">Conduct an A Priori Power Analysis to Choose Your N</h3><p>If we make a mistake, it will be because we either find an effect that isn't real (type 1 error) or fail to capture an effect that is real (type 2 error).  Type 1 errors are clearly more catastrophic, but type 2 errors are also common in fields where effects are small and noisy, and recruiting large groups of participants is difficult.  When we do a power analysis, we can set a threshold for our probability of making a type 1 error (alpha) as well as a type 2 error (beta).  A study's <em>power</em> is the complement of beta (1-beta)– i.e. the likelihood that you detect an effect if one exists.</p><p>We should use a power calculator online (or a tool <a href="https://stats.idre.ucla.edu/other/gpower/">like UCLA's G*Power</a>) <em>before we do a study</em> to calculate N. We need to know the type of statistical test we're using, the alpha (usually 0.01 or 0.05), the expected effect size, and the power we want to achieve (usually 0.8-0.9).  With that information, we can calculate the sample size we'll need to measure an effect if one exists 80-90% of the time with a p-value of 1-5%. 
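</p><p>This calculation is easy to script; here is a sketch using statsmodels for a two-sample t-test with a medium expected effect (G*Power reports the same number):</p><pre><code>from statsmodels.stats.power import TTestIndPower

# solve for the sample size, given effect size, alpha, and power
n_per_group = TTestIndPower().solve_power(effect_size=0.5,  # expected Cohen's d
                                          alpha=0.05,       # type 1 error rate
                                          power=0.8)        # 1 - beta
print(round(n_per_group))  # ~64 participants per group</code></pre><p>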
</p><h3 id="broaden-your-toolset">Broaden your Toolset</h3><p>Fisher– the 'inventor' of p-values and the father of modern statistics– was deeply flawed in his thoughts about our ability to reason causally.  The tide of Bayesian logic is swiftly overtaking the bad practices he instilled years ago.  Even he believed that this kind of analysis was basic and should only be applied where little contextual knowledge was available, and never with a specific p-value threshold.  Many of the defining statisticians that followed were very critical of the concept.</p><p>There are many other statistical tools available– Gigerenzer points to "descriptive statistics, Tukey’s exploratory methods, Bayesian statistics, Neyman–Pearson decision theory and Wald’s sequential analysis" in his <a href="https://library.mpib-berlin.mpg.de/ft/gg/GG_Mindless_2004.pdf">'Mindless Statistics' review</a>.  It's our job to get to know these tools, to spend time understanding the statistics, and to <em>never </em>take a result at face value without understanding the math behind it.  Science is contingent on putting in the hard statistical work.</p><p>And of course, don't forget the most important bit of advice:</p><h3 id="never-say-statistically-significant-again"> <strong><em>Never Say 'Statistically Significant' Again</em></strong></h3><p></p><p></p><p></p><p>* <em>The idea of replication itself is, in many ways, based upon this categorical definition of 'statistically significant'.  We can assess 'replication' by looking at a 'statistically significant' result and then forming a holistic picture to see if that result is accurate– in this way, there is a very real replication crisis.  It's almost impossible to know whether a given paper is accurate at face value and many results that are taken as 'true' are simply not.  
</em></p><p><em>It's worth noting, though, that a 'statistically significant' study that then comes up in another, higher-powered study as *not* 'statistically significant', doesn't mean the two studies are contradictory, and doesn't necessarily mean that someone utilized poor research methodology.  These two datasets can be analyzed and reconciled as a whole, more accurately sculpting an underlying stochastic model of the world.  We have to be careful when we talk about the 'replication crisis' as much as any other statistical concept.   </em></p>]]></content:encoded></item><item><title><![CDATA[Quick Note on Rationality]]></title><description><![CDATA[<p>Rationality has two definitions that I frequently come across– here's a quick note on some important concepts related to the idea.</p><h3 id="rationality-in-economics">Rationality in Economics</h3><p>In economics, the word 'rational' is used to describe self-consistency– i.e. if you prefer A over B over C, you prefer A over C.  You</p>]]></description><link>https://blog.davidbramsay.com/rational-has-two-meanings/</link><guid isPermaLink="false">60d4c4b4c66b5c391c161e34</guid><dc:creator><![CDATA[David Ramsay]]></dc:creator><pubDate>Thu, 24 Jun 2021 18:54:59 GMT</pubDate><content:encoded><![CDATA[<p>Rationality has two definitions that I frequently come across– here's a quick note on some important concepts related to the idea.</p><h3 id="rationality-in-economics">Rationality in Economics</h3><p>In economics, the word 'rational' is used to describe self-consistency– i.e. if you prefer A over B over C, you prefer A over C.  You have a consistent utility function and full knowledge.  <em>Homo economicus </em>is rational and self-interested.</p><p>This has of course evolved into 'bounded rationality'– models of decision-making that take into account cognitive limits, costs, and biases.  
This concept was coined by Nobel Prize-winning economist Herbert Simon, and updated by Gerd Gigerenzer– much of the conversation describes heuristics for decision-making that might be more optimal when accounting for decision costs; the debate rages on over whether these concepts are 'biases' and flaws in cognition that we need to correct for, or adaptive and optimal.  (For instance, the 'mere exposure effect'– our supposed cognitive flaw of preferring and selecting things that we recognize– has been recast by Gigerenzer as the 'recognition heuristic', in which ignorant tennis fans select winners based on recognition better than those that follow the sport).</p><p>An ethical corollary to bounded rationality is rule utilitarianism– we should act in ways that conform to principles of behavior.  Even when a local decision is sub-optimal, reinforcing the moral principle is important for both building robust habits in the face of ethical quandaries and minimizing the decision costs.  A common example of this is stopping at red lights– if at every intersection we all got out and presented an argument about who has a greater ethical case for going first, the decision cost would outweigh the ethical cost (even with the odd case that someone is rushing to the hospital).  Moral action is evaluated in line with principles of behavior– a kind of virtue ethics, instead of a deontological or consequentialist one. </p><h3 id="rationality-in-philosophy">Rationality in Philosophy</h3><p>In philosophy, 'rationalism' describes a belief in the supremacy of reason– reasoning and logic are the primary way we come to knowledge.  Kant suggested that there are analytic propositions which are true solely by virtue of semantics, like 'bachelors are unmarried', and early rationalists typically stood in opposition to empiricists, in their belief that principles of logic exist outside of experience and don't necessarily require empirical data to support them.  
</p><p>It's important to draw some distinctions here in philosophy where (tacitly) empiricism and rationalism stand in contradiction.  An empiricist might say something like 'art is valuable because we perceive it as valuable'– you have to trust your perceptions <em>first </em>before you construct logic and argue from it.  A rationalist might say something like 'there is no articulation from the first principles of reason why art is valuable'– in this construction, reasoning about the value of life, emotion, positivity, etc. is elevated <em>above</em> perceptual experience.</p><p>The modern intellectual world has drifted towards rationalism– we don't trust our perceptions (after all, we're full of cognitive biases and easily manipulated), and when push comes to shove we are nihilistic (nothing has provable meaning and therefore art certainly doesn't).  We might enjoy art, but we treat that enjoyment with suspicion; intrinsically meaningful pursuits are not in-and-of-themselves meaningful.  </p><p>I think it's pretty hard, however, to argue that pure reason or pure logic exists without empirical grounding– rational thought and abstraction <em>follow </em>from empirical experience.  If this is the case, it's completely illogical to apply the empirically derived rules of rationality– the subset of empirical experience that follows simple rules that we use to model the workings of objective reality– to the broader set of subjective empirical experiences.  If we start by trusting our empirical experience, we should trust all of it.</p><p>Anti-rationalists typically reject the application of rational principles to human experience or behavior.  The idea that we can live happily in line with rational ideals, or that rationalism is good for the psyche, is a major point of critique.  </p><p>Regardless of the truth of rationalism (i.e., in this example, that art has no provable value), we certainly must operate as if art has value.  
The more rationally you try to act the more you fight against your nature.  Few have put it more powerfully and succinctly than Dostoevsky in Notes from the Underground:</p><blockquote>In short, one may say anything about the history of the world--anything that might enter the most disordered imagination. The only thing one can't say is that it's rational. The very word sticks in one's throat. And, indeed, this is the odd thing that is continually happening: there are continually turning up in life moral and rational persons, sages and lovers of humanity who make it their object to live all their lives as morally and rationally as possible, to be, so to speak, a light to their neighbours simply in order to show them that it is possible to live morally and rationally in this world. And yet we all know that those very people sooner or later have been false to themselves, playing some queer trick, often a most unseemly one. Now I ask you: what can be expected of man since he is a being endowed with strange qualities? Shower upon him every earthly blessing, drown him in a sea of happiness, so that nothing but bubbles of bliss can be seen on the surface; give him economic prosperity, such that he should have nothing else to do but sleep, eat cakes and busy himself with the continuation of his species, and even then out of sheer ingratitude, sheer spite, man would play you some nasty trick. He would even risk his cakes and would deliberately desire the most fatal rubbish, the most uneconomical absurdity, simply to introduce into all this positive good sense his fatal fantastic element. It is just his fantastic dreams, his vulgar folly that he will desire to retain, simply in order to prove to himself--as though that were so necessary-- that men still are men and not the keys of a piano, which the laws of nature threaten to control so completely that soon one will be able to desire nothing but by the calendar. 
And that is not all: even if man really were nothing but a piano-key, even if this were proved to him by natural science and mathematics, even then he would not become reasonable, but would purposely do something perverse out of simple ingratitude, simply to gain his point. And if he does not find means he will contrive destruction and chaos, will contrive sufferings of all sorts, only to gain his point! He will launch a curse upon the world, and as only man can curse (it is his privilege, the primary distinction between him and other animals), may be by his curse alone he will attain his object--that is, convince himself that he is a man and not a piano-key! If you say that all this, too, can be calculated and tabulated--chaos and darkness and curses, so that the mere possibility of calculating it all beforehand would stop it all, and reason would reassert itself, then man would purposely go mad in order to be rid of reason and gain his point! I believe in it, I answer for it, for the whole work of man really seems to consist in nothing but proving to himself every minute that he is a man and not a piano-key! It may be at the cost of his skin, it may be by cannibalism! And this being so, can one help being tempted to rejoice that it has not yet come off, and that desire still depends on something we don't know? </blockquote><p></p>]]></content:encoded></item><item><title><![CDATA[The Lever Problem]]></title><description><![CDATA[<p>The modern academic and journalistic refrain calls out the same arguments– Twitter, Facebook, and Google are radicalizing our political dialog, supplanting our meaningful social connections, and driving us towards predictable consumerist self-medication.  The only solution is more oversight, increased regulation, and tighter control of our communication technologies.  
This analysis has</p>]]></description><link>https://blog.davidbramsay.com/the-lever-problem/</link><guid isPermaLink="false">60538549c66b5c391c15fed0</guid><dc:creator><![CDATA[David Ramsay]]></dc:creator><pubDate>Tue, 22 Jun 2021 22:46:34 GMT</pubDate><content:encoded><![CDATA[<p>The modern academic and journalistic refrain calls out the same arguments– Twitter, Facebook, and Google are radicalizing our political dialog, supplanting our meaningful social connections, and driving us towards predictable consumerist self-medication.  The only solution is more oversight, increased regulation, and tighter control of our communication technologies.  This analysis has missed the forest for the trees.</p><h3 id="the-lever-problem">The Lever Problem</h3><p>Most people are familiar with Harvard's late B.F. Skinner– one of the pioneers of modern behaviorism– because of his <a href="https://en.wikipedia.org/wiki/Operant_conditioning_chamber">Skinner Box experiments</a>.  This technique houses small animals in a box with a lever, with rewards appearing on different schedules (sometimes reinforcing the lever pushes, and sometimes not) to see how rewards reinforce behaviors.  Famously, intermittent variable rewards cause animals to press the lever most frequently.  These reward structures are what we find in slot machines and gambling, as well as the design of social media websites.</p><p>A famous critique of this result came with the 1978 '<a href="https://en.wikipedia.org/wiki/Rat_Park">Rat Park</a>' experiment.  The researchers behind this test suggested that previous experiments on highly addictive rewards (in which pigeons and rats would voluntarily drink morphine-laced water until they died) missed a crucial component – these animals were isolated and caged in a way that would promote drug addiction.  
The Rat Park researchers believed that the drug addiction seen in these experiments was <em>not because drugs had <strong>been introduced</strong> to the animal environment, it was because normal social contexts had <strong>been removed</strong>.  </em>They showed that rats presented with the same morphine rewards, but in a normal context (a larger, shared cage with other rats where they could reproduce), accessed the morphine rewards significantly less.</p><p>The specific results of the Rat Park studies <a href="https://theoutline.com/post/2205/this-38-year-old-study-is-still-spreading-bad-ideas-about-addiction">are quite controversial</a> despite their fame – the original study has failed to replicate consistently and was passed over by more prestigious journals because of serious methodological errors.   It's unlikely that the damning results they report are actually accurate.  However, a <a href="https://journals.helsinki.fi/jrn/article/view/10.31885.jrn.1.2020.1318">recent paper</a> in the Journal for Reproducibility in Neuroscience argues that though the specific findings are not true, conceptually the idea has withstood the test of time. </p><p>For example, Lee Robins' <a href="https://ajph.aphapublications.org/doi/pdf/10.2105/AJPH.64.12_Suppl.38">1974 study</a> of Vietnam War veterans showed that, based on random drug tests, 10% abused narcotics before the war and 11% after the war, while that number skyrocketed to 43% while deployed (34% of those being heroin users).  The incredibly low rate of heroin recidivism <em>after </em>use in Vietnam is truly remarkable (only ~1% of veterans re-addicted upon return). 
A <a href="https://pubmed.ncbi.nlm.nih.gov/27650054/">2017 follow-up study</a> suggested that this result has much more to do with social and psychological factors (citing disapproval from family, legal troubles, fear for health, and fear of addiction) than with access (most veterans reported that it was easy to obtain heroin in their area, and 10% tried heroin upon return even though only 1% continued).  If we just consider those that had<em> </em>easy enough access to heroin and high enough motivation to use it again, a 1-in-10 re-addiction rate for these high-risk, re-exposed users is astounding.  Heroin is a potent drug.</p><h3 id="desperate-people-push-levers">Desperate People Push Levers</h3><p><strong>If you give desperate people a lever, they will push it.</strong>  It's like presenting a person in serious pain with a button to control their morphine.  When morphine addiction inevitably starts to skyrocket, taking the morphine away doesn't solve the underlying problem– and people suffering from pain will be in the market for the closest morphine-replacement they can find. </p><p>The correct, more difficult way to solve the 'lever-pushing' problem is to address the underlying desperation.  Removing all Facebook-like levers is not realistic, nor does it address the desire for a Facebook lever.  There will (and should) be democratic, social technology in the marketplace.</p><p>Regulation seeks to remove or shape levers without considering the underlying socio-psychological landscape of the individual.  It covers the symptom, but doesn't fix the problem.  It obscures the true causes.  <strong>Regulation is an analgesic</strong>.</p><h3 id="addiction-is-real">Addiction is Real</h3><p>Addiction is complex.  
There are many things that factor into our susceptibility to a given lever-reward– the genetic predisposition of the person, their underlying psychological needs, the broader environment, the barriers of access to a given lever, and whether the lever exists as an option at all.  </p><p>In the case of social media, it is very hard to regulate properly without threatening either a useful underlying technology, or an embodiment of that technology that is good and useful in other contexts.   That's not to say we don't have an interface problem with social media– right now, we're forced to carry it with us everywhere we go to engage with the world, and we should have a lot more agency over how we interface with our technology.  However, the effort we spend shaping levers and tweaking our barriers to access them ignores a more severe and important underlying problem.  </p><p>Those with intrinsically meaningful lives outside of technology are rarely seduced by it.  In my experience at the MIT Media Lab (and famously in the example of <a href="https://www.businessinsider.com/screen-time-limits-bill-gates-steve-jobs-red-flag-2017-10">Steve Jobs or Bill Gates</a>), those that understand the influences of technology successfully minimize its role in their lives.  I've been amazed at how many of my friends at one of the most technologically-focused labs in the world live minimalist, anti-tech lifestyles at home.</p><p>But the problems of social media– whether it's a superficial stand-in for real social connection or it's a political machine that radicalizes deeply-held, identity-driven ideologies– are <em>symptoms</em> much more than <em>causes</em>.  </p><p>Our sense of intimacy, community, and trust has largely dissolved in post-industrial, post-WWII society– a trend which predates the internet.  
Our desire for structured meaning, belief, and belonging– previously the domain of religion and war and local community– has been annexed by sound-byte politick.</p><p>These deep and real human needs will continue to be met somehow.  Unless we look towards individual psychology and address them at their core, we'll find ourselves regulating lever after lever in a game of whack-a-mole.  In the process, we risk damaging the real value that these services provide underneath their perversions.  </p><p>The pathology of the modern condition is to feel isolated, lonely and tuned-in; comfortable, satiated, and redundant; nihilistic, disillusioned, and deeply ideological.... with access to social media.  Only one of those is the real problem.  We need to stop focusing so much on the lever, and take a much closer look at the life of a rat in its cage.</p>]]></content:encoded></item><item><title><![CDATA[Contextualizing Data from the Facebook Emotion Contagion Study]]></title><description><![CDATA[<p>In 2014, Facebook published their famous <a href="https://www.pnas.org/content/111/24/8788">Emotion Contagion study</a>, to massive controversy.  This study is <em>still </em>frequently cited as an example of the manipulative potential of social media, which is an egregious characterization.  In this post, I'm going to walk through the statistics so we can see what we can</p>]]></description><link>https://blog.davidbramsay.com/facebook-contagion-data/</link><guid isPermaLink="false">60a688eec66b5c391c160c17</guid><dc:creator><![CDATA[David Ramsay]]></dc:creator><pubDate>Thu, 20 May 2021 17:18:07 GMT</pubDate><content:encoded><![CDATA[<p>In 2014, Facebook published their famous <a href="https://www.pnas.org/content/111/24/8788">Emotion Contagion study</a>, to massive controversy.  This study is <em>still </em>frequently cited as an example of the manipulative potential of social media, which is an egregious characterization.  
In this post, I'm going to walk through the statistics so we can see what we can <em>really </em>say about the effect of Facebook's intervention on an average person.</p><p>Statistically, this study is simple– it's based on a trustworthy and large sample size, and basic statistical tests.  The paper only has one figure.  Despite that, it's remarkably difficult to contextualize.   This essay is a deep dive into the numbers– a gut check of the data, and an examination of the common sense inferences we can actually make based on what they report. </p><h3 id="the-basics">The Basics</h3><p>The study includes 4 groups of ~175,000 people each– 2 intervention groups matched with 2 control groups.  Subjects in the intervention groups had either 'positive' or 'negative' posts removed from their newsfeeds, judged based on whether they contain a positive or negative word listed in the 'LIWC'.  </p><p>Not all of a subject's emotional posts were removed, though– for example, in the positive condition, subjects experienced between 10% and 90% of their positive newsfeed posts removed. This means the 'average' intervention removed half of someone's positive or negative posts.  </p><p>Over the course of a week, the study measured whether these changes in newsfeed content would alter the number of positive or negative words a subject subsequently <em>used themselves</em> when posting status updates.  Changes in word usage would demonstrate that what you see online affects your mood– emotional contagion.</p><p>The study includes some interesting basic statistics about Facebook usage. Based on their analysis of 3 million posts, an average post is ~41 words.  47% of posts contain a positive word and 22% contain a negative one, so the average positive post has 2.9 positive words and the average negative post has 2.8 negative words.  
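The counting scheme just described can be made concrete with a toy sketch– note the word lists below are hypothetical stand-ins (the real LIWC dictionary contains thousands of categorized words and handles stems):

```python
# Toy stand-ins for the LIWC positive/negative word lists (hypothetical).
POSITIVE = {"happy", "great", "love", "wonderful", "congrats"}
NEGATIVE = {"sad", "awful", "hate", "terrible", "angry"}

def emotion_rates(post):
    """Fraction of a post's words that are positive / negative."""
    words = post.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return pos / len(words), neg / len(words)

def is_positive_post(post):
    """A post counts as 'positive' if it contains at least one positive word--
    the criterion used to pick candidate posts for removal."""
    return any(w in POSITIVE for w in post.lower().split())

print(emotion_rates("so happy and grateful what a wonderful day"))  # (0.25, 0.0)
```

A post-level criterion like `is_positive_post` is coarse: a 41-word post with a single 'congrats' is treated the same as an effusively positive one.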
The differing rates of emotional content mean positive group interventions are more invasive– deleting half of your positive posts replaces 23% of your news feed, while deleting half of your negative posts replaces only 11%.  </p><h3 id="the-results">The Results</h3><p>The major results are in the figure below.  We see that an average person uses ~5.2% positive words and ~1.75% negative words (dark blue controls).  </p><p>It turns out that people use fewer words when the content they see is less emotional.  By how much? 0.3% fewer words used if negative posts are (on average) half removed, and 3.3% fewer words in the positive case.</p><p>These differing rates mean we can't compare group-level word averages–  since the strength of the intervention varies across subjects, we'd diminish the contribution of individuals who show the greatest effect (a more powerful intervention means fewer emotion words seen, and thus fewer words posted/counted).   </p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.davidbramsay.com/content/images/2021/05/F1.large.jpg" class="kg-image" alt srcset="https://blog.davidbramsay.com/content/images/size/w600/2021/05/F1.large.jpg 600w, https://blog.davidbramsay.com/content/images/size/w1000/2021/05/F1.large.jpg 1000w, https://blog.davidbramsay.com/content/images/size/w1280/2021/05/F1.large.jpg 1280w"><figcaption>Fig 1 from "Kramer, Adam DI, Jamie E. Guillory, and Jeffrey T. Hancock. "Experimental evidence of massive-scale emotional contagion through social networks." 
<em>Proceedings of the National Academy of Sciences</em> 111.24 (2014): 8788-8790."</figcaption></figure><p>To analyze this in a way that maximizes the likelihood of finding an effect, the authors instead train a small weighted linear model to predict the likelihood that a subject will use a positive or negative word in their post that is <em>weighted by the likelihood that the subject had a given emotional post removed.</em>  The values we see in the plot above consider intervention subjects that had 90%(!) of their posts removed as 9x more important than people with 10% removed. </p><p>Thus, these results are<em> </em>skewed to represent the effect of closer to the weighted average response of  ~60%* of the positive or negative posts removed, if we assumed the effect is linear.  </p><p>But of course we shouldn't expect that.  Having a few positive posts removed probably does very little to you, as you'd still see a large amount of positivity; on the other hand, we'd expect a pretty drastic effect if nearly all of your positive posts are removed.  It's unlikely the relationship is a simple linear mapping.  Because we weight a 90% user so much more, this analysis really over-emphasizes the effect for people with drastic interventions– it's likely more representative of the experience of someone with 65-75% of their posts removed.</p><p><strong>The reported effect sizes <em>heavily emphasize</em> users who received a drastic intervention.</strong>  It's almost impossible to know the relationship of this effect size with the intensity of the intervention, though a measurable difference in word use exists.  
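The ~60% weighted-average figure is easy to sanity-check. If we assume removal fractions were assigned uniformly across 10%-90% (an assumption– the paper doesn't publish the exact assignment), weighting each subject by their own removal fraction gives:

```python
# Removal levels assumed uniform over 10%..90% in steps of 10% (an assumption).
levels = [x / 100 for x in range(10, 91, 10)]

# The unweighted mean removal is 50%; weighting each subject by their own
# removal fraction (a 90% subject counts 9x a 10% subject) shifts it upward:
weighted_avg = sum(x * x for x in levels) / sum(levels)
print(round(weighted_avg, 3))  # 0.633 -- in line with the ~60% figure above
```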
Unfortunately, demonstrating that a difference exists isn't particularly meaningful– we really care to know <em>how much</em> of an effect occurs at <em>what intensity</em> of intervention and <em>for whom</em>.</p><h3 id="an-unusual-discrepancy">An Unusual Discrepancy</h3><p>Digging further into the data, we notice <em>overall reported rates </em>of<em> </em>3.6% positive words and 1.6% negative words, whereas <em>per person</em> <em>results</em> in the figure show 5.2% of the words are positive and 1.7% are negative.     </p><p>I initially thought there might be an error in the study (3.6% + 1.6% = 5.2%; it looks suspiciously like the positive word results also include negative words).  After some back and forth with the authors, they suggested an alternative, unmentioned explanation– an underlying relationship where individuals who use more words use significantly fewer positive words.  </p><p>This has some serious implications for how we interpret the study.</p><p>I was able to roughly corroborate these numbers with other literature.  <a href="https://www.researchgate.net/publication/280059930_Do_Facebook_Status_Updates_Reflect_Subjective_Well-Being/link/5af0081c458515f599846228/download">One other study</a> reported, for N=150,000, positive LIWC rates of 3.9% (SD=2.0%) and negative rates of 1.8% (SD=1.1%) in status updates<em> overall; </em>in another<em> </em><a href="https://www.jmir.org/2018/5/e168?utm_source=TrendMD&amp;utm_medium=cpc&amp;utm_campaign=JMIR_TrendMD_0#ref5">very small study</a> (N=29) positive LIWC emotion word percentages range from 2% to 57% (average of 10%) <em>per person</em>, and negative word percentages from 0% to 17% (average of 4%).  Since this analysis was over several weeks, we can expect slightly higher variability in the shorter Facebook study.  
</p><p>Based on some other literature from the same time period (<a href=" https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;arnumber=7403566">1</a>, <a href="https://www.jmir.org/2017/1/e7/PDF">2</a>), we know that the number of Facebook posts per user follows a power law distribution:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.davidbramsay.com/content/images/2021/06/image.png" class="kg-image" alt><figcaption><strong>Fig 2. Counts of users by their number of posts, based on ~7,500 users over 4 months. From</strong> <em>Devineni, P., Koutra, D., Faloutsos, M., &amp; Faloutsos, C. (2015). If walls could talk. Proceedings of the 2015 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining 2015 - ASONAM ’15. doi:10.1145/2808797.2808880.</em></figcaption></figure><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.davidbramsay.com/content/images/2021/06/image-1.png" class="kg-image" alt srcset="https://blog.davidbramsay.com/content/images/size/w600/2021/06/image-1.png 600w, https://blog.davidbramsay.com/content/images/size/w958/2021/06/image-1.png 958w"><figcaption><strong>Fig 4. Actual vs. Self-Reported Posting Frequency, based on ~700 users over 6 months. From </strong><em>Smith, Robert J., et al. "Variations in Facebook posting patterns across validated patient health conditions: a prospective cohort study." Journal of medical Internet research 19.1 (2017): e7.</em></figcaption></figure><p>It's unclear what (if any) relationship there is between number of posts and length of post– I couldn't find any data about this.  Regardless, it seems reasonable to assume that we'll still end up with a power law distribution in words per person as we do for posting frequency.</p><p>Given this distribution,<strong> for the overall positive word rate to be 3.6% with a per person word rate at 5.2%, the <em>vast majority </em>of people are posting very few, very positive posts.  
</strong>Most people post rarely, and it seems reasonable to imagine that most of their posts have to do with celebrations or special events.</p><p>In the below cohort study based on (predominantly young, black, and urban) hospital patients, we can see some clear trends that support the idea that positive words are used by people who post infrequently, who make up the vast majority of people (despite the unusual hospital-focused sample).  Patients with depression were significantly more likely to post more frequently (38 vs 22 times on average, N=~150, 550), a correlation which has been corroborated in <a href="https://dl.acm.org/doi/abs/10.1145/2675133.2675139?casa_token=4M6M4Fj3q0UAAAAA:7mTeqIPJZAgGEK3nUjGW100CtGOECDUdf01AtPTCMr88t0QEshbwgVHzN9yCmGAxqs7A02fPUyY-aA">studies outside of hospitals as well</a>.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.davidbramsay.com/content/images/2021/06/image-3.png" class="kg-image" alt srcset="https://blog.davidbramsay.com/content/images/size/w600/2021/06/image-3.png 600w, https://blog.davidbramsay.com/content/images/size/w1000/2021/06/image-3.png 1000w, https://blog.davidbramsay.com/content/images/size/w1332/2021/06/image-3.png 1332w"><figcaption><strong>Fig 3. LDA Topics based on posting frequency. From </strong><em>Smith, Robert J., et al. "Variations in Facebook posting patterns across validated patient health conditions: a prospective cohort study." Journal of medical Internet research 19.1 (2017): e7.</em></figcaption></figure><h2 id="contextualizing-the-data-what-does-it-mean">Contextualizing the Data: What Does it Mean?</h2><p>In the above sections we established a few key things:</p><p>(1) We have baseline positive and negative word use rates of 5.2% and 1.7% per person.  
In the case of positive words, <strong>we know that this 5.2% per person number is heavily influenced by highly positive, infrequent posters who are most representative of the 'typical' user.</strong>  These users' positive words are probably congratulatory and unrelated to emotional state– though there is a small increase in negative words when positive posts are reduced, so some effect cannot be attributed to this explanation.  The averages obscure trends in real underlying behavior of different kinds of users.</p><p>(2) The analysis shows <strong>very small changes in word use behavior of 0.15 to 0.2%</strong>, but what this means for a typical user over intervention severity is unclear.  The technique is <strong>heavily weighted to represent a drastic intervention</strong>– something roughly like removing 70% of emotion posts from a person's feed.</p><p>If we put aside that these small effects are likely a result of infrequent congratulatory or conciliatory posts in response to personal news, we can focus on the smaller secondary effect (i.e. fewer positive posts sparking more negative words and vice versa).  While these small changes can be explained as a very subtle sort of social mirroring (fitting in with group behavior), they also could perhaps capture some real change in underlying emotional state.  Do they?</p><h3 id="does-your-positive-word-count-reflect-your-positive-emotions">Does Your Positive Word Count Reflect Your Positive Emotions?  </h3><p>Based on <a href="https://www.frontiersin.org/articles/10.3389/fpsyg.2015.01045/full">this study from 2015</a> (N=200), increased LIWC <em>negative word use</em> reflects real anxiety, depression, and stress, while <em>positive word use</em> is uncorrelated with psychological indicators.</p><p>This correlation is strongest in young people; younger people are more self-disclosing on social media and use more emotion words, so their linguistic features might be more relevant.  
Older people are more prone to 'image management', which might also explain a decoupling between positive words and underlying emotional states.  <a href="https://journals.sagepub.com/doi/abs/10.1177/0261927X12456384?casa_token=h9DbIpvSy1IAAAAA%3Amrwgz_1w3tj3beXmu1RRzu_xL033nfrsQ7OFnGX9ayikLm6bWfY9xoCG9D6tvjbZeJu7aDK-vnGb&amp;journalCode=jlsa">Other studies</a> back up the notion that positive words correlate with image management and not emotional state.</p><p>"<a href="https://www.researchgate.net/publication/280059930_Do_Facebook_Status_Updates_Reflect_Subjective_Well-Being/link/5af0081c458515f599846228/download">Do Facebook Status Updates Reflect Subjective Well-Being?</a>"– a study from 2015– shows that for 1,100 participants, subjective wellbeing and the LIWC score are <em>only related</em> when users show evidence of negative language in the last 9 months.  If you look at the graph below, you can see that just looking at word usage over <strong>the most recent month or two is <em>not at all predictive </em>of subjective well-being</strong>; data over the last 8-10 months becomes a meaningful indicator, and adding data beyond 10 months reduces the correlation.   While it is a real correlation, it is still a very, very weak one (&lt;0.2).  Once again, positive word use is simply not meaningfully predictive at all.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.davidbramsay.com/content/images/2021/06/image-4.png" class="kg-image" alt srcset="https://blog.davidbramsay.com/content/images/size/w600/2021/06/image-4.png 600w, https://blog.davidbramsay.com/content/images/size/w1000/2021/06/image-4.png 1000w, https://blog.davidbramsay.com/content/images/size/w1354/2021/06/image-4.png 1354w"><figcaption><strong>Fig 1. The relationship of LIWC emotion words to Subjective Wellbeing, depending on how many months of word use data are included. From </strong><em>Liu, Pan, et al. "Do Facebook status updates reflect subjective well-being?." 
<i>Cyberpsychology, Behavior, and Social Networking</i> 18.7 (2015): 373-379.</em></figcaption></figure><p>Another study from 2018, "<a href="http://selfcontrol.psych.lsa.umich.edu/wp-content/uploads/2019/09/Does-Counting-Emotion-Words-on-Online-Social-Networks-Provide-aWindow-Into-People%E2%80%99s-Subjective-Experience-of-Emotion-A-Case-Studyon-Facebook.pdf">Does Counting Emotion Words on Online Social Networks Provide a Window Into People’s Subjective Experience of Emotion? A Case Study on Facebook</a>", showed <strong>no relationship between LIWC word percentage and self-reported affect</strong>; however, human judges <em>reading </em>the posts were able to predict affect poorly but meaningfully (correlations around 0.15). They summarize their results (for 185 college students) by quoting Pennebaker et al. (2003):</p><blockquote>Virtually every psychologically based text analysis approach has started from the assumption that we can detect peoples’ emotional states by studying the emotion words they use... [but] in reviewing the various word use studies, it is striking how weakly emotion words predict people’s emotional state... taken together, it is our sense that emotion researchers should hesitate before embarking on studies that rely exclusively on the natural production of emotion words. (p. 571)</blockquote><p>Finally, "Emotional States vs. 
Emotional Words in Social Media" (2015) found similar weak (~0.15) correlations between affective ratings and LIWC scores across 515 Facebook users:</p><blockquote>...although we found a reliable correlation between negative affect on the PANAS and negative sentiment as measured by LIWC for Facebook status updates, at best the LIWC scores account for 4.2 percent of the variance in one’s reported negative affect.</blockquote><p>Their analysis <em>corroborates that 6 months of data is required</em>, and suggests that this kind of analysis really only makes sense for a small subset of self-reported 'highly emotionally expressive' people.</p><p>The conclusion is clear– <strong>in no way are changes in a week's worth of LIWC emotion word data indicative of any underlying emotional changes in Facebook users.  </strong>Even over long time scales, positive words seem meaningless as a predictive tool of underlying emotional state.</p><h2 id="finally">Finally</h2><p>The Facebook Emotion Contagion study shows incredibly small changes in emotion word rates from an invasive intervention.  These changes seem to be explained largely by common usage patterns (a typical user will post an infrequent, congratulatory post on a positive personal announcement); for the unaccounted difference that remains, there is substantial evidence refuting the idea that LIWC word choice captures anything meaningful about a user's emotional state.  For even the most severe manipulations, we cannot trace a causal pathway through user emotional state.</p><p>For such a simple study, it is remarkably hard work to contextualize its results for typical Facebook users.  The paper uses statistical techniques that increase the probability of finding an effect at the expense of obscuring <em>how large</em> of an effect occurs at <em>what intensity</em> of intervention and <em>for whom.  
</em>The point estimates we're given are reweighted by intervention type and averaged over an uneven distribution of users.</p><p>That makes our job– to contextualize the meaningful implications– significantly harder.  Effect size matters.  Missing this nuance not only can create a media firestorm over a totally benign intervention, it can also undermine the scientific value of the research.</p><p>*  <em>∫x*p(x) from 0.1 to 0.9, where p(x) = x / (∫x from 0.1 to 0.9) to normalize it so it's a probability distribution with area 1. </em></p>]]></content:encoded></item><item><title><![CDATA[How to Tell if your Eye-Tracking IR Diode is Safe]]></title><description><![CDATA[<p>Infrared (IR) Diodes are very popular now for illuminating the eye for eye-tracking purposes.  Of course, projecting stuff into your eye comes with hazards, and we want to make sure these things are safe.  I've been working on an IR blink sensor, and while many papers in academia seem to</p>]]></description><link>https://blog.davidbramsay.com/how-to-tell-if-your-eye-tracking-ir-diode-is-safe/</link><guid isPermaLink="false">606d2b60c66b5c391c1601e5</guid><dc:creator><![CDATA[David Ramsay]]></dc:creator><pubDate>Thu, 08 Apr 2021 21:19:05 GMT</pubDate><content:encoded><![CDATA[<p>Infrared (IR) Diodes are very popular now for illuminating the eye for eye-tracking purposes.  Of course, projecting stuff into your eye comes with hazards, and we want to make sure these things are safe.  I've been working on an IR blink sensor, and while many papers in academia seem to assume safety with a low power IR diode, I wanted to be a little more rigorous– especially since we're attempting to have them worn all day, every day by people.</p><p>The laissez-faire attitude around Near IR is not totally unwarranted–  IR diodes do at least partially deserve their innocuous reputation.  
IR is what we think of as heat, and we get a lot of it from the sun (the majority of the energy from the sun at the earth's surface is IR).  From what remains, ~40% of the sun's energy is visible light, and only a couple percent is UV (the actually dangerous stuff that gives you cancer and contributes to macular degeneration).  UV is typically the one we <em>really </em>need to worry about.</p><p>While we get quite a bit of IR radiation in the ambient world, we do need to worry about how it interacts with our eyes.  Our eyes have a few parts– the outer part is the <em><strong>cornea</strong></em>, which envelops the entire eye; under the cornea, there is a little pocket of fluid called the <em><strong>aqueous humor </strong></em>which covers the <strong><em>pupil</em></strong> and the pigmented <em><strong>iris</strong></em> that surrounds it. Within the pupil is a <em><strong>lens</strong></em> that will focus light onto the <em><strong>retina</strong></em>.  Each of these parts can be damaged, and each has a different mode of damage.  </p><h3 id="causes-of-concern">Causes of Concern</h3><p>Increased IR exposure <em>does </em>put you at increased risk of cataracts.  Cataracts are pretty common (<a href="https://www.nei.nih.gov/learn-about-eye-health/resources-for-health-educators/eye-health-data-and-statistics/cataract-data-and-statistics">68% of people over 80 have them in the US</a>), and they occur when the lens of the eye gets cloudy.  IR is believed to contribute to their formation if the lens of the eye absorbs the IR light and heats up.  In '<a href="https://pubmed.ncbi.nlm.nih.gov/6524322/">Infrared radiation and cataract II. Epidemiologic investigation of glass workers</a>', we find that folks working in the glass industry for 20 years have &gt;10 times the risk of cataracts.   Similar studies have been done on steel and iron workers.</p><p>It is also possible to damage the retina of the eye with IR radiation, though the data here seems less clear.   
Other wavelengths are more tightly coupled to age-related macular degeneration– near IR exposure <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5364001/">may actually improve retinal function</a> as you age.  Based on a recent <i>Nature</i> paper entitled '<a href="https://www.nature.com/articles/eye2015266">Does infrared or ultraviolet light damage the lens?</a>', it seems that photochemical damage is unlikely, and we really only need to worry about thermal effects that could lead to retinal damage.   This means the typical relationship for photochemical injury– where damage is integrated over intensity/duration– doesn't apply.  Instead, thermal damage only happens when tissue heats beyond a certain point. </p><p>This can happen with a large burst of coherent light (much like a visible laser).  This is really only a concern for the nearest of the near IR, because these wavelengths are close enough to visible wavelengths to actually make it through the lens of the eye to the retina (at longer wavelengths, they simply get absorbed by the cornea and lens and don't pass through).  Of course, the lens of the eye focuses light on the retina, and this can have a hugely multiplicative effect on concentrated hot spots (focusing the power up to 100,000 times more than what would be seen at the cornea).  This focusing works exactly like visible light, and the distance of the source is important to how focused it will be.  From '<a href="https://www.sciencedirect.com/science/article/pii/S0924424708004718">Class I infrared eye blinking detector</a>' we find the following:</p><blockquote>In case of a point-type and diverging beam source, as a LED, the hazard increases with decreasing distance between the beam source and the eye. This is true until distance is greater than the shortest focal length. 
Thus for distance less than the shortest focal length there is a rapid growth of the retinal image and a corresponding reduction of the irradiance, even though more power may be collected.</blockquote><p>So it turns out that determining irradiance at the retina is more complicated than distance alone, because of the focal effects and optics of the lens.  </p><p>The paper '<a href="https://pubmed.ncbi.nlm.nih.gov/21380486/">Eye safety related to near infrared radiation exposure to biometric devices</a>' suggests that IR-induced warmth on the outer eye will cause pain and blinks to conduct heat away, and that the main thing we should worry about for near IR is retinal exposure because it is conducted like visible light to the retina, but doesn't cause a blink reflex like bright visible light would.  This is frequency dependent, as we can see from the following graph:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.davidbramsay.com/content/images/2021/04/image.png" class="kg-image" alt srcset="https://blog.davidbramsay.com/content/images/size/w600/2021/04/image.png 600w, https://blog.davidbramsay.com/content/images/size/w850/2021/04/image.png 850w"><figcaption>from 'Söderberg, P., Talebizadeh, N., Yu, Z. et al. Does infrared or ultraviolet light damage the lens?. Eye 30, 241–246 (2016). https://doi.org/10.1038/eye.2015.266'. We see the transmittance (ratio of radiant flux or power that makes it through) to the outer cornea, the outer lens, and the inner lens, relative to wavelength. 
We see that once we get to Near-IR, less makes it through the outer structure of the eye, and most of our health concern becomes about the lens and cornea.&nbsp;</figcaption></figure><p>The region we care about actually has quite varied behavior; on the low side (760nm) around 80% of the power makes it to the retina, and almost none is absorbed by the cornea or lens; as we move higher, we see less than 50% makes it to the retina, with almost 50% split between the cornea and the lens.  We'll see that when we calculate 'retinal hazard', we weight the energy by a value that captures this transmittance– so the higher we go, the less we have to worry about the retina and the more we have to worry about the cornea and lens.  Where we should focus our efforts depends on the wavelength we choose.</p><p>Another analysis of cow eye parts shows how they absorb over wavelengths (cows have pretty similar eyes to us):</p><figure class="kg-card kg-image-card kg-width-full"><img src="https://blog.davidbramsay.com/content/images/2021/04/image-5.png" class="kg-image" alt srcset="https://blog.davidbramsay.com/content/images/size/w600/2021/04/image-5.png 600w, https://blog.davidbramsay.com/content/images/size/w609/2021/04/image-5.png 609w"></figure><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.davidbramsay.com/content/images/2021/04/image-6.png" class="kg-image" alt srcset="https://blog.davidbramsay.com/content/images/size/w600/2021/04/image-6.png 600w, https://blog.davidbramsay.com/content/images/size/w609/2021/04/image-6.png 609w"><figcaption><strong>From</strong> Yust, B.G., Mimun, L.C. &amp; Sardar, D.K. Optical absorption and scattering of bovine cornea, lens, and retina in the near-infrared region. <i>Lasers Med Sci </i><b>27, </b>413–422 (2012). https://doi.org/10.1007/s10103-011-0927-9. <strong>These are absorption values of the cornea and lens of the cow eye, approximated using three separate mathematical techniques (LM/IAD/MC). 
The models don't give totally consistent results, but the trends are there– peaks in the absorption of the cornea around 960, and increasing absorption of the lens from 900 up, with almost no absorption in the region right by the visible spectrum (as we'd expect). </strong></figcaption></figure><p>For the cornea, we care about temperature equilibrium under continuous exposure.  It turns out this is a complicated and contextual thing to understand, but we have a few interesting tidbits– the iris is very absorptive, the pupil is very dynamic (contracting from ~7mm in dark to ~1.6 mm in light), and blinks happen once every ~3 seconds for ~0.3 seconds (10% of the time our eye is closed!).  That's not factoring in the other main mechanisms governing temperature– blood flow in the eye, conduction through tissue, body temperature, and ambient temperature.  Ocular surface temperature can vary quite a bit depending on the weather!</p><h3 id="measurements">Measurements</h3><p>We care about power [W], which gets the name 'Radiant Flux' when we look at how much is delivered through a specific area, and 'Irradiance' [mW/cm^2] when we normalize by the area.  Here, when we talk about the irradiance (power/area at the eye), we've accounted for the distance between the source and the eye already– the power per area obviously drops as we move further away. If we'd rather measure power through an area in a way that is distance agnostic, we can also talk about the 'Radiance' [mW/cm<sup>2</sup>/sr] where a steradian [sr] is a 3D radian which looks like a spotlight cone– this tells us how much power is delivered per area out in one small angular cone from a point source.  The easiest way to think of this is that the area mm<sup>2</sup> per angle sr <em>grows</em> as we move away from the source, so we have our power averaged over the squared mm per cone.  
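</p><p>As a quick sanity check of the inverse-square behavior described above, here is a toy sketch (function and variable names are my own, not from any standard library; the beam is idealized as a uniform cone):</p>

```python
import math

# Toy sketch of the units above: a source radiating power_mw milliwatts
# uniformly into a cone of half-angle half_angle_rad, observed at
# distance_cm from the source.

def irradiance_mw_per_cm2(power_mw, distance_cm, half_angle_rad):
    """Power per unit area at the eye: the beam's cross-section grows
    with distance, so irradiance falls off as 1/distance^2."""
    spot_radius_cm = distance_cm * math.tan(half_angle_rad)
    return power_mw / (math.pi * spot_radius_cm ** 2)

def radiance_mw_per_cm2_sr(power_mw, solid_angle_sr, source_area_cm2):
    """Power per solid angle per unit source area -- distance agnostic."""
    return power_mw / solid_angle_sr / source_area_cm2

# Doubling the distance quarters the irradiance:
near = irradiance_mw_per_cm2(2.5, 1.0, math.radians(10))
far = irradiance_mw_per_cm2(2.5, 2.0, math.radians(10))
print(round(near / far, 1))  # -> 4.0
```

<p>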
Of course, we typically are interested in a frequency specific measurement for all of these quantities, so frequently they will be quoted normalized over a 1 nm band of wavelength (i.e. 'irradiance' per nm, which is still called 'irradiance').  </p><p> IR-radiation is anything above visible light in wavelength on the spectrum between 760 nm (or 0.76 um) and 1 mm (or 1000 um)  (remember, a higher/longer wavelength, in <em>nm</em>, means a lower frequency in <em>Hz</em>).  It's typically divided into IR-A, -B, and -C or Near, Mid, and Far, but be careful because these distinctions don't map perfectly to each other.  For the types of diodes and applications we care about, we'll be strictly in the shortest wavelength Near-IR/A range (760 nm-1400 nm).</p><p>In this range, the sun's irradiance at the earth's surface falls linearly from about 0.1 to 0.05 mW/cm<sup>2</sup>/nm, giving about 32 mW/cm<sup>2</sup> over that entire 760-1400 nm bandwidth; however, estimates of IR background exposure outside (when you're not looking directly at the sun) are roughly 1 mW/cm<sup>2</sup>.   </p><p>The final thing we need to understand before we move forward is 'angular subtense' (alpha), which is simply a way to describe how large something will be on the cornea (i.e., how large of a 3D cone described by angle alpha, in radians).  It describes the <em>minimum image size on the retina</em> that can be produced by focusing on the object– what we care about is how much the eye can help concentrate the power of incoming waves on a small area.   </p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.davidbramsay.com/content/images/2021/04/image-3.png" class="kg-image" alt><figcaption>Illustration of angular subtense (alpha), from Paudel, Rupak, et al. "Modelling of free space optical link for ground-to-train communications using a Gaussian source." 
<i>IET Optoelectronics</i> 7.1 (2013): 1-8.</figcaption></figure><p>See '<a href="http://irpa11.irpa.net/pdfs/8c6.pdf">Location and size of the apparent source for laser and optical radiation ocular hazard evaluation</a>' for a good review.  It's worth noting that the lens in our eye changes shape to focus at different distances (accommodation) in addition to the changes in angle between our eyes when we gaze at something close vs. far away (vergence), and rules are different for collimated or coherent light.  </p><h3 id="rules-of-thumb-for-the-retina">Rules of Thumb for the Retina</h3><p>In '<a href="https://www.ece.ucf.edu/seniordesign/fa2019sp2020/g29/8%20Page%20Conference%20Paper%20SD2.pdf">Eye Tracking Headset Using Infrared Emitters and Detectors</a>' they calculated a retinal hazard threshold of 92 W/cm<sup>2</sup>/sr for 10 minute exposures.</p><p>The ICNIRP guidelines '<a href="https://www.icnirp.org/cms/upload/publications/ICNIRPVisible_Infrared2013.pdf">On Limits of Exposure to Incoherent Visible and Infrared Radiation</a>' suggest for long duration exposure, we should be less than 190 W/cm<sup>2</sup>/sr for small sources and less than 28 W/cm<sup>2</sup>/sr for large sources (smaller sources will diffuse as the eye moves more easily).  The below plot shows between 6<sup> </sup>and 60 W/cm<sup>2</sup>/sr depending on apparent size on the retina.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.davidbramsay.com/content/images/2021/04/image-10.png" class="kg-image" alt srcset="https://blog.davidbramsay.com/content/images/size/w600/2021/04/image-10.png 600w, https://blog.davidbramsay.com/content/images/size/w1000/2021/04/image-10.png 1000w, https://blog.davidbramsay.com/content/images/size/w1134/2021/04/image-10.png 1134w"><figcaption>We see Retinal Exposure Limit Values (ELV) between 6 and 60 W/cm<sup>2</sup>/sr for long term exposure, from Kourkoumelis, Nikolaos, and Margaret Tzaphlidou. 
"Eye safety related to near infrared radiation exposure to biometric devices." TheScientificWorldJOURNAL 11 (2011): 520-528.</figcaption></figure><h3 id="regulation-for-the-retina">Regulation for the Retina</h3><p>This formula is for retinal damage, based on the 'burn hazard effective radiance' for stimuli that don't come with a visual stimulus that would cause natural avoidance:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.davidbramsay.com/content/images/2021/04/image-1.png" class="kg-image" alt><figcaption>Used in '<a href="https://pubmed.ncbi.nlm.nih.gov/21380486/">Eye safety related to near infrared radiation exposure to biometric devices</a>', '<a href="https://www.ece.ucf.edu/seniordesign/fa2019sp2020/g29/8%20Page%20Conference%20Paper%20SD2.pdf">Eye Tracking Headset Using Infrared Emitters and Detectors</a>' (See citation in figure above), and from the documentation from the <a href="https://ehs.lbl.gov/resource/documents/radiation-protection/non-ionizing-radiation/light-and-infrared-radiation/">US EHS</a> on occupational safety. The actual specification actually uses 0.63.</figcaption></figure><p>Where L(λ) is the average spectral radiance in a band (W/cm<sup>2</sup>/sr/nm) times the bandwidth range it's over, weighted by a unitless R function based on how much that frequency is transmitted to the retina.  We sum this over all the bands in the range of interest.</p><p> There are several worst case assumptions that go with this– a fully dilated pupil (7mm diameter), and irradiance measured at 20cm (ANSI Z136.1) or 10cm (IEC 60825-1) because this is the lower limit of eye accomodation, which means it is the closest you can get to the eye where the eye will focus the power on the smallest section of the retina (worst case, closer makes it blurred).  Notice the limit is a 'radiance' limit– that is, it assumes the worst case distance, and is simply the amount of radiated power in a given angular slice from the point source.  
Alpha is the 'angular subtense' of the source and is taken as <strong>0.011 rad </strong>if the apparent source size is smaller than that.</p><p>For us, our LED is roughly 1.5x2.5mm, which we'll call 2mm.  Let's do our basic math:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.davidbramsay.com/content/images/2021/04/image-8.png" class="kg-image" alt><figcaption>Math! We get alpha = 0.020 radians for 100mm away, and 0.010 radians for the 'standard' 20 cm. The second of those is small enough that we'd have to use the standard's default smallest value of 0.011 radians instead.</figcaption></figure><p>It's actually assumed the angular subtense increases over exposure time, as your eye moves and blurs the source around on the retina (which is good if we're trying not to damage it, hence the lower limit).  Given the above alpha, the target is 0.6/0.02 = 30 W/cm<sup>2</sup>/sr.  </p><p>The R values come from here (note there is an equation in the caption):</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.davidbramsay.com/content/images/2021/04/image-2.png" class="kg-image" alt srcset="https://blog.davidbramsay.com/content/images/size/w600/2021/04/image-2.png 600w, https://blog.davidbramsay.com/content/images/size/w808/2021/04/image-2.png 808w"><figcaption>From '<a href="https://pubmed.ncbi.nlm.nih.gov/21380486/">Eye safety related to near infrared radiation exposure to biometric devices</a>', which is really a great resource for this kind of thing (see citation in ELV figure above). EHS also provides the equation R(λ) = 10<sup>[(700- λ)/500]</sup>.</figcaption></figure><p>As we saw earlier, the amount of energy that gets absorbed by the cornea and the lens at this frequency is actually quite a lot, so we have an R of ~0.33.</p><p>Now we need some idea of the Source Radiance over our bandwidth.  
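</p><p>The alpha numbers and the 0.6/alpha target above can be checked with a few lines (a sketch under the small-angle approximation; the 0.011 rad floor is the standard's default for small sources):</p>

```python
ALPHA_MIN_RAD = 0.011  # standard's floor for small apparent sources

def angular_subtense(source_size_mm, distance_mm):
    # Small-angle approximation: alpha ~ size / distance, clamped below.
    return max(source_size_mm / distance_mm, ALPHA_MIN_RAD)

print(angular_subtense(2, 100))  # -> 0.02   (2 mm die at 100 mm)
print(angular_subtense(2, 200))  # -> 0.011  (0.010 at 200 mm, clamped)

# Long-exposure retinal target from the 0.6/alpha rule used in the text:
print(round(0.6 / angular_subtense(2, 100)))  # -> 30  [W/cm^2/sr]
```

<p>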
To estimate it, we need to translate the following (from the datasheet) into power:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.davidbramsay.com/content/images/2021/04/image-11.png" class="kg-image" alt><figcaption>Relative radiant intensity over angle for our LED, from the QRE1113 datasheet.</figcaption></figure><p>Unfortunately this is the only piece of information we have, other than that we can set the power through the diode with a current limiting resistor.  We've set ours to about 25mW of electrical power through the diode.  Current IR LED technology has a peak 'Wall-Plug Efficiency' of 40-50% (electrical to optical power), so we'll assume that our LED is doing really well and has an efficiency of 50%– and thus an optical power of around 12.5mW.</p><p>If we take the worst case (most intense) angle above, +-10 degrees has an average intensity of 0.98, and appears to capture about 18.5% of the total radiated power (calculated by looking at average intensity across each 10 degree band).  So we'll guess that, right in the middle of the beam, we get around 2.5mW over a 20 degree angle (0.35 radians).  This could be calculated more accurately using Lambertian assumptions (LEDs typically have a well defined emission pattern), but this back of the envelope should be fine.    </p><p>Step 1 is to turn that 2D cross section into a 3D cone solid angle– not a trivial task.  <a href="https://math.stackexchange.com/questions/447586/number-of-radians-in-one-steradian-cross-section">Stack Overflow</a> and <a href="https://en.wikipedia.org/wiki/Steradian">Wikipedia</a> tell us that for angle 2<em>θ </em>(θ=0.175 radians for us):</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.davidbramsay.com/content/images/2021/04/image-12.png" class="kg-image" alt><figcaption>which gives that a 1 sr 'solid angle' corresponds to θ = 0.572 rad or 32.77 degrees.&nbsp;</figcaption></figure><p>So we get 0.096 sr.  
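</p><p>The solid angle formula can be verified directly (a sketch; the function name is mine):</p>

```python
import math

# Solid angle of a cone with half-angle theta: Omega = 2*pi*(1 - cos(theta)).
def cone_solid_angle_sr(half_angle_rad):
    return 2 * math.pi * (1 - math.cos(half_angle_rad))

print(round(cone_solid_angle_sr(0.175), 3))  # -> 0.096 (our 20 degree beam)

# Half-angle that corresponds to exactly 1 sr:
print(round(math.acos(1 - 1 / (2 * math.pi)), 3))  # -> 0.572 rad
```

<p>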
Our 2.5mW thus gives 26.05 mW/sr.  We then divide this by the size of our radiating source (4 mm<sup>2</sup> = 0.04 cm<sup>2</sup>) because it's not a perfect point source, so we need to account for its area (if this is confusing, take a look at the light bulb drawing on <a href="https://www.energetiq.com/technote-understanding-radiance-brightness-irradiance-radiant-flux">this website</a>).  We get <strong>650 mW/cm<sup>2</sup>/sr</strong>.</p><p>This value is assuming <em>all of the power across all frequencies</em> that the diode is converting to light (its entire bandwidth)<em>.  </em>If we want to convert this into a bandwidth agnostic number, we have to divide out our expected bandwidth (probably on the order of 20-50nm), but that would be silly since we really just care about total delivered power in the equation above.  If we apply our R value, we get a weighted 650*0.33 = 215 mW/cm<sup>2</sup>/sr over our entire bandwidth.  </p><p>That 0.215 W number is quite a ways away from the 30 W/cm<sup>2</sup>/sr limit we calculated; keep in mind that we used worst case metrics for this (a fully dilated pupil, from the worst case focal distance, right in the hot spot of the LED).  In reality we should expect something smaller; it seems we're good to go for unlimited time as far as the retina is concerned.</p><h3 id="rules-of-thumb-for-the-cornea-lens">Rules of Thumb for the Cornea/Lens</h3><p>Roughly, background IR from sun exposure at the eye is ~1 mW/cm<sup>2</sup>, though we apparently get between 20-40 mW/cm<sup>2</sup> on our skin.</p><p>Occupationally, we see rates of 15-180 mW/cm<sup>2</sup> daily for glass blowers at the eye, and 200-600 mW/cm<sup>2</sup> for metal workers.  
Over a decade, exposure at these levels leads to an extreme increase in cataract risk.</p><p>In '<a href="https://www.ece.ucf.edu/seniordesign/fa2019sp2020/g29/8%20Page%20Conference%20Paper%20SD2.pdf">Eye Tracking Headset Using Infrared Emitters and Detectors</a>' they set their safety limit at 14.8 mW/cm<sup>2</sup> for 10 minute exposures.  (They apparently use 5 LEDs, as well as the QRE1113.) </p><h3 id="regulations-for-the-cornea-lens">Regulations for the Cornea/Lens</h3><p>The cornea and lens are probably more of a concern, because that's where the cataracts occur, these structures are moderately absorptive at these frequencies, and the sun background exposure typically doesn't get very high.</p><p>The International Commission on Non-Ionizing Radiation Protection (ICNIRP)/ IEC-62471 suggests ocular exposure should not exceed 10 mW/cm<sup>2</sup> for chronic exposure, though around 100 mW/cm<sup>2</sup> is the suggested tolerance for IR lasers– these recommendations are to protect the cornea.  The formula is: </p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.davidbramsay.com/content/images/2021/04/image-4.png" class="kg-image" alt><figcaption>From US EHS calculation guide; this value should be less than or equal to 0.1 W/cm<sup>2</sup></figcaption></figure><p>We can get at the basic rule of thumb here– our diode puts out 12.5 mW from a 2x2mm source– that's ~312.5 mW/cm<sup>2</sup> directly at the source (if we stuck it right on our eyeball).  It's probably ill-advised to do this, even if it would only be heating a very small part of the eye.</p><p>For our 2x2 source, let's go back to our most intense direct emission, which was within the 20 degrees directly in front of the diode.  We saw this captures about 20% of the radiated power, or 2.5mW. 
</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.davidbramsay.com/content/images/2021/04/image-14.png" class="kg-image" alt srcset="https://blog.davidbramsay.com/content/images/size/w600/2021/04/image-14.png 600w, https://blog.davidbramsay.com/content/images/size/w712/2021/04/image-14.png 712w"><figcaption>A quick diagram of the power spreading from the 20 degree beam-width in front of the diode. To calculate our minimum worst case distance, we simply solve <em>optical power [mW]</em> / (<em>source dim[cm]</em> + 2* <em>distance[cm] </em>*tan(<em>angle[rad]</em>))<sup>2</sup> = 10 mW/cm<sup>2</sup>.</figcaption></figure><p>So we can solve the minimum safety distance for our example with the most concentrated part of the beam (2.5mW source power radiating at a 10 degree angle from a 2x2mm source) by solving (sqrt(2.5/10)-.2)/(2*tan(10 deg)).  Be careful with units– everything should be in <em>cm </em>since our target value uses them.</p><p>To hit the 10mW/cm<sup>2</sup> safety target, we need to be ~8.5 mm away.  <a href="https://www.sciencedirect.com/science/article/pii/S0924424708004718#bib23">Another way to do this calculation</a> is based on the beam divergence that includes 63% of the power, but assumes it includes 95% of the power.  That gives a 70 degree beam at 12.5mW and a safe distance of 6.5 mm.</p><p>For a sense of scale, your eyelid itself is around 1 mm thick, and eyelashes are longer than 10 mm on average.  Based on our measurements of eye distance from the sensor, we should be far enough away not to worry (and these calculations assume really the worst case– very close, very direct and concentrated power, your eye isn't moving around, plus a significant overestimate of concentrated power) and of course the recommendations have margin.  
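</p><p>That minimum-distance solve can be written out directly (a sketch with my own function name; all lengths in cm, and the beam treated as a square spot growing with distance, per the figure):</p>

```python
import math

# Solve power / (dim + 2*d*tan(angle))^2 = limit for the distance d.
def min_safe_distance_cm(power_mw, source_dim_cm, half_angle_deg,
                         limit_mw_per_cm2=10):
    spread = 2 * math.tan(math.radians(half_angle_deg))
    return (math.sqrt(power_mw / limit_mw_per_cm2) - source_dim_cm) / spread

# 2.5 mW in the central 20 degree (half-angle 10 degree) cone, 2x2 mm die:
print(round(min_safe_distance_cm(2.5, 0.2, 10), 2))  # -> 0.85 cm (~8.5 mm)
```

<p>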
Moreover, power falls off with distance squared, so any extra distance adds a quadratic safety margin.</p><p>Furthermore, the real place we're concerned about is the lens, which is further from the diode and unlikely to spend time in continuous direct exposure.  These recommendations are made with the assumption that the irradiance is constant in the space, bathing the whole eye– obviously the eye will be able to conduct heat away in our case more easily because only the nearest part of the eye will experience the peak irradiance, and that part is still under the safety margin for the geometry we've built.</p><h3 id="other-people-s-work">Other People's Work</h3><p>'<a href="https://www.ece.ucf.edu/seniordesign/fa2019sp2020/g29/8%20Page%20Conference%20Paper%20SD2.pdf">Eye Tracking Headset Using Infrared Emitters and Detectors</a>' set their safety limit for 10 minute exposures; they apparently use 5 LEDs together, which seem to have all been QRE1113s.</p><p>'DualBlink' uses the QRE1113 set at 20mW.</p><p>In '<a href="https://www.sciencedirect.com/science/article/pii/S0924424708004718">Class I infrared eye blinking detector</a>' they do a thorough analysis on their system, which includes an Agilent HSDL9100-21 IR-transmitter and receiver (940nm, 50nm bandwidth, 1.8mm source, apparent source size of 0.8mm, 26 degree beam divergence, which is the area for 63% of power).  
They directly measured 7.6 uW of radiant flux in a standards-compliant way, and using the strictest standards designed it to be operational for 8 hours a day with a 1 kHz sampling rate; they use IEC 60825-1, which treats lasers and LEDs the same and thus dramatically over-estimates the risk.</p><p>According to <a href="https://pubmed.ncbi.nlm.nih.gov/21380486/">Eye safety related to near infrared radiation exposure to biometric devices</a>:</p><blockquote>Eye safety is not compromised with a single LED source using today's LED technology, having a maximum radiance of approximately 12 Wcm−2 sr−1 [12]. Multiple LED illuminators, however, may potentially induce eye damage if not carefully designed and used.</blockquote><h3 id="best-references">Best References</h3><p><a href="https://www.icnirp.org/cms/upload/publications/ICNIRPVisible_Infrared2013.pdf">ICNIRP GUIDELINES ON LIMITS OF EXPOSURE TO INCOHERENT VISIBLE AND INFRARED RADIATION</a>.</p><p><a href="https://drive.google.com/file/d/18-2zut0t53IpSuPI7xyNNZ-I1J7JMssi/view">EHS Document on Light and Near-Infrared Threshold Limit Values</a>.</p><p><a href="https://pubmed.ncbi.nlm.nih.gov/21380486/">Eye safety related to near infrared radiation exposure to biometric devices</a>.</p><p><a href="https://www.sciencedirect.com/science/article/pii/S0924424708004718#bib23">Class I infrared eye blinking detector</a>.</p>]]></content:encoded></item><item><title><![CDATA[A Basic Example of Firebase React Native]]></title><description><![CDATA[<p>Below I will walk through the steps to make a simple app with Firebase and React Native; my goal is to have an anonymous but persistent ID associated with each phone, which securely logs into the server and can push/query only its own data.  
This example also takes advantage of</p>]]></description><link>https://blog.davidbramsay.com/reactnativefirebase/</link><guid isPermaLink="false">603075e7c66b5c391c15faa7</guid><dc:creator><![CDATA[David Ramsay]]></dc:creator><pubDate>Sun, 21 Feb 2021 17:51:10 GMT</pubDate><content:encoded><![CDATA[<p>Below I will walk through the steps to make a simple app with Firebase and React Native; my goal is to have an anonymous but persistent ID associated with each phone, which securely logs into the server and can push/query only its own data.  This example also takes advantage of the caching associated with Firebase's library for react-native; even when the phone is offline, the Firebase code seamlessly logs the user in *as if* they were online, and presents a cached/persistent datastore that will automatically push new data to the cloud on reconnect.  It also serves cached data when offline, after the actual cloud connection attempts time out.  It makes for a completely seamless experience of persistent/cached data, security, and user identity, with almost no effort.</p><hr><h2 id="incorporating-persistent-firebase-authentication">Incorporating Persistent Firebase Authentication</h2><p>First we'll set up our project:</p><pre><code>npx react-native init ReactNativeTestFirebase
cd ReactNativeTestFirebase
npm install --save @react-native-firebase/app</code></pre><p>Open the .xcworkspace project.  Click on the main project; change the bundle identifier and name to something nice.  Copy the bundle name.  Change Signing and Capabilities to our personal team.</p><p>Create an app on Firebase named 'WatchV1App'.  Now add an iOS app, copy in the bundle identifier, and register the app.  Download the associated file and drop it into the Xcode project folder where the Info.plist file is, selecting all targets.</p><p>In Firebase, go to Authentication, click Get Started, and enable 'anonymous'.</p><p>Go to ios/WatchV1App/AppDelegate.m, and at the top add</p>
    [FIRApp configure];
  }</code></pre><p>on the next line (first line of the method after the opening bracket).</p><p>Now we'll install the firebase tools we need to use, firestore and auth:</p><pre><code>cd ios
pod install --repo-update
cd ..
npm install --save @react-native-firebase/auth
npm install --save @react-native-firebase/firestore
cd ios &amp;&amp; pod install 
cd ..
npx react-native run-ios</code></pre><p>Now we're set up.</p><p>We'll replace our App.js file as follows:</p><pre><code>/**
 * Sample React Native App
 * https://github.com/facebook/react-native
 *
 * @format
 * @flow strict-local
 */

import React, {useState, useEffect} from 'react';
import {
  SafeAreaView,
  StyleSheet,
  ScrollView,
  View,
  Text,
  StatusBar,
} from 'react-native';

import {
  Header,
  LearnMoreLinks,
  Colors,
  DebugInstructions,
  ReloadInstructions,
} from 'react-native/Libraries/NewAppScreen';

import auth from '@react-native-firebase/auth';

auth()
  .signInAnonymously()
  .then(() =&gt; {
    console.log('User signed in anonymously');
  })
  .catch(error =&gt; {
    if (error.code === 'auth/operation-not-allowed') {
      console.log('Enable anonymous in your firebase console.');
    }

    console.error(error);
  });


function App() {
  // Set an initializing state whilst Firebase connects
  const [initializing, setInitializing] = useState(true);
  const [user, setUser] = useState();

  // Handle user state changes
  function onAuthStateChanged(user) {
    setUser(user);
    if (initializing) setInitializing(false);
  }

  useEffect(() =&gt; {
    const subscriber = auth().onAuthStateChanged(onAuthStateChanged);
    return subscriber; // unsubscribe on unmount
  }, []);

  if (initializing) return null;

  if (!user) {
    return (
      &lt;&gt;
      &lt;StatusBar barStyle="dark-content" /&gt;
      &lt;SafeAreaView&gt;
        &lt;ScrollView
          contentInsetAdjustmentBehavior="automatic"
          style={styles.scrollView}&gt;
          &lt;Header /&gt;
          &lt;View style={styles.body}&gt;
            &lt;View style={styles.sectionContainer}&gt;

        &lt;Text&gt;Error connecting to Firebase&lt;/Text&gt;

        &lt;/View&gt;
        &lt;/View&gt;
      &lt;/ScrollView&gt;
      &lt;/SafeAreaView&gt;
      &lt;/&gt;
    );
  }

  return (
      &lt;&gt;
      &lt;StatusBar barStyle="dark-content" /&gt;
      &lt;SafeAreaView&gt;
        &lt;ScrollView
          contentInsetAdjustmentBehavior="automatic"
          style={styles.scrollView}&gt;
          &lt;Header /&gt;
          &lt;View style={styles.body}&gt;
            &lt;View style={styles.sectionContainer}&gt;

            &lt;Text&gt;Welcome {user.email}&lt;/Text&gt;

        &lt;/View&gt;
        &lt;/View&gt;
      &lt;/ScrollView&gt;
      &lt;/SafeAreaView&gt;
      &lt;/&gt;
  );
}

const styles = StyleSheet.create({
  scrollView: {
    backgroundColor: Colors.lighter,
  },
  engine: {
    position: 'absolute',
    right: 0,
  },
  body: {
    backgroundColor: Colors.white,
  },
  sectionContainer: {
    marginTop: 32,
    paddingHorizontal: 24,
  },
  sectionTitle: {
    fontSize: 24,
    fontWeight: '600',
    color: Colors.black,
  },
  sectionDescription: {
    marginTop: 8,
    fontSize: 18,
    fontWeight: '400',
    color: Colors.dark,
  },
  highlight: {
    fontWeight: '700',
  },
  footer: {
    color: Colors.dark,
    fontSize: 12,
    fontWeight: '600',
    padding: 4,
    paddingRight: 12,
    textAlign: 'right',
  },
});

export default App;
</code></pre><p>This will now connect with a persistent anonymized user ID; when sign-in completes and authenticates, onAuthStateChanged() gets called.  We should be able to see ourselves in the Authentication section of Firebase after we run the app.</p><hr><h3 id="set-up-firestore">Set up Firestore</h3><p>Now we will set up 'conditions' and 'events' collections.  These will be creatable by an anonymized but logged-in user; they'll contain a 'uid' field that matches their logged-in key, and they will only be readable if that field is their own.  This will partition data by user.</p><p>Firestore is nice in that it is schema-less; we can simply push arbitrary JSON blobs to any collection of documents.</p><p>We start in the Firestore UI; we create a datastore in a central US region in 'production' mode.  We then 'start a collection'.  I'll make one called 'conditions', and it will have some fake data (autoid, timestamp:timestamp, temperature/humidity/lux/whitelux numbers, and uid string). For the uid string, I'll copy the UID from the authenticated user section.  I'll also make a conditions document with fake data and a <em>different </em>uid so we can check access.</p><p>We'll do a similar thing for 'events'.</p><figure class="kg-card kg-image-card"><img src="https://blog.davidbramsay.com/content/images/2021/02/image-1.png" class="kg-image" alt srcset="https://blog.davidbramsay.com/content/images/size/w600/2021/02/image-1.png 600w, https://blog.davidbramsay.com/content/images/size/w942/2021/02/image-1.png 942w"></figure><figure class="kg-card kg-image-card"><img src="https://blog.davidbramsay.com/content/images/2021/02/image-2.png" class="kg-image" alt srcset="https://blog.davidbramsay.com/content/images/size/w600/2021/02/image-2.png 600w, https://blog.davidbramsay.com/content/images/size/w1000/2021/02/image-2.png 1000w, https://blog.davidbramsay.com/content/images/size/w1309/2021/02/image-2.png 1309w"></figure><p></p><p>Next we'll go to rules.  
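Before writing the rules, it helps to keep the document shape and the access check in mind.  Here is a plain-JavaScript model of both (field values are illustrative, not from the real datastore):

```javascript
// Illustrative shape of a 'conditions' document ('events' swaps the
// sensor fields for type/data); all values here are made up.
const conditionDoc = {
  uid: 'AbC123',           // matches the anonymous Firebase auth UID
  timestamp: 1613929870,   // stored as a timestamp type in Firestore
  temperature: 21.4,
  humidity: 0.52,
  lux: 310,
  whitelux: 290,
};

// The check our rules will express: access is allowed only when the
// document's uid matches the authenticated user's uid.
function canAccess(doc, authUid) {
  return doc.uid === authUid;
}

console.log(canAccess(conditionDoc, 'AbC123')); // true
console.log(canAccess(conditionDoc, 'xYz789')); // false
```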
We want to add rules for all documents: reading and creating are allowed only if the uid of the data equals the uid of the authenticated user; nothing else is allowed.  (Note that 'write' grants the create, update, and delete permissions, and 'read' can be broken into 'get' and 'list': the ability to access one document at a time versus listing all of them.)</p><p>The default rule will match any document in our database (this is a recursive matching syntax).  We will simply update this rule as follows:</p><figure class="kg-card kg-image-card"><img src="https://blog.davidbramsay.com/content/images/2021/02/image-3.png" class="kg-image" alt srcset="https://blog.davidbramsay.com/content/images/size/w600/2021/02/image-3.png 600w, https://blog.davidbramsay.com/content/images/size/w1000/2021/02/image-3.png 1000w, https://blog.davidbramsay.com/content/images/size/w1044/2021/02/image-3.png 1044w"></figure><pre><code>rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /{document=**} {
      allow create:
        if request.resource.data.uid == request.auth.uid;
      allow read:
      	if resource.data.uid == request.auth.uid;
      allow update, delete: 
        if false;
    }
  }
}</code></pre><p>This should make it so only create/read are allowed, and users can only read/write data with their own UID (assigned anonymously with the session).</p><p>Now let's edit our app to be able to read and publish data.</p><pre><code>/**
 * Sample React Native App
 * https://github.com/facebook/react-native
 *
 * @format
 * @flow strict-local
 */

import React, {useState, useEffect} from 'react';
import {
  SafeAreaView,
  StyleSheet,
  ScrollView,
  View,
  Text,
  StatusBar,
  Button,
} from 'react-native';

import {
  Header,
  LearnMoreLinks,
  Colors,
  DebugInstructions,
  ReloadInstructions,
} from 'react-native/Libraries/NewAppScreen';

import auth from '@react-native-firebase/auth';
import firestore from '@react-native-firebase/firestore';

auth()
  .signInAnonymously()
  .then(() =&gt; {
    console.log('User signed in anonymously');
  })
  .catch(error =&gt; {
    if (error.code === 'auth/operation-not-allowed') {
      console.log('Enable anonymous in your firebase console.');
    }

    console.error(error);
  });

const conditionsCollection = firestore().collection('conditions');
const eventsCollection = firestore().collection('events');


function App() {
  // Set an initializing state whilst Firebase connects
  const [initializing, setInitializing] = useState(true);
  const [user, setUser] = useState();
  const [conditionsArray, setConditionsArray] = useState([]);
  const [eventsArray, setEventsArray] = useState([]);

  // Handle user state changes
  function onAuthStateChanged(user) {
    setUser(user);
    if (!user) return; // user is null until anonymous sign-in completes

    conditionsCollection.where("uid","==", user.uid).get().then(querySnapshot =&gt; {
      let cArray = [];
      querySnapshot.forEach(doc =&gt; {
        cArray.push(doc.data());
      });

      setConditionsArray(cArray);

      eventsCollection.where("uid","==", user.uid).get().then(querySnapshot =&gt; {
        let eArray = [];
        querySnapshot.forEach(doc =&gt; {
                eArray.push(doc.data());
        });

        setEventsArray(eArray);

        if (initializing) setInitializing(false);

    }, error =&gt; {console.log(error.code);});


    }, error =&gt; {console.log(error.code);});
  }


  async function addEvent(timestamp, type, data){
    console.log('sending event for user ' + user.uid);

    let eventdoc = {
        uid: user.uid,
        timestamp: timestamp,
        type: type,
        data: data
    };

    setEventsArray([...eventsArray, eventdoc]);
    await eventsCollection.add(eventdoc);

  }

  async function addCondition(timestamp, temp, humd, lux, wlux){
    console.log('sending condition for user ' + user.uid);

    let conditiondoc = {
        uid: user.uid,
        timestamp: timestamp,
        temperature: temp,
        humidity: humd,
        lux: lux,
        whitelux: wlux
    };

    setConditionsArray([...conditionsArray, conditiondoc]);
    await conditionsCollection.add(conditiondoc);
  }

  function addRandomEvent(){
    let ts = new Date();
    addEvent(ts, 'TX_EXAMPLE_TYPE', 2);
  }

  function addRandomCondition(){
    let ts = new Date();
    addCondition(ts, Math.random(), Math.random(), Math.random(), Math.random());
  }

  async function getAllConditions(){
    return await conditionsCollection.get();
  }

  async function getAllEvents(){
    return await eventsCollection.get();
  }

  useEffect(() =&gt; {
    const subscriber = auth().onAuthStateChanged(onAuthStateChanged);
    return subscriber; // unsubscribe on unmount
  }, []);

 const conditionItems = conditionsArray.map((conditions) =&gt;
       &lt;Text key={conditions.timestamp + conditions.uid}&gt;
        {conditions.timestamp.toString()} {"\n"} {conditions.temperature} {"\n\n"}
       &lt;/Text&gt;  );

  return (
      &lt;&gt;
      &lt;StatusBar barStyle="dark-content" /&gt;
      &lt;SafeAreaView&gt;
        &lt;ScrollView
          contentInsetAdjustmentBehavior="automatic"
          style={styles.scrollView}&gt;
          &lt;Header /&gt;
          &lt;View style={styles.body}&gt;
            &lt;View style={styles.sectionContainer}&gt;

            {user ?
                &lt;Text&gt;Welcome {user.uid}&lt;/Text&gt; :
                &lt;Text&gt;User not logged in&lt;/Text&gt;
            }
            {initializing ?
                &lt;Text&gt;initializing&lt;/Text&gt; :
                &lt;Text&gt;initialized&lt;/Text&gt;
            }

        &lt;Button
        title="Send Random Condition"
        color="#010101"
        onPress={addRandomCondition.bind(this)}
        /&gt;

        {conditionItems}

        &lt;/View&gt;
        &lt;/View&gt;
      &lt;/ScrollView&gt;
      &lt;/SafeAreaView&gt;
      &lt;/&gt;
  );
}

const styles = StyleSheet.create({
  scrollView: {
    backgroundColor: Colors.lighter,
  },
  engine: {
    position: 'absolute',
    right: 0,
  },
  body: {
    backgroundColor: Colors.white,
  },
  sectionContainer: {
    marginTop: 32,
    paddingHorizontal: 24,
  },
  sectionTitle: {
    fontSize: 24,
    fontWeight: '600',
    color: Colors.black,
  },
  sectionDescription: {
    marginTop: 8,
    fontSize: 18,
    fontWeight: '400',
    color: Colors.dark,
  },
  highlight: {
    fontWeight: '700',
  },
  footer: {
    color: Colors.dark,
    fontSize: 12,
    fontWeight: '600',
    padding: 4,
    paddingRight: 12,
    textAlign: 'right',
  },
});

export default App;

</code></pre><p>We notice that *even when we are offline*, we log in as a user instantly.  When we are offline, it takes about 10 seconds for Firebase to time out trying to connect to the actual server, and instead 'initialize' offline using the cache.  Data sent when we are offline is cached and makes it online when we get online; data 'sent' when we're offline before the cache is initialized is added to the cache seamlessly. </p><p>This will give the app user an anonymous ID that persists over time; it will log them into that ID even offline; it will allow you to send data even when offline, and will attempt to cache/persist some data when offline; it will give them create access to send data to the Firebase server, and read access to their own data, which will be updated to the relevant state as soon as the user is authenticated.</p><p>Attempting to access data without the <code>where("uid","==",user.uid)</code> phrase will give an unauthorized error, exactly as we want.</p><p>It's a nice, seamless experience that handles local caching, updating, user sessions, data privacy, and authentication without having to worry about online/offline status ourselves.</p><hr><h2 id="querying-chronologically">Querying Chronologically</h2><p>In order to filter our returned values chronologically and just pull the most recent, we need to set a composite index on our timestamp.</p><p>Go to 'indexes' in Cloud Firestore and add a composite index for 'uid' and 'timestamp'.  We'll use this across collections, so give our collection id 'conditions'.  If we attempt this query in our JavaScript code before the index exists, the error.message actually contains a link that will create this composite index automatically for us, which we can just copy.  
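To make the query concrete, here is what <code>where</code> plus <code>orderBy</code> returns, modeled on an in-memory array in plain JavaScript (the documents are made up; field names match our 'conditions' collection):

```javascript
// Made-up documents shaped like our 'conditions' collection.
const docs = [
  { uid: 'userA', timestamp: 300, temperature: 21.0 },
  { uid: 'userB', timestamp: 100, temperature: 19.5 },
  { uid: 'userA', timestamp: 100, temperature: 20.2 },
];

// In-memory equivalent of .where("uid","==",uid).orderBy("timestamp").limit(n):
function queryConditions(uid, n) {
  return docs
    .filter(d => d.uid === uid)                 // where("uid", "==", uid)
    .sort((a, b) => a.timestamp - b.timestamp)  // orderBy("timestamp")
    .slice(0, n);                               // limit(n)
}

console.log(queryConditions('userA', 2).map(d => d.timestamp)); // [ 100, 300 ]
```

The composite index exists so Firestore can serve exactly this filter-then-order pattern server-side, without scanning the whole collection.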
This is the preferred method to get things right.</p><figure class="kg-card kg-image-card"><img src="https://blog.davidbramsay.com/content/images/2021/02/image-4.png" class="kg-image" alt srcset="https://blog.davidbramsay.com/content/images/size/w600/2021/02/image-4.png 600w, https://blog.davidbramsay.com/content/images/size/w974/2021/02/image-4.png 974w"></figure><p>Now we can query:</p><pre><code>
    conditionsCollection
    .where("uid","==", user.uid)
    .orderBy("timestamp")
    .get()
    .then(querySnapshot =&gt; {

</code></pre><p>We can also limit our responses with <code>.limit(num)</code>, and we should be able to add a 'where' clause on timestamp before the orderBy statement as well.</p><hr><h2 id="add-a-chart">Add a Chart</h2><p>First we'll install some charting libraries:</p><pre><code>npm install --save react-native-chart-kit react-native-svg
cd ios &amp;&amp; pod install
cd ..
npx react-native run-ios</code></pre><p>Now we'll edit our main App to be as follows:</p><pre><code>/**
 * Sample React Native App
 * https://github.com/facebook/react-native
 *
 * @format
 * @flow strict-local
 */

import React, {useState, useEffect} from 'react';
import {
  SafeAreaView,
  StyleSheet,
  ScrollView,
  View,
  Text,
  StatusBar,
  Button,
} from 'react-native';

import {LineChart} from "react-native-chart-kit";
import { Dimensions } from 'react-native';

import {
  Header,
  LearnMoreLinks,
  Colors,
  DebugInstructions,
  ReloadInstructions,
} from 'react-native/Libraries/NewAppScreen';

import auth from '@react-native-firebase/auth';
import firestore from '@react-native-firebase/firestore';

auth()
  .signInAnonymously()
  .then(() =&gt; {
    console.log('User signed in anonymously');
  })
  .catch(error =&gt; {
    if (error.code === 'auth/operation-not-allowed') {
      console.log('Enable anonymous in your firebase console.');
    }

    console.error(error);
  });

const conditionsCollection = firestore().collection('conditions');
const eventsCollection = firestore().collection('events');


function App() {
  // Set an initializing state whilst Firebase connects
  const [initializing, setInitializing] = useState(true);
  const [user, setUser] = useState();
  const [conditionsArray, setConditionsArray] = useState([]);
  const [eventsArray, setEventsArray] = useState([]);

  // Handle user state changes
  function onAuthStateChanged(user) {
    setUser(user);
    if (!user) return; // user is null until anonymous sign-in completes

    conditionsCollection.where("uid","==", user.uid).orderBy("timestamp").get().then(querySnapshot =&gt; {
      let cArray = [];
      querySnapshot.forEach(doc =&gt; {
        cArray.push(doc.data());
      });

      setConditionsArray(cArray);

      cArray.forEach(el =&gt; {console.log(el);});

      eventsCollection.where("uid","==", user.uid).get().then(querySnapshot =&gt; {
        let eArray = [];
        querySnapshot.forEach(doc =&gt; {
                eArray.push(doc.data());
        });

        setEventsArray(eArray);

        if (initializing) setInitializing(false);

    }, error =&gt; {console.log(error.code + ": " + error.message);});


    }, error =&gt; {console.log(error.code + ": " + error.message);});
  }


  async function addEvent(timestamp, type, data){
    console.log('sending event for user ' + user.uid);

    let eventdoc = {
        uid: user.uid,
        timestamp: timestamp,
        type: type,
        data: data
    };

    setEventsArray([...eventsArray, eventdoc]);
    await eventsCollection.add(eventdoc);

  }

  async function addCondition(timestamp, temp, humd, lux, wlux){
    console.log('sending condition for user ' + user.uid);

    let conditiondoc = {
        uid: user.uid,
        timestamp: timestamp,
        temperature: temp,
        humidity: humd,
        lux: lux,
        whitelux: wlux
    };

    setConditionsArray([...conditionsArray, conditiondoc]);
    await conditionsCollection.add(conditiondoc);
  }

  function addRandomEvent(){
    let ts = firestore.Timestamp.fromDate(new Date());
    addEvent(ts, 'TX_EXAMPLE_TYPE', 2);
  }

  function addRandomCondition(){
    let ts = firestore.Timestamp.fromDate(new Date());
    addCondition(ts, Math.random(), Math.random(), Math.random(), Math.random());
  }

  async function getAllConditions(){
    return await conditionsCollection.get();
  }

  async function getAllEvents(){
    return await eventsCollection.get();
  }

  useEffect(() =&gt; {
    const subscriber = auth().onAuthStateChanged(onAuthStateChanged);
    return subscriber; // unsubscribe on unmount
  }, []);


 const conditionItems = conditionsArray.map((conditions) =&gt;
       &lt;Text key={conditions.timestamp + conditions.uid + conditions.temperature}&gt;
        {conditions.timestamp.toString()} {"\n"} {conditions.temperature} {"\n\n"}
       &lt;/Text&gt;  );

  return (
      &lt;&gt;
      &lt;StatusBar barStyle="dark-content" /&gt;
      &lt;SafeAreaView&gt;
        &lt;ScrollView
          contentInsetAdjustmentBehavior="automatic"
          style={styles.scrollView}&gt;
          &lt;Header /&gt;
          &lt;View style={styles.body}&gt;
            &lt;View style={styles.sectionContainer}&gt;

            {user ?
                &lt;Text&gt;Welcome {user.uid}&lt;/Text&gt; :
                &lt;Text&gt;User not logged in&lt;/Text&gt;
            }
            {initializing ?
                &lt;Text&gt;initializing&lt;/Text&gt; :
                &lt;Text&gt;initialized&lt;/Text&gt;
            }

        &lt;Button
        title="Send Random Condition"
        color="#010101"
        onPress={addRandomCondition.bind(this)}
        /&gt;

        {conditionsArray.length?
        &lt;LineChart data={{
                labels: ['   ' + new Date(conditionsArray[0]['timestamp'].toDate()).toLocaleString()]
                        .concat(Array(2).fill("")
                        .concat([new Date(conditionsArray[conditionsArray.length-1]['timestamp'].toDate()).toLocaleString()])),
                datasets: [
                    {
                    data: conditionsArray.map(el =&gt; {return el['temperature'];}), //[20, 45, 28, 80, 99, 43],
                    color: (opacity = 1) =&gt; `rgba(134, 65, 244, ${opacity})`, // optional
                    strokeWidth: 2
                    }
                ],
                legend: ["Temperature"]
            }}
            width={0.85*Dimensions.get('window').width}
            height={180}
            chartConfig={chartConfig}
            bezier
        /&gt;
        :&lt;Text&gt; no data yet &lt;/Text&gt;}

        {conditionItems}

        &lt;/View&gt;
        &lt;/View&gt;
      &lt;/ScrollView&gt;
      &lt;/SafeAreaView&gt;
      &lt;/&gt;
  );
}

const styles = StyleSheet.create({
  scrollView: {
    backgroundColor: Colors.lighter,
  },
  engine: {
    position: 'absolute',
    right: 0,
  },
  body: {
    backgroundColor: Colors.white,
  },
  sectionContainer: {
    marginTop: 32,
    paddingHorizontal: 24,
  },
  sectionTitle: {
    fontSize: 24,
    fontWeight: '600',
    color: Colors.black,
  },
  sectionDescription: {
    marginTop: 8,
    fontSize: 18,
    fontWeight: '400',
    color: Colors.dark,
  },
  highlight: {
    fontWeight: '700',
  },
  footer: {
    color: Colors.dark,
    fontSize: 12,
    fontWeight: '600',
    padding: 4,
    paddingRight: 12,
    textAlign: 'right',
  },
});

const chartConfig = {
    backgroundColor: '#ffffff',
    backgroundGradientFrom: '#ffffff',
    backgroundGradientTo: '#ffffff',
    labelColor: (opacity = 1) =&gt; `rgba(0, 0, 0, ${opacity})`,
    color: (opacity = 1) =&gt; `rgba(0, 0, 0, ${opacity})`
};

export default App;
</code></pre><p>Notice that we're now using timestamp objects from the Firestore library instead of normal JavaScript datetime objects.  These can easily be converted back and forth– <a href="https://firebase.google.com/docs/reference/js/firebase.firestore.Timestamp">check the documentation.</a>  Our queries are now ordered by timestamp.</p><p>We should see the following app, with all the data also appearing in our Firebase console.  Clicking 'Send Random Condition' will update our list, our chart, and our displayed axis timestamp, and send the data to Firebase:</p><figure class="kg-card kg-image-card"><img src="https://blog.davidbramsay.com/content/images/2021/02/image-5.png" class="kg-image" alt></figure><p>For the full code for this example, see the repo here: <a href="https://github.com/dramsay9/ReactNativeFirebaseTest">https://github.com/dramsay9/ReactNativeFirebaseTest</a>.  Feel free to reuse!</p>]]></content:encoded></item><item><title><![CDATA[Getting Started with STM32WB and BLE Communications]]></title><description><![CDATA[<p>My goal is to get an example running with FreeRTOS and threads that manages general BLE throughput to an app in the background.  To build the app, I'm using React-Native, the JavaScript-based development environment that can cross-compile.  Let's jump right in!</p><h2 id="running-an-example-on-the-stm32wb-nucleo-board">Running an Example on the STM32WB Nucleo</h2>]]></description><link>https://blog.davidbramsay.com/getting-started-with-stm32wb-and-ble-communications/</link><guid isPermaLink="false">5ef757ecc599f3365f842a25</guid><dc:creator><![CDATA[David Ramsay]]></dc:creator><pubDate>Wed, 13 Jan 2021 23:42:12 GMT</pubDate><content:encoded><![CDATA[<p>My goal is to get an example running with FreeRTOS and threads that manages general BLE throughput to an app in the background.  To build the app, I'm using React-Native, the JavaScript-based development environment that can cross-compile.  
Let's jump right in!</p><h2 id="running-an-example-on-the-stm32wb-nucleo-board">Running an Example on the STM32WB Nucleo Board</h2><p>STM32CubeIDE and STM32CubeProgrammer are the required/useful software from STM's website. This section falls into four parts: (1) set up the toolchain so you can talk to the Nucleo board, (2) flash the Bluetooth firmware, (3) ensure we can run a pre-compiled BLE example, and (4) import an STM32 example into CubeMX and run a BLE example.  </p><h3 id="toolchain-setup">Toolchain Setup</h3><p>Step 1 is to download <a href="https://www.st.com/en/development-tools/stm32cubeide.html#overview&amp;secondary=st-get-software">STM32CubeIDE</a> and <a href="https://www.st.com/en/development-tools/stm32cubeprog.html">STM32CubeProgrammer</a> – I previously had a lot of issues running these natively on Mac, but it seems like version 1.3.0 works great (Windows support has always been good).  To set up CubeProgrammer, you may have to download the <a href="https://www.oracle.com/java/technologies/javase/javase8-archive-downloads.html#jre-8u160-oth-JPR">Java JDK </a>(one with JavaFX, aka version 8– if you've installed a more recent one [ <code>java -version</code> in terminal to check] you can simply delete the folder from <code>/Library/Java/JavaVirtualmachines</code>).  This can be installed with brew on Mac OSX:</p><pre><code>#if you don't have brew, get it with this command
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
brew update

brew tap AdoptOpenJDK/openjdk
brew cask install adoptopenjdk8</code></pre><p>You may also have to show the Package Contents and click on the setup executable manually.  You'll also need the ST-Link driver, installed with Homebrew: <code>brew install stlink</code>.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.davidbramsay.com/content/images/2020/06/image-1.png" class="kg-image" alt srcset="https://blog.davidbramsay.com/content/images/size/w600/2020/06/image-1.png 600w, https://blog.davidbramsay.com/content/images/size/w807/2020/06/image-1.png 807w"><figcaption>click on this if clicking on the Application File doesn't work.</figcaption></figure><p>The CubeProgrammer interface is a new tool, and it's actually incredibly useful for checking/setting fuses, downloading firmware and programs, and reading/writing memory.  It also works over the ST-Link, SWD, and DFU interfaces, so it's a great one-stop shop for interfacing with the STM32.</p><p>The easiest first step is to connect your USB to the STM32 Nucleo board ST_LINK port (closer to the side/header).  Of course, the Nucleo board has two STM32s on it– the main WB chip we'll be talking about for the rest of the time, and a secondary STM32 chip that serves as an ST-LINK device for programming the WB.  The programmer chip firmware should be updated first.  </p><p>When you connect the STM32, you should see some flashing LEDs, and a new device should appear in your Finder/File Explorer window.  If we open STM32CubeProgrammer, we should see a serial number if we refresh the right sidebar with 'ST-LINK' selected at the very top.  
Let's click Firmware Upgrade (again on the right) to update the included programmer on the Nucleo Board.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.davidbramsay.com/content/images/2020/06/image.png" class="kg-image" alt srcset="https://blog.davidbramsay.com/content/images/size/w600/2020/06/image.png 600w, https://blog.davidbramsay.com/content/images/size/w1000/2020/06/image.png 1000w, https://blog.davidbramsay.com/content/images/size/w1197/2020/06/image.png 1197w"><figcaption>Upgrade your ST-Link using the blue 'Firmware Upgrade' button on the bottom right, which pops up this dialog box.</figcaption></figure><p>You'll have to click 'Open in Update Mode' and then 'Upgrade' to actually update the programmer.</p><h3 id="flashing-the-wireless-co-processor-firmware">Flashing the Wireless Co-processor Firmware</h3><p>This is a two-step process– we need to upgrade the underlying firmware on the coprocessor first, and then flash the wireless stack we want to use (BLE in this case).</p><p>First, we should download the <a href="https://github.com/STMicroelectronics/STM32CubeWB">firmware pack from github</a>.  Within it we find the binaries for the wireless co-processor in:</p><p><code>STM32CubeWB/Projects/STM32WB_Copro_Wireless_Binaries/STM32WB5x</code> </p><p>You'll also find <code>Release Notes.html</code> in that folder, which gives us the memory addresses we need to use for each binary in a table towards the bottom called <strong>Firmware Upgrade Services Binary Table </strong>for our first step and <strong>Wireless Coprocessor Binary Table </strong>for our second step.  We can see from this table that for the Nucleo, which is designed around the STM32WB55RG, the FUS firmware belongs at location <code>0x080EC000</code> and the stm32wb5x_BLE_Stack_full_fw should be flashed to <code>0x080CB000</code>. 
</p><hr><p><em><strong>Option 1 (preferred): Use ST-Link</strong></em></p><p>On the Nucleo board, you should be able to flash the firmware using the ST-Link. Given the same setup in STM32CubeProgrammer as above, you should be able to connect to the board simply by clicking the big green 'Connect' button.  Make sure it's in Normal/Software Reset mode– you may need to hold down the Reset button on the board, hit Connect in the software, and immediately release the Reset button to get it to work. </p><p><em><strong>Option 2: Use DFU mode</strong></em></p><p>If the above is giving you any trouble, we can also program the board in DFU mode.  We have to set up the jumpers on the board to enable DFU mode by moving JP1 to USB_MCU and connecting pins 5 and 7 on CN7 (<a href="https://visualgdb.com/tools/STM32WBUpdater/connecting/">as shown here</a>).  We need to make sure we have the proper DFU drivers installed; on OSX, <code>brew install dfu-util</code> should do it; on Windows, the DFU drivers should be installed in <code>C:\Program Files (x86)\STMicroelectronics\STM32Cube\STM32CubeProgrammer\Drivers\DFU_Driver</code>, where you can click on <code>STM32Bootloader.bat</code>.  It's also possible to get <a href="https://my.st.com/content/my_st_com/en/products/development-tools/software-development-tools/stm32-software-development-tools/stm32-programmers/stsw-stm32080.license=1593427888500.product=STSW-STM32080.version=3.0.6.html">Windows DFU drivers from STM directly</a>.  Now if you plug it into your computer from the USB_USER port, it should appear as a DFU device on your system and be accessible under the USB tab within CubeProgrammer.  On Windows, the final thing worth considering if none of this is working is <a href="https://zadig.akeo.ie/">zadig libusbK conversion</a> of existing drivers. </p><p>This has worked for me with Windows– I haven't tested it on OSX yet.  
People (myself included) report difficulty re-connecting over DFU once a firmware is flashed, so hopefully option 1 is working for you.  If you really need DFU, some people have reported success by wiping the memory using the ST-Link and CubeProgrammer (use the Erasing and Programming screen and click Erase Selected Sectors after selecting all).  You could also toggle the RDP bit under "OB" option bytes (to 0xBB and then 0xAA), which will clear some bits of memory (make sure PCROP_RDP is checked under PCROP Options).</p><p>For custom boards, getting into DFU mode simply requires that BOOT0 be pulled to VDD (3.3V), and that the USB socket connect GND to GND, D- to PA11, and D+ to PA12.</p><hr><p>Now that we are ideally connected to the chip using CubeProgrammer (either over the ST-Link or over DFU), we can go ahead and flash the firmware.  </p><p>We're first going to delete the existing firmware.  As of this post, there are three firmware versions (0.5.3, 1.0.2, and 1.1.0) and they must be upgraded in order (you can't jump to 1.1.0 from 0.5.3 without installing 1.0.2)– our next step will be to upgrade firmware versions in order.  Finally, we'll flash the BLE firmware.</p><p> We can get to the Firmware Upgrade Service (FUS) by clicking the wifi-looking icon on the left menu; first click 'delete firmware'.  After that operation completes, we'll flash FW 1.0.2 to address <code>0x080EC000</code>:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.davidbramsay.com/content/images/2020/06/image-2.png" class="kg-image" alt srcset="https://blog.davidbramsay.com/content/images/size/w600/2020/06/image-2.png 600w, https://blog.davidbramsay.com/content/images/size/w614/2020/06/image-2.png 614w"><figcaption>Click on the Wifi-looking icon to get to the FUS upgrade screen. First we'll attempt to update to 1.0.2.</figcaption></figure><p>You'll probably see a lot of FUS_STATE_IMG_NOT_AUTHENTIC warnings– I certainly have while going through these processes.  
This could mean you're trying to upgrade the firmware to an image that isn't allowed from the current firmware (like 'upgrading' to the same image, or skipping firmware version 1.0.2) or that something is going wrong with the upgrade process.  Try it a couple of times if you get this warning; if it persists, file it away and go on to the next step.  If that fails too, you can come back to this.</p><p>We can then try to upgrade again to the latest version of the firmware, at the same memory address:</p><figure class="kg-card kg-image-card"><img src="https://blog.davidbramsay.com/content/images/2020/06/image-3.png" class="kg-image" alt srcset="https://blog.davidbramsay.com/content/images/size/w600/2020/06/image-3.png 600w, https://blog.davidbramsay.com/content/images/size/w614/2020/06/image-3.png 614w"></figure><p>Finally, we install the BLE full stack at <code>0x080CB000</code> with 'first install' checked, repeating the process above.</p><hr><p>As a note, it is also possible to do this from the command line with the tools installed alongside CubeProgrammer.</p><p>For Windows, we need to add <code>C:\Program Files (x86)\STMicroelectronics\STM32Cube\STM32CubeProgrammer\bin</code> to our path; the way to do this is to search for 'env' from the Windows toolbar, click 'Environment Variables', click Path in the first dialog, and add the above folder.  
</p><p>For OSX we can add the path of our application <code>/Applications/STMicroelectronics/STM32Cube/STM32CubeProgrammer/STM32CubeProgrammer.app/Contents/MacOs/bin</code> using a typical export command: </p><figure class="kg-card kg-code-card"><pre><code>export PATH=$PATH:/Applications/STMicroelectronics/STM32Cube/STM32CubeProgrammer/STM32CubeProgrammer.app/Contents/MacOs/bin</code></pre><figcaption>This can of course be added to your ~/.bashrc to keep the commands accessible.</figcaption></figure><p>The command <code>STM32_Programmer_CLI</code> in OSX terminal or <code>STM32_Programmer_CLI.exe</code> in Windows cmd<strong> </strong>should both work; the port is specified as <code>port=usb1</code> for DFU connected devices, or <code>port=/dev/tty.usbmodem&lt;XXXX&gt;</code> for OSX and <code>port=COM&lt;X&gt;</code> for Windows over ST-Link.</p><p>Some example commands that have worked for me to flash the firmware over DFU in Windows are shown here:</p><pre><code># first, move to the folder with the coprocessor binaries

STM32_Programmer_CLI.exe -c port=usb1 -fwdelete
STM32_Programmer_CLI.exe -c port=usb1 -r32 0x20030030 1


# If the above says 00050300, the chip is at FUS v0.5.3. For any STM32WB5xx
# this MUST first be updated to v1.0.2 before you can update further (the
# latest is 1.1.0).  The prototype command is:
#   STM32_Programmer_CLI.exe -c port=usb1 -fwupgrade [FUS_Binary] [Install@] firstinstall=0
# where the Release Notes give us the Install@ parameter depending on the
# binary.  For STM32WB5xxG, as with the Nucleo, this is:

STM32_Programmer_CLI.exe -c port=usb1 -fwupgrade stm32wb5x_FUS_fw_1_0_2.bin 0x080EC000 firstinstall=0


#now if we run the above command again we should see a new firmware version:

STM32_Programmer_CLI.exe -c port=usb1 -r32 0x20030030 1


# Now we'll upgrade again to the latest firmware (note the difference in binary name):

STM32_Programmer_CLI.exe -c port=usb1 -fwupgrade stm32wb5x_FUS_fw.bin 0x080EC000 firstinstall=0

# This gives a 'firmware not authentic' error, no matter how I try to do the
# upgrade.  We're fine with just 1.0.2 though, so we'll ignore this issue.
# Now we'll update the actual binary with the full BLE stack:

STM32_Programmer_CLI.exe -c port=usb1 -fwupgrade stm32wb5x_BLE_Stack_full_fw.bin 0x080CB000 firstinstall=0</code></pre><p><strong>NOTE: If your Nucleo board is giving you a 'no device found on target' error, make sure it is in 'Normal'/'Software Reset' mode, HOLD DOWN the reset button, initiate the programming, and THEN release the reset button once the ST-Link connection is 'WAITING FOR DEBUGGER CONNECTION'.</strong></p><h3 id="download-and-run-a-pre-compiled-example">Download and Run a Pre-compiled Example</h3><p>To make sure the BLE example is working properly out of the box, it's best to use the BLE_HeartRate binary and flash it directly to the board, and check it using the STM32 app downloaded from the iOS store.  </p><p>We can again use Cube Programmer with <code>/Projects/P-NUCLEO-WB55.Nucleo/Applications/BLE/BLE_HeartRate/Binary/BLE_HeartRate_reference.hex</code> from the second tab, 'Erasing and Programming'.  Simply pick the file, enter a start address of <code>0x08000000</code>, and click 'Start Programming'.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.davidbramsay.com/content/images/2020/07/image-3.png" class="kg-image" alt srcset="https://blog.davidbramsay.com/content/images/size/w600/2020/07/image-3.png 600w, https://blog.davidbramsay.com/content/images/size/w1000/2020/07/image-3.png 1000w, https://blog.davidbramsay.com/content/images/size/w1600/2020/07/image-3.png 1600w, https://blog.davidbramsay.com/content/images/size/w1876/2020/07/image-3.png 1876w"><figcaption>Automatic mode is cool – when we have made several custom boards, it will just automatically download our code as soon as the ST-LINK detects a target board! 
Great for fast programming of small runs.</figcaption></figure><p>Now we install the iOS App 'ST BLE Sensor App':</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.davidbramsay.com/content/images/2020/07/image-4.png" class="kg-image" alt srcset="https://blog.davidbramsay.com/content/images/size/w600/2020/07/image-4.png 600w, https://blog.davidbramsay.com/content/images/size/w1000/2020/07/image-4.png 1000w, https://blog.davidbramsay.com/content/images/size/w1600/2020/07/image-4.png 1600w, https://blog.davidbramsay.com/content/images/size/w1920/2020/07/image-4.png 1920w"><figcaption>(1) Download the ST BLE Sensor App. (2) We click connect to device and the magnifying glass, and we find our STM32! (3) Clicking on it connects us to the HR app and we see streaming data.</figcaption></figure><h3 id="download-and-run-an-example-from-cubeide">Download and Run an Example from CubeIDE</h3><p>Now that we know we have working BLE firmware and working code, and a way to test the STM32 example, let's try to import an example into CubeIDE where we can edit the C code, download it to the Nucleo, and see if we get the same results.</p><p>For this example, we'll use the same BLE_HeartRate project, except now it will be the uncompiled C code we can then edit.  </p><p>Select <code>File-&gt;Open Projects from File System</code>, then hit the directory button and select the <code>Projects\P-NUCLEO-WB55.Nucleo\Applications\BLE</code> folder.  
Click <strong>Deselect All</strong>, and then choose <code>BLE\BLE_HeartRate\STM32\BLE_HeartRate</code>, which is a System Workbench project.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.davidbramsay.com/content/images/2020/07/image-16.png" class="kg-image" alt srcset="https://blog.davidbramsay.com/content/images/size/w600/2020/07/image-16.png 600w, https://blog.davidbramsay.com/content/images/size/w1000/2020/07/image-16.png 1000w, https://blog.davidbramsay.com/content/images/size/w1517/2020/07/image-16.png 1517w"><figcaption>Selecting the BLE Example Project</figcaption></figure><p>If you hit 'okay' through the next few dialogs, you should have a project that will build and debug, connecting once again to the app!  Make sure your Nucleo board connects, that HRSTM isn't visible at first, and that it appears once you 'continue' the debug session after programming.</p><p>Honestly, the OSX version of things doesn't work easily for this, and I'm not sure why– I get hard faults when I attempt to build/debug.  Mac support has slowly been improving, but for this I'd recommend virtualizing Windows with CubeIDE instead.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.davidbramsay.com/content/images/2020/07/image-7.png" class="kg-image" alt srcset="https://blog.davidbramsay.com/content/images/size/w600/2020/07/image-7.png 600w, https://blog.davidbramsay.com/content/images/size/w1000/2020/07/image-7.png 1000w, https://blog.davidbramsay.com/content/images/size/w1600/2020/07/image-7.png 1600w, https://blog.davidbramsay.com/content/images/size/w2400/2020/07/image-7.png 2400w"><figcaption>The OSX CubeIDE kept giving me hard faults after many attempts to import the same example :(</figcaption></figure><p>There are a surprising number of ways to get hung up at this step, and imported projects aren't structured as nicely as CubeIDE-native IOC-based projects.  
There also appear to be some limitations with using BLE and FreeRTOS at the same time if you stick with the standard CubeMX and CubeIDE toolchain.</p><h2 id="creating-an-example-app-that-receives-ble-from-the-stm32">Creating an Example App that Receives BLE from the STM32</h2><p>React-Native makes things nice and easy to develop across platforms.  Expo is an environment that will let you write and hot-reload code from your browser and push changes to the app store; it's a really great tool.  </p><p>Unfortunately Expo doesn't support BLE.  Also unfortunately, you can't test your code in the iPhone simulator included in Xcode; it doesn't support interfacing with the Bluetooth hardware on your Apple computer.  That means we need to do our testing live on a real iPhone, and we need a developer account to push things to a real phone.</p><p>To do this we need a developer account, Xcode, react-native, and Polidea's react-native-ble-plx library.</p><h3 id="install-the-tools">Install the Tools</h3><p>Install <a href="https://apps.apple.com/us/app/xcode/id497799835?mt=12">Xcode from the App Store</a> (or older versions from <a href="https://developer.apple.com/download/more/">here</a> if the latest version isn't compatible with your OS version and you don't want to upgrade).  </p><p>From a terminal, install the Xcode command line tools with <code>xcode-select --install</code>.  You also have to accept the Xcode license with <code>sudo xcodebuild -license</code>.  You should also set up a developer account; with Xcode open, click <strong>Xcode -&gt; Preferences</strong>.  Go to the <strong>Accounts</strong> tab, and add your developer account (which you might have to <a href="https://developer.apple.com/programs/enroll/">register for if you don't have one</a>).  
Click <strong>manage certificates</strong> and add one using the add button at the bottom– this will give you a certificate for this computer.</p><p>Now we need watchman, flow, nvm, and the react native command line interface:</p><pre><code>brew install watchman nvm flow
echo "source $(brew  --prefix nvm)/nvm.sh" &gt;&gt; ~/.bash_profile
source ~/.bash_profile
nvm install node &amp;&amp; nvm alias default node
npm install -g react-native-cli</code></pre><h3 id="set-up-the-project">Set up the Project</h3><pre><code>#create a project
npx react-native init ReactNativeBLETest
#(install cocoapods with Homebrew if prompted)

#add the ble pod and generate
cd ReactNativeBLETest
npm install --save react-native-ble-plx
sudo xcode-select --switch /Applications/Xcode.app
cd ios &amp;&amp; pod install</code></pre><p>Now we can open this in Xcode– <strong>click on the .xcworkspace file to open it</strong>, <strong>NOT on the xcodeproj file</strong> (it won't build and you'll receive a <em>cocoapod modulemap not found</em> error otherwise)!</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.davidbramsay.com/content/images/2020/07/image-21.png" class="kg-image" alt srcset="https://blog.davidbramsay.com/content/images/size/w600/2020/07/image-21.png 600w, https://blog.davidbramsay.com/content/images/size/w1000/2020/07/image-21.png 1000w, https://blog.davidbramsay.com/content/images/size/w1538/2020/07/image-21.png 1538w"><figcaption>Open the Xcode <strong>Workspace</strong> from the ios subdirectory. (<strong>Not the xcodeproj</strong>!)</figcaption></figure><p>Open the main project folder and the Info.plist; right click in the area under the rows, click <strong>Add Row</strong>, and then scroll down to find '<strong>Privacy– Bluetooth Always Usage Description</strong>'.  No need to fill in the description string.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.davidbramsay.com/content/images/2020/07/image-18.png" class="kg-image" alt srcset="https://blog.davidbramsay.com/content/images/size/w600/2020/07/image-18.png 600w, https://blog.davidbramsay.com/content/images/size/w1000/2020/07/image-18.png 1000w, https://blog.davidbramsay.com/content/images/size/w1600/2020/07/image-18.png 1600w, https://blog.davidbramsay.com/content/images/size/w1740/2020/07/image-18.png 1740w"><figcaption>See the last row here, which we added.</figcaption></figure><p>Next we select the high level project (the one with the blue icon in the left project explorer) and click <strong>Signing and Capabilities</strong>.  
Here we select our developer account from the dropdown menu (so we can sign it and test on our phone), and we also click the '<strong>+ Capability</strong>' button along the top, and select background modes to reveal an additional menu.  From this menu we can select '<strong>Uses BLE accessories' </strong>as seen below:	</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.davidbramsay.com/content/images/2020/07/image-20.png" class="kg-image" alt srcset="https://blog.davidbramsay.com/content/images/size/w600/2020/07/image-20.png 600w, https://blog.davidbramsay.com/content/images/size/w1000/2020/07/image-20.png 1000w, https://blog.davidbramsay.com/content/images/size/w1600/2020/07/image-20.png 1600w, https://blog.davidbramsay.com/content/images/size/w1722/2020/07/image-20.png 1722w"><figcaption>Signing and background BLE capabilities. You can also see that we've selected our actual iPhone as the target (along the top bar where it says <strong>[ ReactNativeBLETest &gt; iPhone (2) ]</strong>).</figcaption></figure><p>Now we'll set our target to our actual iPhone.  Plug in the phone with a cable to the computer, and in the main Xcode menu select <strong>Product -&gt; Destination -&gt; iPhone </strong><em>(under Device).  </em>You should also be able to select it from the top bar of Xcode.  </p><p><strong>Make sure your iPhone is connected to the same WIFI network as your computer.  </strong>Now let's hit the play button and see if it builds and loads onto our phone!</p><p>What should happen is that a terminal opens with the React logo.  At the same time, to start debugging, you should open a Chrome/Firefox window, navigate to <code>localhost:8081/debugger-ui/</code>, and open the developer tools console with<strong> cmd+option+I.  </strong>After you wait a minute or two, the example app should appear on your phone.  Shake it, and hit '<strong>Debug</strong>'.  
We should be able to reload the app (again from the Shake menu) after refreshing the debugger site and see the same logging in our browser console that we see below. </p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.davidbramsay.com/content/images/2020/07/image-22.png" class="kg-image" alt srcset="https://blog.davidbramsay.com/content/images/size/w600/2020/07/image-22.png 600w, https://blog.davidbramsay.com/content/images/size/w1000/2020/07/image-22.png 1000w, https://blog.davidbramsay.com/content/images/size/w1600/2020/07/image-22.png 1600w, https://blog.davidbramsay.com/content/images/size/w2400/2020/07/image-22.png 2400w"><figcaption>The React-Native terminal opens automatically, and the debugger UI shows some initial data from the actual app.</figcaption></figure><p><strong>Troubleshooting: </strong>if you get a modulemap error, you probably didn't open the project from the <strong>.xcworkspace</strong> file.  Do that instead.</p><p>If you get a '<em>Could Not Locate Device Support Files</em>' error, you have an old iPhone like me.  You'll need to <code>git clone <a href="https://github.com/iGhibli/iOS-DeviceSupport.git">https://github.com/iGhibli/iOS-DeviceSupport.git</a></code>  and then enter the folder, run <code>sudo ./deploy.py</code>, and restart Xcode.</p><h3 id="add-and-test-ble-functionality">Add and Test BLE Functionality</h3><p>We'll start by verifying we can accept data over BLE from the heart rate example we already have running on the Nucleo board.  First, double check that you can still see the Nucleo board as HRSTM within the official STM32 app.</p><p>In our main, top-level project directory, we'll add a <strong>BLE.js </strong>file alongside the App.js.  Within BLE.js we'll add this basic code:</p><figure class="kg-card kg-code-card"><pre><code>//BLE.js

import React, { Component }  from 'react';
import {
  Platform,
  View,
  Text,
} from 'react-native';

import { BleManager } from "react-native-ble-plx";

export default class BLE extends Component {

  constructor() {
    super()
    this.manager = new BleManager()
    this.state = {info: "", values: {}}
    this.ble_devices = {};
  }

  info(message) {
    this.setState({info: message})
  }

  error(message) {
    this.setState({info: "ERROR: " + message})
  }
  
  updateValue(key, value) {
    const hexval = this.base64ToHex(value);
    console.log('update ' + key + ' : ' + hexval)
    this.setState({values: {...this.state.values, [key]: hexval}})
  }

  base64ToHex(str) {
    const raw = atob(str);
    let result = '';
    for (let i = 0; i &lt; raw.length; i++) {
        const hex = raw.charCodeAt(i).toString(16);
        result += (hex.length === 2 ? hex : '0' + hex);
    }
    return result.toUpperCase();
  }

  componentDidMount() {
    if (Platform.OS === 'ios') {
      this.manager.onStateChange((state) =&gt; {
        if (state === 'PoweredOn') this.scanAndConnect()
      })
    } else {
      this.scanAndConnect()
    }
  }

  scanAndConnect() {
    this.manager.startDeviceScan(null,
                                 null, (error, device) =&gt; {
      this.info("Scanning...")
      console.log(device)

      if (error) {
        this.error(error.message)
        return
      }

      this.ble_devices[device.id] = {
            'name': device.name,
            'rssi': device.rssi
      }

      if (device.name === 'HRSTM') {
        this.info("connecting to HRSTM")
        this.manager.stopDeviceScan()
        device.connect()
          .then((device) =&gt; {
            this.info("Discovering services and characteristics")
            let r = device.discoverAllServicesAndCharacteristics()
            console.log(r)
            return r
          })
          .then((device) =&gt; {
            console.log('services')
            device.services()
              .then((services) =&gt; {
                  console.log(services)
                  console.log('characteristics')
                  for (const s in services){
                      console.log(services[s])
              device.characteristicsForService(services[s].uuid).then((c)=&gt; {
                          for (const i in c){
                              console.log(c[i])
                              if (c[i].isNotifiable){
                                  console.log('registering notifiable!!')
             device.monitorCharacteristicForService(c[i].serviceUUID, c[i].uuid, (error, characteristic) =&gt; {
                                      if (error) {
                                        this.error(error.message)
                                        return
                                      }
                                      this.updateValue(characteristic.uuid, characteristic.value)
                                  });
                              }
                          }
                      })
                }
            })
          })
          .then(() =&gt; {
              this.info("Listening")
          }, (error) =&gt; {
              this.error(error.message)
              this.info(error.message)
          })
      }
    })
  }

  render() {
    return (
      &lt;View&gt;
        &lt;Text&gt;{this.state.info}&lt;/Text&gt;
        {Object.keys(this.ble_devices).map((key) =&gt; {
            return &lt;View key={key}&gt;
                &lt;Text style={{fontWeight:'bold',color:'red'}}&gt;
                    {this.ble_devices[key]['name'] + ' : ' + this.ble_devices[key]['rssi']}
                &lt;/Text&gt;
                &lt;Text key={key}&gt;
                {key}
                &lt;/Text&gt;
                &lt;/View&gt;
        })}

        {Object.keys(this.state.values).map((key) =&gt; {
          return &lt;Text key={key}&gt;
                   {"\n" + key + ": " + (this.state.values[key])}
                 &lt;/Text&gt;
        })}
      &lt;/View&gt;
    )
  }
};</code></pre><figcaption>BLE Example Code that will scan and enumerate devices and values for the BLE heart rate example.</figcaption></figure><p>After creating this component, we can integrate it into the 'getting started' app by editing the App.js file in the top level directory.  We first add a reference to our BLE component at the top of the file:  <code>import BLE from './BLE';</code>.  Second, we add the BLE component to our main view in App.js– in the line just below the <code>&lt;Header /&gt;</code> we add <code>&lt;BLE /&gt;</code>.  </p><p>Now we'll go back to our app, shake, and click reload<em> </em>(or go to our react terminal and hit '<strong>r</strong>').  And voila!  BLE devices are scanned, HRSTM is connected to, its services and characteristics are enumerated, and it registers a streaming service to write data to the screen.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.davidbramsay.com/content/images/2020/07/image-23.png" class="kg-image" alt><figcaption>You should now have an app that looks like this! 
Congrats, it's streaming HR data over BLE and printing values to the screen (that update string at the end!)</figcaption></figure><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.davidbramsay.com/content/images/2020/07/image-24.png" class="kg-image" alt srcset="https://blog.davidbramsay.com/content/images/size/w600/2020/07/image-24.png 600w, https://blog.davidbramsay.com/content/images/size/w1000/2020/07/image-24.png 1000w, https://blog.davidbramsay.com/content/images/size/w1600/2020/07/image-24.png 1600w, https://blog.davidbramsay.com/content/images/size/w2240/2020/07/image-24.png 2240w"><figcaption>You should also see Services and Characteristics registered and printed in the debugger console, alongside updates for the HR value!</figcaption></figure><h2 id="digging-into-bluetooth">Digging Into Bluetooth</h2><p>Let's review Bluetooth, and how the communication works on the STM32WB. </p><h3 id="how-bluetooth-works">How Bluetooth Works</h3><p>Bluetooth devices have a MAC address or similar, certified 12-digit hex address (commonly <strong>BD_ADDR</strong>).  To talk over bluetooth, we have a discovery process, a negotiation, and then a connection in <em>active</em> (ongoing)<em>, sniff </em>(interval)<em>, hold </em>(predefined sleep)<em>,</em> or <em>park </em>(sleep until master commands wake)<em> </em>modes.  If devices are <em>paired</em> once, they will then be <em>bonded</em> and automatically establish a connection.  The pairing process varies; it can just work, require PINs, codes etc.</p><p>Bluetooth works with one master and (potentially many) slaves.</p><p>Bluetooth devices implement different <em>profiles</em>.  <strong>HID</strong> (human interface device) is a common one for input devices; <strong>SPP</strong> (Serial Port Profile) is good for replacing serial comms (data bursts), and there are others for audio (<strong>A2DP</strong>, APTX).  
</p><p>Bluetooth classes refer to range; 1=100m, 2=20m, and 3=5m.</p><p>We have Bluetooth 2.1 and 3, where the max basic speed is ~2.1Mbps (though 3 introduced a high speed mode capable of 24Mbps).  Bluetooth 4 introduces the categories of Classic, High Speed (HS), and Low Energy or Smart (BLE).  From now on I'll call Bluetooth Classic 'Bluetooth', as it really matches the previous 2.1/3 specs.  HS is once again 24Mbps (this is actually more like WIFI on the physical layer– normally we think of bluetooth as a quickly changing frequency hopping carrier in the 2.4GHz band, but the 24Mbps mode uses a standard wide-band frequency division technique).</p><h3 id="gap">GAP</h3><p><strong>GAP</strong> (Generic Access Profile) is how a Bluetooth device advertises and connects.  GAP defines the role for a device as central or peripheral, and controls the <em>advertising data </em>and <em>scan response </em>packets.  Advertising packets are mandatory and sent out at intervals; scan responses are optional to provide a bit more information to scanning devices in the discovery phase.  Both packets contain up to 31 bytes of data.  Advertising intervals are on the order of tens of ms to seconds.</p><p>This advertising process is usually meant to establish connections, but it can be hijacked to simply advertise data to anyone around in the 31 byte payload– this is called <em>Broadcasting</em> in BLE.  Advertising packets are structured as:</p><p></p><p><strong>				Preamble | Access Address | PDU | CRC | CTE</strong></p><p>The <strong>preamble</strong> is 1 byte of alternating 0s and 1s, for synchronization.</p><p>The <strong>access address </strong>is a 4 byte value that is unique to that type of advertisement. 
For BLE, that address is always <strong><em>0x8E89BED6</em></strong>.</p><p>The <strong>CRC </strong>is a 3 byte cyclic redundancy check (error detection), and the <strong>CTE </strong>is a small 16-160 us burst of a '1' value known as a continuous tone extension, sent 250 kHz above the main carrier frequency, to measure transmission path (IQ) quality.</p><p> The <strong>PDU,</strong> or protocol data unit, is 2-258 bytes, and is broken down as follows:</p><p></p><p><strong>												Header | Payload</strong></p><p>Where the Header is 16 bits that break down into:</p><p></p><p><strong>					PDU Type | RFU | ChSel | TxAdd | RxAdd | Length</strong></p><p><strong>PDU type </strong> is a 4 bit number, most frequently <strong>ADV_IND,</strong> or <em>0b0000</em>.  This type describes a connectable device that is advertising itself to any available central.</p><p><strong>RFU </strong>is 1 bit reserved for future use; <strong>ChSel</strong>, <strong>TxAdd</strong>, and <strong>RxAdd</strong> are 1-bit flags whose meaning depends on the PDU Type– ChSel signals channel selection algorithm support, while TxAdd and RxAdd indicate whether the addresses in the payload are public or random.</p><p><strong>Length</strong> is an 8 bit number that tells the number of bytes in the payload.</p><p>For ADV_IND Advertising PDU types, the Payload looks like:</p><p></p><p><strong>												AdvA | AdvData</strong></p><p>Where <strong>AdvA </strong>is the device's own 6 byte <strong>BD_ADDR </strong>(or MAC address), and <strong>AdvData</strong> is a 0-31 byte field of repeating units that follow the structure:</p><p></p><p>									<strong>AD Length | AD Type | AD Data</strong></p><p>Where <strong>AD Length </strong>is 1 byte and defines the combined length of <strong>AD Type </strong>and <strong>AD Data </strong>in bytes (leaving up to 29 bytes of data); and <strong>AD Type</strong> is a 1 byte value defined by the spec.  The payload typically includes a <strong>Device Name </strong>that is user friendly (AD Type=0x09, up to 248 bytes in UTF-8) and the <strong>Service UUIDs</strong> of services advertised on this device (AD Type=0x07).  
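</p><p>The repeating AD structure above is easy to walk in code; here's a minimal sketch (the function name and the sample payload bytes are my own, illustrative rather than from any real library):</p>

```javascript
// Walk an AdvData field as repeating [AD Length | AD Type | AD Data] units.
// Each AD Length byte counts the AD Type byte plus the AD Data bytes.
function parseAdvData(bytes) {
  const units = [];
  let i = 0;
  while (i < bytes.length) {
    const len = bytes[i];                          // AD Length
    if (len === 0) break;                          // a zero length ends the payload
    const type = bytes[i + 1];                     // AD Type (0x09 = Complete Local Name)
    const data = bytes.slice(i + 2, i + 1 + len);  // AD Data
    units.push({ type, data });
    i += 1 + len;                                  // skip the Length byte plus the unit
  }
  return units;
}

// A hypothetical AdvData payload advertising the name "HRSTM":
const units = parseAdvData([0x06, 0x09, 0x48, 0x52, 0x53, 0x54, 0x4d]);
console.log(String.fromCharCode(...units[0].data)); // → HRSTM
```

<p>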
</p><p>Because the payload may not be long enough to include everything we need (a single UUID will take 16 of the 31 possible bytes in the <strong>AdvData</strong> section of the packet), we might require the <em>Scan Response </em>feature to send multiple packets in a row with all of the data we care to share. </p><h3 id="gatt">GATT</h3><p><strong>GATT</strong> (Generic Attribute Profile) is the abstract, general implementation of a BLE profile.  Specific profiles like the Heart Rate Profile or the Pulse Oximeter Profile simply define the behavior and communication patterns between a peripheral and a master device.</p><p>Any GATT or GATT-style profile structures data using the <strong>Attribute Protocol </strong>– a lookup table with four columns:</p><ol><li> a 16-bit index known as the <em><strong>handle</strong></em> (0x0001- 0xFFFF), which is guaranteed not to change for a given GATT Server.</li><li>a universally unique identifier, or <strong>UUID,</strong> which describes the attribute type.  This is a 128-bit number, but predefined types from the bluetooth spec can be described in 16 bits (i.e. 0x180F, the battery service), which are 'inserted' into the standard bluetooth base 128-bit frame.  For the ones we'll use, we'll only need the 16 bit version.  For non-custom UUIDs, the full form will always look like <strong>0000xxxx-0000-1000-8000-00805F9B34FB</strong>,<strong> </strong>with the x's replaced by the 16-bit UUID of the attribute of interest,</li><li>a <strong>value</strong>, which is variable in length and format depending on the UUID, and maxes out at 512 bytes.  This value can be indexed and contain multiple pieces of arbitrarily composed data, and </li><li>a set of <strong>permissions</strong> for access type, encryption, and authorization.  </li></ol><p>The Attribute Protocol defines the peripheral as a <strong>GATT Server</strong>; the master is the <strong>GATT Client</strong>.  
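</p><p>As an aside, the 16-bit-to-128-bit UUID 'insertion' described in column (2) above can be sketched in a few lines (the helper name here is my own, not part of any library):</p>

```javascript
// Insert a 16-bit Bluetooth SIG UUID into the standard 128-bit base UUID,
// 0000xxxx-0000-1000-8000-00805F9B34FB, as described above.
function expandUuid16(uuid16) {
  const hex = uuid16.toString(16).toUpperCase().padStart(4, '0');
  return `0000${hex}-0000-1000-8000-00805F9B34FB`;
}

// 0x180F is the predefined battery service UUID:
console.log(expandUuid16(0x180F)); // → 0000180F-0000-1000-8000-00805F9B34FB
```

<p>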
Typically, a peripheral will suggest an interval at which it would like to be polled; however, it's up to the master how frequently these requests are actually initiated.  These Client-initiated commands can be a <em><strong>Read</strong></em>, a <em><strong>Write Without Response</strong></em>, or a <em><strong>Write</strong></em>.  The Client may also put the Server in <em><strong>Notification</strong> </em>mode (the peripheral will push data to the Client without expecting an acknowledgement) or in <em><strong>Indication</strong> </em>mode (the peripheral will push data and expect an acknowledgement).  In all cases, the Client controls and initiates communication.</p><p>For BLE devices, the <strong>Profile</strong> is a collection of <strong>Services</strong> that each contain <strong>Characteristics, </strong>some of which contain <strong>Descriptors</strong>.  We can use the general GATT profile, or other official GATT-based profiles like the HR profile, which follows the same rules and structure (but invokes specific services).  All services, characteristics, and descriptors have a UUID that defines their type.</p><p>For services and characteristics, it's typical to have an initial attribute <em><strong>declaration </strong></em>that is read-only and describes the layout of the data.  There is a UUID for 'SERVICE_DECLARATION' (0x2800) that starts every service; it simply contains the service UUID of the service that is about to follow.  <em>The Service UUID does not show up in the UUID field</em>; it's just the value stored in the service declaration attribute.</p><p>The 'CHARACTERISTIC_DECLARATION' (UUID=0x2803), which sits at the beginning of every characteristic, defines the properties (Write/Read/Notify etc), the handle (the address in the look up table), and the UUID for a characteristic.  
Obviously, in this case, the characteristic UUID <em>does</em> appear in the UUID field of the attribute located at that handle, along with the actual data.</p><p>After the characteristic, a descriptor, service, or characteristic declaration must follow.  A descriptor's UUID specifies the data it contains; it simply holds metadata related to the preceding characteristic.  It <em>does not</em> point to another entry.  The R/W <strong>Client Characteristic Configuration Descriptor (CCCD)</strong>, for instance, is required for any characteristic that can <em>Notify</em> or <em>Indicate </em>(pro-actively push data to the main device), as this behavior must be turned on and off by the Client.  The other properties (<em>Broadcast, Read, Write without response, Write</em>) are indicated in the Characteristic declaration and don't require a descriptor. </p><hr><p>As a quick grounding example, the <em><strong>HR Profile</strong></em> defines the 'sensor' as a Server composed of the <em>HR Service</em> and the <em>Device Information Service, </em>both mandatory. It also specifies a client that must support collecting data from the HR Service, but <em>optionally </em>implements the <em>Device Information Service.</em>  It describes optimal settings for advertising and connection of unbonded and bonded devices.</p><p>The <em><strong>HR Service</strong></em> is composed of a mandatory HR Measurement Characteristic and an HR Measurement CCCD, where the HR Measurement can Notify and the Configuration is R/W.  
It has an optional readable Body Sensor Location Characteristic and a writable HR Control Point characteristic.</p><p>The <em><strong>HR Measurement Characteristic</strong> </em>contains within its value: (1) a Flags Field [that includes value format UINT8/UINT16, skin contact status DETECTED/UNDETECTED, energy expenditure INCLUDED/UNINCLUDED (relies on HR Control Point), RR-interval INCLUDED/UNINCLUDED], (2) an HR Measurement Value Field (UINT8/16 depending on the flag), (3) an Energy Expended Field (UINT16), and (4) an RR-interval field.</p><p>The <em><strong>HR Measurement CCCD</strong></em> is a simple flag that allows the Client to control whether the Service is actively notifying it (pushing data) or is turned off.</p><p>This Service in the ATT structure would look like a table composed of the following rows (ignoring the handle indices and the permissions):</p><ul><li>UUID=SERV_DECLARE_UUID, val=HR_SERVICE_UUID</li><li>UUID=CHAR_DECLARE_UUID, val=HR_CHAR_UUID/props/handle</li><li>UUID=HR_CHAR_UUID, val=HR data</li><li>UUID=CCCD_UUID, val=Notify on/off</li></ul><p>Any service with one mandatory notification characteristic will have the same structure, and the exact same UUID values for rows 1, 2, and 4.</p><h2 id="the-basics-of-ble-on-the-stm32wb">The Basics of BLE on the STM32WB</h2><h3 id="background">Background</h3><p>As we know and have seen, the STM32WB has a separate co-processor to handle wireless communication, and we flash a binary to it that completely hides and abstracts everything that is happening on board.  To issue commands and set up our BLE peripheral, we have to use the <strong>Inter Processor Communication Controller (IPCC)</strong> or 'Mailbox'.  
Let's dig into what's going on; <a href="https://www.st.com/resource/en/application_note/dm00598033-building-wireless-applications-with-stm32wb-series-microcontrollers-stmicroelectronics.pdf">AN5289</a> and <a href="https://www.st.com/resource/en/application_note/dm00571230-stm32wbx5-bluetooth-low-energy-ble-wireless-interface-stmicroelectronics.pdf">AN5270</a> are good references for this.  (<em>A note: they use the term 'IP' in some of these documents to refer to peripherals or other functional units on board the chip; this comes from the use of <a href="https://en.wikipedia.org/wiki/Semiconductor_intellectual_property_core">'intellectual property' in the semiconductor industry</a> covering reusable soft and hard logical 'cores' that do some sort of processing, and make up the complete chip design.</em>)</p><p>CPU2 runs the BLE firmware and controls the physical and link layer (up to and including GAP/GATT); CPU1 needs a BLE host stack alongside our application. Shared peripherals between the two are protected by semaphores– these include Sem0 for the <strong>RNG</strong> (Random Number Generator; it is recommended to generate a startup pool of random numbers), Sem1 for the <strong>PKA</strong> (Public Key Algorithm), Sem2/6/7 for FLASH protection, and Sem3/4/5 for the <strong>RCC</strong> (Reset and Clock Control, which also matters for power states).</p><hr><p>Our application registers functions with the '<strong>sequencer</strong>' (up to 32 of them, and execution can be interrupted); when no functions are pending, it enters a low power state.  To use it, we have to do a few things:</p><pre><code>//set max # supported functions
UTIL_SEQ_CONF_TASK_NBR = 32 

//register a func to be used by the sequencer
UTIL_SEQ_RegTask()

//start the sequencer in the background
UTIL_SEQ_Run()

//call the function when we need to execute it
UTIL_SEQ_SetTask()
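
//for example (a sketch -- the task ID and MeasTask are our own names,
//typically defined in app_conf.h and our application code, not part of
//the sequencer API itself):
//  UTIL_SEQ_RegTask(1 &lt;&lt; CFG_TASK_MEAS_REQ_ID, UTIL_SEQ_RFU, MeasTask);
//  UTIL_SEQ_SetTask(1 &lt;&lt; CFG_TASK_MEAS_REQ_ID, CFG_SCH_PRIO_0);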
</code></pre><p>There are other useful UTIL_SEQ functions for making the sequencer idle, pausing/resuming tasks, and managing events (the sequencer can be told to wait for an 'event', and then resume operation when that event is set or EvtIdle is called.  You can also check if an event is pending, and replace an existing WaitEvt event with a new one.)</p><p>All of these functions are defined in <code>Utilities/stm32_seq.c</code>.  </p><hr><p>The framework also provides a '<strong>timer server</strong>' composed of virtual timers based on the Real Time Clock (RTC) wakeup timer.  After initializing the server with <code>HW_TS_Init</code>, the functions that create, start, stop, and delete virtual timers all follow the pattern <code>HW_TS_Command()</code> (e.g. <code>HW_TS_Create</code>, <code>HW_TS_Start</code>).  </p><p>All of these functions are defined in <code>User/Core/hwtimerserver.c</code>.</p><hr><p>The framework also provides a '<strong>low power sequencer</strong>' that can receive input from 32 users, computes the lowest power state, and gives hooks for entering/exiting low power modes.  To use it, we create an ID <code>UTIL_LPM_bm_t ID</code>, set the low power mode for either the 'off' or the 'stop' condition using <code>UTIL_LPM_SetOffMode(ID, state)</code> or <code>UTIL_LPM_SetStopMode(ID, state)</code>, and then call <code>UTIL_LPM_EnterLowPower()</code>.  Callbacks are called when entering/exiting these modes, of the form <code>UTIL_LPM_ExitOffMode</code>.
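</p><p>The 'lowest power state everyone can tolerate' computation can be pictured with a toy model (my own sketch of the bookkeeping, not ST's implementation): each user sets a bit in a 32-bit mask to veto a mode, and the deepest mode that nobody vetoes wins:</p><pre><code>#include &lt;stdint.h&gt;

typedef enum { LPM_MODE_OFF, LPM_MODE_STOP, LPM_MODE_SLEEP } lpm_mode_t;

static uint32_t off_veto;   /* bit i set: user i forbids Off  */
static uint32_t stop_veto;  /* bit i set: user i forbids Stop */

void lpm_set_off_mode(uint8_t user_id, int allowed)
{
    if (allowed) off_veto &amp;= ~(1u &lt;&lt; user_id);
    else         off_veto |=  (1u &lt;&lt; user_id);
}

void lpm_set_stop_mode(uint8_t user_id, int allowed)
{
    if (allowed) stop_veto &amp;= ~(1u &lt;&lt; user_id);
    else         stop_veto |=  (1u &lt;&lt; user_id);
}

/* the deepest mode with no vetoes; Sleep is the fallback */
lpm_mode_t lpm_lowest_power_mode(void)
{
    if (off_veto == 0)  return LPM_MODE_OFF;
    if (stop_veto == 0) return LPM_MODE_STOP;
    return LPM_MODE_SLEEP;
}</code></pre><p>A single user calling the equivalent of <code>UTIL_LPM_SetOffMode(ID, disable)</code> is enough to keep the whole system out of the deepest mode.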
</p><hr><p>It's worth poking through the headers for <code>Middlewares/STM32_WPAN/ble_xxx</code> just to see how those map to our previous understanding of GAP, GATT, and the underlying host controller abstractions.</p><h3 id="digging-into-the-stm32wb-ble-code">Digging into the STM32WB BLE Code</h3><p>So, given our understanding of GAP, GATT, services, and BLE above, the rough structure we care about is:</p><p>(1) we define a <strong>Service </strong>in <code>User/STM32_WPAN/App</code> with a suffix <code>_app.c</code>, which only publicly exposes a <code>void ServiceAPP_Init()</code> function.  This function defines behavior; for the HRService, it uses timers from the timer server to call the sequencer <code>SetTask</code> at each interval defined in the HRContext to run the measurement task it registers with <code>UTIL_SEQ_RegTask</code>.  This registered function takes a measurement and updates the value for a characteristic using (ultimately) <code>aci_gatt_update_char_value</code>, which (<em>if we look at <code>ble_gatt_aci.h</code></em>) will automatically send data for notifications/indications that are enabled.</p><p>(2) These service <code>Init()</code> functions are called from <code>APP_BLE_Init()</code> in <code>User/STM32_WPAN/app_ble.c</code>.</p><p>(3) In main, our interface to the BLE peripheral is managed with <code>APPE_Init()</code> in tandem with the sequencer, which runs with default parameters that force it to consider all registered tasks.  <code>APPE_Init()</code> is declared in <code>User/Core/app_entry.c</code>, where it initializes the timer server and power modes.</p><p>After initialization of the transport layer is called in <code>APPE_Init()</code>, we see:</p><pre><code>/**
* From now, the code is waiting for the ready event ( VS_HCI_C2_Ready )
* received on the system channel before starting the Stack
* This system event is received with APPE_SysUserEvtRx()
 */</code></pre><p> <code>APP_BLE_Init()</code> from (2) above is called from <code>APPE_SysUserEvtRx()</code> in <code>app_entry.c</code>;  <code>APPE_SysUserEvtRx()</code> runs when CPU2 sends the ready signal, which results from <code>APPE_Init()</code> initializing CPU2 in <code>app_entry.c</code>. </p><p>It's worth noting that the user section of <code>APPE_Init()</code> also calls <code>APPD_Init()</code> in <code>app_debug.c</code>, which sets up either HAL-managed debugging or exposes debugging traces on the GPIO pins.</p><p>Below is the structure as elaborated in the Application Note AN5289.  The structure is not the simplest, so it's worth spending a little time poking around and getting familiar: </p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.davidbramsay.com/content/images/2020/08/image-3.png" class="kg-image" alt srcset="https://blog.davidbramsay.com/content/images/size/w600/2020/08/image-3.png 600w, https://blog.davidbramsay.com/content/images/size/w1000/2020/08/image-3.png 1000w, https://blog.davidbramsay.com/content/images/size/w1120/2020/08/image-3.png 1120w"><figcaption>from AN5289, page 41.</figcaption></figure><h3 id="the-example-services">The Example Services</h3><p><strong>DTM</strong> is a Direct Test Mode example in line with the Bluetooth spec– it ignores CPU1 and simply passes through UART commands from the UART peripheral.  This is great for hooking it up to the computer and making sure the RF circuitry works, but it's not great for us since it doesn't expose an interface for applications running on CPU1.</p><p>The <strong>HR example </strong>is nice because it notifies the master, but it is quite complex as we've seen, and it's intended for HR data.  This could be adapted for a service that requires the notification structure with a fair amount of work.  
It implements the Bluetooth Spec defined HR Service.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.davidbramsay.com/content/images/2020/08/image-2.png" class="kg-image" alt srcset="https://blog.davidbramsay.com/content/images/size/w600/2020/08/image-2.png 600w, https://blog.davidbramsay.com/content/images/size/w1000/2020/08/image-2.png 1000w, https://blog.davidbramsay.com/content/images/size/w1110/2020/08/image-2.png 1110w"><figcaption>Specific Abstractions for the HR example; from AN5289, page 64.</figcaption></figure><p>STM offers two proprietary services: the <strong>P2P Service</strong> and the <strong>FUOTA Service </strong>(Firmware Update Over-The-Air).  The example we will actually use for <em>everything </em>we're going to do is the P2P service, which features two-way communication between devices or a device and a smartphone application.  One characteristic of the service features an R/W value (polled and set by the central to interact with the LED); the other is a notification characteristic (pushed by the peripheral to the device asynchronously when a button is pressed on the peripheral).</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.davidbramsay.com/content/images/2020/08/image-6.png" class="kg-image" alt srcset="https://blog.davidbramsay.com/content/images/size/w600/2020/08/image-6.png 600w, https://blog.davidbramsay.com/content/images/size/w702/2020/08/image-6.png 702w"><figcaption>P2P structure; From AN5289, pg 72.</figcaption></figure><p>The P2P example advertises itself over GAP using a manufacturer specific packet.  
This is still using the <strong>ADV_IND </strong>packet as described in the GAP section above, but in addition to using industry standard <strong>AD_Types </strong>like Service UUID or Local Name, it appends a Manufacturer specific AD_Type (=0xFF):</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.davidbramsay.com/content/images/2020/08/image-5.png" class="kg-image" alt srcset="https://blog.davidbramsay.com/content/images/size/w600/2020/08/image-5.png 600w, https://blog.davidbramsay.com/content/images/size/w709/2020/08/image-5.png 709w"><figcaption>GAP packet for advertising the STM32 in P2P mode. 'Type' is Advertising Type, which is 0xFF for manufacturer specific data as specified by the Bluetooth SIG.</figcaption></figure><p>It is possible to set the device up as a router or in FUOTA mode, but we will care only about setting it up as a P2P server.  For this, we expect values of <strong>DevID</strong> = 0x83 (CFG_DEV_ID_P2P_SERVER1).  <strong>Group B Features</strong> are mostly reserved for future use (RFU) with the exception of turning on OTA reboot requests and Thread support, which we'll leave off (Group A and B should just be set to all zeros).   The last six bytes are the optional <strong>BD_ADDR</strong>, which is redundant in the payload. </p><p>We create the packet payload by adding AdData (the section of repeating <code>[Length | AD Type | AD Data]</code>) using <code>aci_gap_set_discoverable()</code> for both the manufacturer specific data and a local_name, to identify our application.</p><p>For full details, I encourage you to read through the AN5289 manual pages 67-77 (Section 7.4).</p><h2 id="moving-towards-our-own-application">Moving Towards Our Own Application</h2><p>Let's start by loading in and testing the <strong>BLE_p2pServer </strong>example application, the same way we did the HR example.  
Import it into CubeMX, load/run it through the debugger onto our Nucleo board, and open the STM BLE example application to connect.  You'll see that we can control the LED from the app; we can receive a timestamped alarm when the button is pressed; we can also turn on cloud logging where we control the logging interval!  This is a pretty nice general purpose example.  Hopefully this should only take you a minute or two to get running at this point; if not, it's better to work out the kinks in your toolchain first.</p><h3 id="our-goals">Our Goals</h3><p>Now we're going to use this P2P Server as a starting point for our own custom code.  We'll make the following changes, familiarizing ourselves with the code in the process:</p><ul><li>1. Change the human readable name of the device that is advertised</li><li>2. Use our React app to get notifications from the button</li><li>3. Use our React app to turn the LED on and off and query its state</li><li>4. Send a larger packet than a 0x00/0x01 from the peripheral to the central (a timestamp using the RTC, for instance)</li><li>5. Send a larger packet than a 0x00/0x01 from the central to the peripheral (a timestamp, for instance)</li><li>6. Log the incoming peripheral data and make sure it logs with the app in the background</li><li>7. Edit the Peripheral to store data that isn't received (when notifications are off) and retransmit it when the Central bonds again</li></ul><h3 id="change-the-advertising-name">Change the Advertising Name</h3><p>First let's edit the human readable BLE advertised name.  We go to <code>STM32_WPAN/App/app_ble.c</code>, and in line 240 we see the start of the advertising data construction.</p><pre><code>#if (P2P_SERVER1 != 0)
static const char local_name[] = { AD_TYPE_COMPLETE_LOCAL_NAME ,'P','2','P','S','R','V','1'};
uint8_t manuf_data[14] = {
    sizeof(manuf_data)-1, AD_TYPE_MANUFACTURER_SPECIFIC_DATA, 
    0x01/*SKD version */,
    CFG_DEV_ID_P2P_SERVER1 /* STM32WB - P2P Server 1*/,
    0x00 /* GROUP A Feature  */, 
    0x00 /* GROUP A Feature */,
    0x00 /* GROUP B Feature */,
    0x00 /* GROUP B Feature */,
    0x00, /* BLE MAC start -MSB */
    0x00,
    0x00,
    0x00,
    0x00,
    0x00, /* BLE MAC stop */
};
#endif</code></pre><p>This is true for the various enumerated P2P_SERVERs on the following lines as well.  We can change the local name in this <code>static const char local_name</code> array.</p><p>We can also change it in line 863:</p><pre><code>if (role &gt; 0)
  {
    const char *name = "P2PSRV1";
   ...
</code></pre><p>The first instance of these is the 'local_name'; the second instance is the 'name'.  These are advertised as two separate fields, but it's best practice for our applications just to make them the same thing.  </p><p><strong>A word of caution:</strong> after updating this, I thought for a <em>while</em> that it hadn't worked.  You might need to restart/repower the Nucleo <strong>AND </strong>whatever device you're using to look at BLE; these names can be cached opaquely at a low level.  They really aren't supposed to change.  If they don't update, reset everything, turn bluetooth on/off, move on, and come back to it later.</p><h3 id="read-and-write-the-button-state-in-react">Read and Write the Button State in React</h3><p>We can use the exact same code as for the HR example here, <strong>changing the name of the device</strong> we want to connect to in the scanAndConnect function (to whatever we set it to above).  It will connect and register for the button-press notifications!  We can add a little code that will print things conditionally on the button press in our BLE render function, like so:</p><pre><code>{this.state.values['0000fe42-8e22-4541-9d4c-21edae82ed19']=='0101'
            ?
            &lt;Text&gt; Button Pushed
            &lt;/Text&gt;
            :
            &lt;Text&gt; Button NOT Pushed
            &lt;/Text&gt;
}</code></pre><p>The UUID is set for this notification of the button given the spec in AN5289 (the notify characteristic). </p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.davidbramsay.com/content/images/2020/08/image-7.png" class="kg-image" alt srcset="https://blog.davidbramsay.com/content/images/size/w600/2020/08/image-7.png 600w, https://blog.davidbramsay.com/content/images/size/w817/2020/08/image-7.png 817w"><figcaption>We can see our UUID for the notify characteristic, and the expected values in the P2P spec.</figcaption></figure><p><strong>The above code will show 'Button Pushed' or 'Button NOT Pushed' in your React app, </strong>toggled by the actual button press!</p><h3 id="read-and-write-the-led-state-in-react">Read and Write the LED State in React</h3><p>Now let's connect and control the LED.   We can also see in our debugger that we have a characteristic with UUID '0000fe<strong>41</strong>-8e22-4541-9d4c-21edae82ed19' (as expected) that has <code>isReadable==True</code> and <code>isWritableWithoutResponse==True</code>.</p><p>We can see there are three options for writing a value to a characteristic without a response (<a href="https://github.com/Polidea/react-native-ble-plx/wiki/Characteristic-Writing">https://github.com/Polidea/react-native-ble-plx/wiki/Characteristic-Writing</a>)– each one uses a BLE object at a different level of abstraction, and requires the missing information to be passed (the BleManager object, the device object, or the characteristic itself).  We'll just register our characteristic and use <code>characteristic.writeWithoutResponse(valueBase64)</code>.</p><p>First we modify our constructor so our state has <code>writeCharacteristic</code> and <code>ledState</code> fields:</p><pre><code>this.state = {info: "", values: {}, writeCharacteristic: null, ledState: false}
</code></pre><p>Now we'll modify our loop that runs through the device characteristics to append the write characteristic we expect:</p><pre><code>if (c[i].isWritableWithoutResponse){
    console.log('saving characteristic that is writable!!')
    this.setState({writeCharacteristic: c[i]})
}</code></pre><p>Next, we'll make a write function that toggles the LED given our <code>ledState</code> (we'll also need a function to convert a human-readable hex string to base64, adapted from <a href="https://stackoverflow.com/questions/23190056/hex-to-base64-converter-for-javascript">https://stackoverflow.com/questions/23190056/hex-to-base64-converter-for-javascript</a>):</p><pre><code>  
  hexToBase64(str) {
    return btoa(str.match(/\w{2}/g).map(function(a) {
        return String.fromCharCode(parseInt(a, 16));
    }).join(""));
  }

  toggleLED(){
    console.log('toggle LED function called!')
    var newLedVal = !this.state.ledState
    if  (this.state.writeCharacteristic){
        if (newLedVal){
        this.state.writeCharacteristic.writeWithoutResponse(this.hexToBase64('0101'))
        console.log('wrote ' + this.hexToBase64('0101'))
        }
        else {
        this.state.writeCharacteristic.writeWithoutResponse(this.hexToBase64('0100'))
        console.log('wrote ' + this.hexToBase64('0100'))
        }
        this.setState({ledState: newLedVal})
    }
  }

</code></pre><p>Finally we need a button to call this toggleLED function in the renderer:</p><pre><code>&lt;TouchableHighlight style={{borderColor: this.state.ledState ? 'green' : 'red', borderWidth: 4, borderRadius: 10, height:30, width:100, justifyContent:'center', alignItems:'center'}} onPress={this.toggleLED.bind(this)}&gt;
     &lt;Text&gt;LED IS {this.state.ledState ? 'on' : 'off'}&lt;/Text&gt;
&lt;/TouchableHighlight&gt;
</code></pre><p>And that's all there is to it! We now have a React app with a small button that tracks the LED state and toggles it on and off.</p><h3 id="send-more-information">Send More Information</h3><p>We're gonna bump up our data from 8 bits; for me, the target is really timestamped 16 bit values (RTC timestamps are 64 bits, so a timestamped value needs 10 or more bytes).  The default configuration should work up to 20 bytes; if we need more, we need to edit the MTU size ( <code>CFG_BLE_MAX_ATT_MTU</code>  in app_conf and potentially renegotiate the connection using <code>aci_gatt_exchange_config</code> and <code>hci_le_set_data_length</code>), but for our purposes we'll stick within the 20 byte size.  Here we'll send 16 bytes.</p><p>First, let's take a look at the existing code for sending our current notification.  It exists across a few different files:</p><pre><code> //from p2p_server_app.c
 P2PS_Send_Notification(void) {
 
   P2PS_STM_App_Update_Char(
     P2P_NOTIFY_CHAR_UUID, 
     (uint8_t *)&amp;P2P_Server_App_Context.ButtonControl
   );
 
 }
  
//from p2p_stm.c  CALLS:
  
P2PS_STM_App_Update_Char(uint16_t UUID, uint8_t *pPayload)  {
  
  aci_gatt_update_char_value(
    aPeerToPeerContext.PeerToPeerSvcHdle,
    aPeerToPeerContext.P2PNotifyServerToClientCharHdle,
    0, /* charValOffset */
    2, /* charValueLen */
    (uint8_t *)  pPayload
  );
}
 
 //from ble_gatt_aci.c  CALLS:
 
 aci_gatt_update_char_value(uint16_t Service_Handle,
                                      uint16_t Char_Handle,
                                      uint8_t Val_Offset,
                                      uint8_t Char_Value_Length,
                                      uint8_t Char_Value[])
                                      
                                      </code></pre><p>The last of these is updating a characteristic that is <strong>already registered with a certain packet size:</strong></p><pre><code>//from p2p_stm.c (the third line below is the packet byte size)

aci_gatt_add_char(aPeerToPeerContext.PeerToPeerSvcHdle,
                      UUID_TYPE_128, &amp;uuid16,
                      2,
                      CHAR_PROP_NOTIFY,
                      ATTR_PERMISSION_NONE,
                      GATT_NOTIFY_ATTRIBUTE_WRITE, /* gattEvtMask */
                      10, /* encryKeySize */
                      1, /* isVariable: 1 */
                      &amp;(aPeerToPeerContext.P2PNotifyServerToClientCharHdle));
 
 
//from ble_gatt_aci.c  CALLS:

tBleStatus aci_gatt_add_char(uint16_t Service_Handle,
                             uint8_t Char_UUID_Type,
                             Char_UUID_t *Char_UUID,
                             uint16_t Char_Value_Length,
                             uint8_t Char_Properties,
                             uint8_t Security_Permissions,
                             uint8_t GATT_Evt_Mask,
                             uint8_t Enc_Key_Size,
                             uint8_t Is_Variable,
                             uint16_t *Char_Handle)</code></pre><hr><p>First let's make a version of <code>P2P_STM_App_Update_Char</code> that accepts something other than a 2 byte char.  In <code>p2p_stm.c</code>:</p><pre><code>tBleStatus P2PS_STM_App_Update_Int16(uint16_t UUID, uint16_t *pPayload, uint8_t num_words) 
{
  tBleStatus result = BLE_STATUS_INVALID_PARAMS;
  switch(UUID)
  {
    case P2P_NOTIFY_CHAR_UUID:
      
     result = aci_gatt_update_char_value(aPeerToPeerContext.PeerToPeerSvcHdle,
                             aPeerToPeerContext.P2PNotifyServerToClientCharHdle,
                              0, /* charValOffset */
                             2*num_words, /* charValueLen */
                             (uint8_t *)  pPayload);
    
      break;

    default:
      break;
  }

  return result;
}</code></pre><p>And we need to declare our new function in the appropriate header:</p><pre><code>//STM32CubeWB/Middlewares/ST/STM32_WPAN/ble/svc/Inc/p2p_stm.h

...
//add this line
tBleStatus P2PS_STM_App_Update_Int16(uint16_t UUID,  uint16_t *pPayload, uint8_t num_words);
</code></pre><p>And of course we need to edit the original declaration of the characteristic so it's set up to receive 16 bytes in <code>p2p_stm.c</code>.  We simply have to change the <code>Char_Value_Length</code> from <strong>2</strong> to <strong>16</strong>.</p><p>Now let's edit our button push in <code>P2PS_Send_Notification</code> from <code>p2p_server_app.c</code> to try and send a larger packet:</p><pre><code>...

const uint16_t test_data[8] = {0x0123, 0x4567, 0x89AB, 0xCDEF, 0x0A0A, 0x1B1B, 0x2C2C, 0x3D3D};
  
if(P2P_Server_App_Context.Notification_Status){ 
  ...
  //comment out our old command to send ButtonControl Byte
  //P2PS_STM_App_Update_Char(P2P_NOTIFY_CHAR_UUID, (uint8_t *)&amp;P2P_Server_App_Context.ButtonControl);
  
  //call our new function
  P2PS_STM_App_Update_Int16(P2P_NOTIFY_CHAR_UUID, (uint16_t *)test_data, 8);
       
       </code></pre><p>With the above code, we'll see in our React App that pushing the button sends all the data; however, the byte order within each word is reversed ( <code>0x2301, 0x6745, 0xAB89, 0xEFCD, ...</code>), because multi-byte values are stored little-endian on the STM32.  If we use the same function and send an array of bytes instead of words, the correct order is preserved:</p><pre><code>//same test data as above, but as bytes
const uint8_t test_data[16] = {0x01, 0x23, 0x45, 0x67, 0x89, 0xAB, 0xCD, 0xEF, 0x0A, 0x0A, 0x1B, 0x1B, 0x2C, 0x2C, 0x3D, 0x3D};

//cast it to (uint16_t *) so it will work with our previous function
P2PS_STM_App_Update_Int16(P2P_NOTIFY_CHAR_UUID, (uint16_t *)&amp;test_data, 8);
</code></pre><p>We'll have to be careful when working with words on the STM32 so that the byte order is not reversed.</p><p>We also notice that we can send <em>less</em> data than the maximum specified packet size using this technique, without modification:</p><pre><code>//same test data as above, but half as much
const uint8_t test_data[8] = {0x01, 0x23, 0x45, 0x67, 0x89, 0xAB, 0xCD, 0xEF};

//cut the passed number of words in half, though the characteristic is the same and supports the full 16 bytes 
P2PS_STM_App_Update_Int16(P2P_NOTIFY_CHAR_UUID, (uint16_t *)&amp;test_data, 4);
</code></pre><p><strong>So finally, </strong>we'll finish our edits here in a way that we can easily pass byte or word arrays:</p><pre><code>//IN p2p_stm.c

//SET aci_gatt_add_char length to *20*; that way we can send up to 20 bytes in each message

//ADD the following two functions:

tBleStatus P2PS_STM_App_Update_Int16(uint16_t UUID, uint16_t *pPayload, uint8_t num_words)
{

  uint16_t byte_reversed[num_words];

  for (uint8_t i = 0; i &lt; num_words; i++){
	byte_reversed[i] = (pPayload[i] &amp; 0xFF00) &gt;&gt; 8 | (pPayload[i] &amp; 0x00FF) &lt;&lt; 8;
  }

  tBleStatus result = BLE_STATUS_INVALID_PARAMS;
  switch(UUID)
  {
    case P2P_NOTIFY_CHAR_UUID:

     result = aci_gatt_update_char_value(aPeerToPeerContext.PeerToPeerSvcHdle,
                             aPeerToPeerContext.P2PNotifyServerToClientCharHdle,
                             0, /* charValOffset */
                             2*num_words, /* charValueLen */
                             (uint8_t *)  byte_reversed);

      break;

    default:
      break;
  }

  return result;
}

tBleStatus P2PS_STM_App_Update_Int8(uint16_t UUID, uint8_t *pPayload, uint8_t num_bytes)
{
  tBleStatus result = BLE_STATUS_INVALID_PARAMS;
  switch(UUID)
  {
    case P2P_NOTIFY_CHAR_UUID:

     result = aci_gatt_update_char_value(aPeerToPeerContext.PeerToPeerSvcHdle,
                             aPeerToPeerContext.P2PNotifyServerToClientCharHdle,
                             0, /* charValOffset */
                             num_bytes, /* charValueLen */
                             (uint8_t *)  pPayload);

      break;

    default:
      break;
  }

  return result;
}


//IN p2p_stm.h

//ADD the function handles:

tBleStatus P2PS_STM_App_Update_Int16(uint16_t UUID,  uint16_t *pPayload, uint8_t num_words);
tBleStatus P2PS_STM_App_Update_Int8(uint16_t UUID, uint8_t *pPayload, uint8_t num_bytes);


//now we can call this on byte arrays and word arrays up to 20 bytes long, and the byte order will be preserved correctly when we send.</code></pre><h3 id="sending-more-info-to-the-stm">Sending more info to the STM</h3><p>Within the <code>p2p_server_app.c</code> we find that all of the data we're sending back and forth is stored in <code>P2P_Server_App_Context</code>, which is a nice abstraction to help us keep data organized.</p><p>First let's add a <code>uint64_t OTATimestamp</code>, a <code>uint8_t OTA12HrFormat</code>, and a <code>uint8_t OTADaylightSavings</code> to the <code>P2P_Server_App_Context</code> struct declaration in <code>p2p_server_app.c</code>.  We're going to send a timestamp value over BLE from our React App.</p><p>Now we need to edit the payload handler to deal with our new timestamp value.  In the same file, we see <code>P2PS_STM_App_Notification</code> is the function that deals with incoming data.  We'll add one <code>memcpy</code> to push our timestamp value to the <code>OTATimestamp</code> field of the <code>P2P_Server_App_Context</code> struct.</p><pre><code>#if(P2P_SERVER1 != 0)  
      if(pNotification-&gt;DataTransfered.pPayload[0] == 0x01){ /* end device 1 selected - may be necessary as LB Routeur informs all connection */
        
    	memcpy(&amp;P2P_Server_App_Context.OTATimestamp, &amp;(pNotification-&gt;DataTransfered.pPayload[2]), 8);
        P2P_Server_App_Context.OTA12HrFormat = pNotification-&gt;DataTransfered.pPayload[10];
    	P2P_Server_App_Context.OTADaylightSavings = pNotification-&gt;DataTransfered.pPayload[11];


    	
    	if(pNotification-&gt;DataTransfered.pPayload[1] == 0x01)
        {
          BSP_LED_On(LED_BLUE);
          APP_DBG_MSG("-- P2P APPLICATION SERVER 1 : LED1 ON\n"); 
          APP_DBG_MSG(" \n\r");
          P2P_Server_App_Context.LedControl.Led1=0x01; /* LED1 ON */
        }
        if(pNotification-&gt;DataTransfered.pPayload[1] == 0x00)
        {
          BSP_LED_Off(LED_BLUE);
          APP_DBG_MSG("-- P2P APPLICATION SERVER 1 : LED1 OFF\n"); 
          APP_DBG_MSG(" \n\r");
          P2P_Server_App_Context.LedControl.Led1=0x00; /* LED1 OFF */
        }
      }
#endif</code></pre><p>We should alter our init code for this struct later on in the file as well to start the default values at 0x00:</p><pre><code>void P2PS_APP_LED_BUTTON_context_Init(void){
  
  BSP_LED_Off(LED_BLUE);
  
  #if(P2P_SERVER1 != 0)
  P2P_Server_App_Context.LedControl.Device_Led_Selection=0x01; /* Device1 */
  P2P_Server_App_Context.LedControl.Led1=0x00; /* led OFF */
  P2P_Server_App_Context.OTATimestamp=0x0000000000000000;
  P2P_Server_App_Context.OTA12HrFormat=0x00;
  P2P_Server_App_Context.OTADaylightSavings=0x00;
  P2P_Server_App_Context.ButtonControl.Device_Button_Selection=0x01;/* Device1 */
  P2P_Server_App_Context.ButtonControl.ButtonStatus=0x00;
#endif</code></pre><p>We'll also send that data back on a button press: where we had modified our <code>P2PS_Send_Notification</code> function to send random test data, let's edit it to send the 64-bit timestamp value instead:</p><pre><code>void P2PS_Send_Notification(void)
{
 
  if(P2P_Server_App_Context.ButtonControl.ButtonStatus == 0x00){
    P2P_Server_App_Context.ButtonControl.ButtonStatus=0x01;
  } else {
    P2P_Server_App_Context.ButtonControl.ButtonStatus=0x00;
  }
  
  if(P2P_Server_App_Context.Notification_Status){ 
    APP_DBG_MSG("-- P2P APPLICATION SERVER  : INFORM CLIENT BUTTON 1 PUSHED \n ");
    APP_DBG_MSG(" \n\r");
    
    P2PS_STM_App_Update_Int8(P2P_NOTIFY_CHAR_UUID, (uint8_t *)&amp;P2P_Server_App_Context.OTATimestamp, 8);

    
   } else {
    APP_DBG_MSG("-- P2P APPLICATION SERVER : CAN'T INFORM CLIENT -  NOTIFICATION DISABLED\n "); 
   }

  return;
}</code></pre><p>Just like when we had the STM write more than 2 bytes, we need to change the max size when we register our characteristic using <code>aci_gatt_add_char</code>, as well as in our <code>aci_gatt_update_char_value</code> call in <code>p2p_stm.c</code>.  Again we make it up to 20 bytes (the most that fits in the default 23-byte ATT MTU after its 3-byte header), and note we have to update it in <strong>two places</strong>:</p><pre><code> /**
     *  Add LED Characteristic
     */
    COPY_P2P_WRITE_CHAR_UUID(uuid16.Char_UUID_128);
    aci_gatt_add_char(aPeerToPeerContext.PeerToPeerSvcHdle,
                      UUID_TYPE_128, &amp;uuid16,
                      20,                                   
                      CHAR_PROP_WRITE_WITHOUT_RESP|CHAR_PROP_READ,
                      ATTR_PERMISSION_NONE,
                      GATT_NOTIFY_ATTRIBUTE_WRITE, /* gattEvtMask */
                      10, /* encryKeySize */
                      1, /* isVariable */
                      &amp;(aPeerToPeerContext.P2PWriteClientToServerCharHdle));
...



tBleStatus P2PS_STM_App_Update_Char(uint16_t UUID, uint8_t *pPayload) 
{
  tBleStatus result = BLE_STATUS_INVALID_PARAMS;
  switch(UUID)
  {
    case P2P_NOTIFY_CHAR_UUID:
      
     result = aci_gatt_update_char_value(aPeerToPeerContext.PeerToPeerSvcHdle,
                             aPeerToPeerContext.P2PNotifyServerToClientCharHdle,
                              0, /* charValOffset */
                             20, /* charValueLen */
                             (uint8_t *)  pPayload);
    
      break;

    default:
      break;
  }

  return result;
}/* end P2PS_STM_App_Update_Char() */
</code></pre><p><strong>NOTE: </strong>the GATT will cache aspects of the service, including the packet size.  There is a Service Changed Characteristic for updating this on the fly, but in our case it makes sense to simply be aware that these updates may require a restart of both devices before they propagate.</p><p><strong>EXTRA NOTE: IF YOU ARE HAVING ISSUES WITH HOT-RELOADING OF YOUR JAVASCRIPT CODE, GO TO THE NETWORK TAB IN THE BROWSER INSPECTION TOOLS AND MAKE SURE 'DISABLE CACHE' IS CHECKED!!!!</strong></p><p>Let's grab a javascript date and put it in BCD format (in which a month like December, month 12, is coded as 0x12– we use hex digits to represent the decimal values).  Here's some code I wrote to generate a timestamp object we can then work with on the STM32:</p><pre><code>//at the very top of the javascript file:

//the standard (non-DST) timezone offset is the larger of the January and July
//offsets; if the current offset is smaller than that, DST is in effect
Date.prototype.stdTimezoneOffset = function () {
  var jan = new Date(this.getFullYear(), 0, 1);
  var jul = new Date(this.getFullYear(), 6, 1);
  return Math.max(jan.getTimezoneOffset(), jul.getTimezoneOffset());
}

Date.prototype.isDstObserved = function () {
    return this.getTimezoneOffset() &lt; this.stdTimezoneOffset();
}
  
  
  
//as a member function:

getDateInBCD(format12 = true) {
    //returns DAY (1byte) MONTH (1byte) DATE (1byte) YEAR (1byte) HR (1byte)
    //MIN (1byte) SEC (1byte) AMorPM (1byte, 00=AM) 12HRFORMAT (1byte, 00=24HR)
    //DAYLIGHTSAVINGS (1byte, 00=None 01=Add1hr)
    //all as a hex string, in the order the STM expects

    console.log('Constructing Date...')
    //BCD means we use 0x01-0x12, skipping 0x0A-0x0F (hex *reads* right)
    var dayNum = new Date().getDay();                       //JS: 0=Sun..6=Sat
    var day = ("0" + (dayNum == 0 ? 7 : dayNum)).slice(-2); //uint8_t 0x01-0x07, Mon-Sun
    var month = ("0" + (new Date().getMonth() + 1)).slice(-2);  //uint8_t 0x01-0x12
    var date = ("0" + new Date().getDate()).slice(-2);        //uint8_t 0x01-0x31
    var year = String(new Date().getFullYear()).slice(-2);    //uint8_t 0x00-0x99 (e.g. 0x20 for 2020)

    var hour = new Date().getHours();    //uint8_t Hours 0x00-0x23 if RTC_HourFormat_24, 0x01-0x12 if RTC_HourFormat_12
    var min  = ("0" + new Date().getMinutes()).slice(-2); //uint8_t Min 0x00 to 0x59
    var sec  = ("0" + new Date().getSeconds()).slice(-2); //uint8_t Sec 0x00 to 0x59

    //STM32 HAL TimeFormat uses 0x00 for FORMAT12_AM, 0x40 for FORMAT12_PM;
    //here we send 0x00 for AM / 0x01 for PM and translate on the STM side
    var formatAM = hour &gt;= 12 ? 1 : 0;   //1 if PM
    if (format12) { hour = hour % 12; hour = hour ? hour : 12;}
    hour = ("0" + hour).slice(-2);

    //uint32_t DayLightSavings; use RTC_DAYLIGHTSAVINGS_SUB1H, RTC_DAYLIGHTSAVINGS_ADD1H, or RTC_DAYLIGHTSAVING_NONE
    var daylight = new Date().isDstObserved() ? 1 : 0; // if 1, ADD1H; else NONE

    return day + month + date + year + hour + min + sec + '0' + formatAM + '0' + (format12 ? 1 : 0) + '0' + daylight;
  }
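
  //worked example (hypothetical moment, not from the original post):
  //1:05:09 PM on Monday, July 6 2020, DST in effect, 12hr format gives
  //  day='01' month='07' date='06' year='20' hour='01' min='05' sec='09'
  //  plus '01' (PM), '01' (12hr format), '01' (DST)
  //  =&gt; "01070620010509010101"  (10 BCD bytes as a 20-char hex string)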
  
  
//now we edit our button toggle to send this info as well:
  
...
if (this.state.writeCharacteristic){
  if (newLedVal){
    var timestamp_string = this.getDateInBCD()
    this.state.writeCharacteristic.writeWithoutResponse(this.hexToBase64('0101' + timestamp_string))
    console.log('wrote 0x0101' + timestamp_string + ' == ' + this.hexToBase64('0101' + timestamp_string))
  }
...
</code></pre><p>This sends 10 bytes – the first 8 fit into our <code>OTATimestamp</code> and include DAY:MONTH:DATE:YEAR:HOUR:MIN:SEC:AMPM.  The last 2 bytes are format bytes, which indicate whether the time is in 24 or 12 hour format and whether daylight savings is currently in effect; they are saved as two <code>uint8_t</code> vals.</p><p>All of this should give us a round trip where we can easily see our timestamp, generated in our app, sent to and stored on the STM32, and then sent back!</p><h3 id="a-note-on-endianness">A Note on Endianness</h3><p>Both systems are Little Endian, but BLE communication works in a Big Endian fashion.  To fix this, we need to reverse the bytes when sending/receiving, which I've chosen to do only on the javascript side of things.  I've made the following changes to ensure my packets follow the right order:</p><pre><code>reverseBytes(str){
  //bytes are 2 hex chars long
  //both systems are Little Endian; the transport protocol is Big Endian,
  //so multi-byte data always gets flipped in transit

  var s = str.replace(/^(.(..)*)$/, "0$1"); // add a leading zero if odd length
  var a = s.match(/../g);                   // split the string into byte pairs
  a.reverse();                              // reverse the byte order
  return a.join("");                        // join the pairs back together
}
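
//e.g., as a quick sanity check in the debugger console (hypothetical input):
//  reverseBytes('01070620010509010101')  returns  '01010109050120060701'
//i.e., the same ten bytes, last byte first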

...

  updateValue(key, value) {
    var hexval = this.reverseBytes(this.base64ToHex(value));
    console.log('update ' + key + ' : ' + hexval)
    this.setState({values: {...this.state.values, [key]: hexval}})
  }

...

var timestamp_string = this.getDateInBCD()
this.state.writeCharacteristic.writeWithoutResponse(this.hexToBase64(this.reverseBytes('0101' + timestamp_string)))
console.log('wrote 0x' + this.reverseBytes('0101' + timestamp_string) + ' == ' + this.hexToBase64(this.reverseBytes('0101' + timestamp_string)))
</code></pre><p>Stay tuned for more posts exploring the link between STM32 and React-Native over BLE!</p><p>See <a href="https://github.com/dramsay9/react-stm32-bluetooth-example">https://github.com/dramsay9/react-stm32-bluetooth-example</a> for the working code.</p><p></p><p><strong>Quick Reference</strong></p><p>For React, open the workspace file.  Run on the phone, then open <a href="http://localhost:8081/debugger-ui/">http://localhost:8081/debugger-ui/</a>.  Make sure your computer is connected to the same wifi network as your phone.</p><p></p><p>'Device not found on target' means hold down the reset button on the nucleo, and let go once the console reaches 'waiting for debugger connection'.</p><p></p><p>'Could not locate device support files' means you need to download the right files from <a href="https://github.com/iGhibli/iOS-DeviceSupport/tree/master/DeviceSupport">https://github.com/iGhibli/iOS-DeviceSupport/tree/master/DeviceSupport</a> and install them:</p><p><em>Go to <code>Applications -&gt; Xcode</code>. Right click and open Show Package Contents. Then, paste to <code>Contents -&gt; Developer -&gt; Platforms -&gt; iPhoneOS.platform -&gt; DeviceSupport</code> and restart <code>Xcode</code>.  </em>(from <a href="https://stackoverflow.com/questions/39655178/xcode-could-not-locate-device-support-files">https://stackoverflow.com/questions/39655178/xcode-could-not-locate-device-support-files</a>)</p><p></p><p>Background capability requires <code>restoreStateIdentifier</code> and <code>restoreStateFunction</code> passed to <code>BLEManager</code>.</p>]]></content:encoded></item><item><title><![CDATA[Why this Blog?]]></title><description><![CDATA[<p></p><p>This blog is my attempt to reflect and capture thoughts about how and why our technology seems to be so out of sync with what is actually good for our well-being and what we need to do to realign these incentives.  
I'm very interested in how attention, cognition, and behavior</p>]]></description><link>https://blog.davidbramsay.com/why/</link><guid isPermaLink="false">5ee98c9bdecc651200090097</guid><dc:creator><![CDATA[David Ramsay]]></dc:creator><pubDate>Wed, 17 Jun 2020 05:11:25 GMT</pubDate><content:encoded><![CDATA[<p></p><p>This blog is my attempt to reflect and capture thoughts about how and why our technology seems to be so out of sync with what is actually good for our well-being and what we need to do to realign these incentives.  I'm very interested in how attention, cognition, and behavior underlie our sense of happiness and fulfillment, and the mediating role of technology in that story.</p><p>So that explains 'tech' and 'cognition', but why 'epistemology'?  I've found that during an honest review of the psychology literature, it doesn't take long to run into serious difficulties in discerning truth.  In many ways we live in a post-truth society, and the perverse incentives of the academic world have sadly made that no less true in the softer branches of science.  </p><p>There is an ongoing replication crisis in social psychology, and the way we interact is changing at such a rapid pace that traditional research methods are being left in the dust.  <a href="https://www.theatlantic.com/science/archive/2018/08/scientists-can-collectively-sense-which-psychology-studies-are-weak/568630/">A famous Nature paper</a> showed that the best way to tell whether a study in a major psychology journal replicates is with prediction markets– in other words, our common sense is currently better at revealing psychological truths than the scientific process and peer-reviewed publication.  At the core of the replication problem is a profound crisis of statistical literacy and statistical technique.  </p><p>To say anything true about our cognition and psychology requires a deep dive into the foundations of statistics and a healthy skepticism of modern publications.  Who can we trust?  
How do we evaluate them?  What are the worthwhile theories?  How can we identify quality work and reliable data without recreating the experiments ourselves?  How much should we trust our intuition, and what builds our intuition in the first place? </p><p>It goes one level further.  An epistemological shift is required to re-evaluate the scientific literature, but it's also necessary <em>to model</em> how we think and behave.  The top computational cognitive scientists– in designing systems that replicate human cognition most accurately– are championing stronger causal inductive biases than the typical statistical techniques of deep learning provide.  </p><p>Epistemology is explicit in cognitive science, as one of its core pursuits is to model the way we form beliefs and acquire knowledge.  Researchers in this area naturally <em>also</em> advocate for those principles at higher levels of abstraction.</p><p>A compelling line of argument suggests that our statistical techniques should reflect our innate epistemological processes.  We have some innate ability to discern truth and model the world, and that innate kernel can be our only arbiter of truth; insofar as modern statistical practices don't reflect it, they are failing.  </p><p>To say it another way, our minds seem to work in a Bayesian way, so we should build Bayesian models of how humans reason.  When we compare several models against each other, we should <em>also</em> use Bayesian techniques to reason about which one is best.  The optimal process for capturing truth in science is the one that most accurately mirrors our innate truth-seeking abilities.  It's turtles all the way down.</p><p>This is one of the most interesting and important discussions of the modern era– the bedrock of the scientific method and of model building is actively and fiercely debated.  We need to re-examine how we think about statistics and how we use its techniques to describe and predict our world.  
It turns out these questions are more relevant now than ever, and we're on the brink of a major paradigm shift.</p><p>Before we can build cognition-supporting technology, we need to understand cognition.  Before we can do that, we need tools to fairly evaluate the scientific claims of social psychology and tools to model human cognition.</p><p>Ultimately, I hope that with a thorough understanding of statistics and inquiry, we can build a foundation of trustworthy scholarship that will chart the way forward.  With it, we can push the future of technology towards tools that make a real, positive impact on our behavior, attention, and feelings.    </p>]]></content:encoded></item></channel></rss>