• Using an E-Ink Monitor

    April 2025 Edit: It’s a year later and I have some more tips for effectively using this monitor – Part 2.



    I recently bought an e-ink monitor: Dasung’s 2019 13.3” Paperlike HD-F. This post covers my knee-jerk reactions after having used it for a few days. There are other more in-depth reviews of it online.


    DASUNG monitor sleep screen


    DASUNG monitor indoors

    • I typically spend about six to twelve hours a day using my laptop, a MacBook Pro, mostly for coding, writing, and reading
    • I got this e-ink monitor to help with eye strain, specifically when working outside in direct sunlight
    • This Reddit post inspired me to get the Dasung monitor specifically
    • Monitor visibility in sunlight is great
      • Fully legible, even with polarized sunglasses
      • Worked outside for a long time without squinting or eye strain
      • I haven’t used it with the screen facing the sun; there might be some glare because there’s a glass covering.
    • There’s a newer model than the one I have
    • I bought this older 2019 model used, but it was still pretty expensive ($600 USD)
    • Here’s some unofficial information about the various models
    • The reported resolution when I plugged it in was 1100x825@2x
    • Refresh rate is configurable with software, but the default is fine for my usage. No noticeable lag when typing. Scroll lag is cromulent. Cursor lag is bad. Watching videos is doable but I wouldn’t recommend it.
    • Ghosting is annoying (see image below)
      • There’s a button on the monitor to clear ghosting
    • Dasung makes PaperLikeClient software (see screenshot)
      • Apparently this lets you refresh the screen with a keyboard shortcut or on a timed trigger
      • But it didn’t work on my laptop (it didn’t recognize the connected monitor). I reached out to Dasung support but haven’t heard back yet.
      • My understanding is that there’s a tradeoff between refresh speed and ghosting
    • Visually, it’s really nothing like the laptop’s built-in display (for better or worse)
    • It has a front light (the “F” in “HD-F” stands for front light; not all the models have one), which is nice. AFAICT you can’t change the brightness level without the client software. There are two color temp options accessible from the hardware buttons.
    • Usable for programming
      • I use an E-ink theme for VSCode
      • Obviously not great if you do frontend work
    • Physically kinda bulky
      • I’ve been using a chip clip to hold the monitor to my laptop screen and prevent it from sliding
      • Sometimes its heft makes the screen tip over; hopefully that isn’t damaging the laptop hinge
    • I plug the USB and HDMI cables in via a USB-C adapter
    • I’m guessing battery life is improved since I keep the primary display off when using the monitor
    • Blocks MBP camera :(
    • Tweaking the computer display settings helps with legibility
      • I manage this with a Shortcut to make connecting/disconnecting less painful
        • Turn color filters on (grayscale 100%)
        • Turn increase contrast on
        • Turn reduce motion on
        • Set (built-in display) brightness to 0%
    • Hard to strike a balance between text legibility and fine detail for icons, colors, etc
    • Seems like I can focus for longer periods when using it?
      • Maybe it’s a placebo
      • Or the novelty of using it for work is fun
      • Or the lower resolution forces me to concentrate on less information at once
      • Or the fact that images and videos look crappier so there are fewer things to steal my attention


    DASUNG monitor ghosting

    DASUNG PaperLikeClient UI

    These past few days using the Dasung have been enjoyable. Time will tell whether this monitor becomes a daily driver for me, or an odd peripheral I dig out when I need to like invert a binary tree at the beach.

    I look forward to e-ink technology developing. Less ghosting, faster refresh, higher resolution, full color support, lower price point, thinner/lighter profile, and better software integration would make this device really incredible. Today it almost feels like a prototype, but even with its current limitations, I can already envision how it might change the way I use my computer.

    An integrated laptop experience would be interesting. I could see using an e-ink laptop for work, and maybe even as a personal device if the video-watching experience is compelling enough. The assumed battery life increase is an obvious win.

    There’s something deeply appealing, to me at least, about the prospect of being able to use my computer comfortably, regardless of ambient light. Being able to see the screen at the park, the shore, the stoop, the deck, or even just by a sunny window makes the act of using a computer feel more human.

    It might be a good thing that sunshine and being on my laptop have (so far) been mutually exclusive – why sully a nice day with The Algorithm? Maybe AI and/or VR will make this form factor obsolete one day. But it’s nice to picture a timeline where one can author a blog post from a lawn chair without squinting.


    DASUNG monitor outside


  • Rendering ChatGPT HTML Output Inline

    TL;DR: With a little bit of glue, you can render and evaluate ChatGPT’s raw HTML and JavaScript output directly in the chat interface!


    While playing with ChatGPT, I found myself wanting to see its HTML output rendered rather than just reading it as code.

    ChatGPT can generate HTML, and the chat interface itself runs in a web browser. Since the browser can already display HTML, why not use it to render ChatGPT’s output directly?

    Indeed, with just a bit of glue code we can see and interact with the output!

    Here is some JS that, when evaluated, will render ChatGPT’s output (and yeah, ChatGPT helped me write this). Note I only tested this in Firefox.

    function replaceLastCodeBlock() {
      // Grab the most recent <code> block in the chat transcript
      var codeBlocks = document.getElementsByTagName("code");
      var lastCodeBlock = codeBlocks[codeBlocks.length - 1];
      if (!lastCodeBlock) {
        return;
      }
      // Parse the block's text as HTML (scripts and all) and swap it
      // in place of the code block
      var htmlContent = lastCodeBlock.innerText;
      var fragment = document.createRange().createContextualFragment(htmlContent);
      lastCodeBlock.parentNode.replaceChild(fragment, lastCodeBlock);

      // Strip ChatGPT's dark code-block background so the rendered output
      // is legible. getElementsByClassName returns a *live* collection that
      // shrinks as the class is removed, so always remove from the front.
      var elements = document.getElementsByClassName("bg-black");
      while (elements.length > 0) {
        elements[0].classList.remove("bg-black");
      }
    }
    replaceLastCodeBlock();
    

    Disclaimer: It’s probably not super secure to haphazardly evaluate code produced by a machine learning model; use at your own risk.


    Let’s see some examples…


    Here’s a simple showcase. The script above makes the browser render ChatGPT’s output.

    basic HTML rendering

    CSS and JavaScript are evaluated.

    displaying CSS and evaluating JS

    ChatGPT can produce SVG code, which also can be rendered.

    SVG beautiful

    Obligatory recursive ChatGPT in an iframe.

    SVG

    You can make the script feed input back into ChatGPT.

    SVG

    And data can be fetched from the internet.

    SVG

    Combining data retrieval and feedback, you can jury-rig more advanced prompting with context. This is pretty brittle, and notice it erroneously outputs its knowledge cutoff year.

    SVG

    Animations can be rendered

    SVG

    A fun Game

    SVG

    ChatGPT’s ability to generate correct code is impressive in its own right. Being able to easily see and interact with the evaluated artifacts of that textual output makes the tool more fun. Hopefully this little demo is thought-provoking. Enjoy!


  • Responding to recruiter emails with GPT-3

    If you’re just interested in the code, here it is.

    Like many software engineers, each week I receive multiple emails from recruiters.

    I’m grateful to work in a field with such opportunities, and I know that receiving a lot of offers to interview is a good problem to have. But, practically, most of the time I’m not looking for a new job, so handling all these emails is a recurring administrative task.

    Here’s an example thread that I neglected to respond to:

    screenshot of emails from a recruiter

    I do try to respond to all recruiter emails with a short message that pretty much always follows a format like:

    Hi <recruiter name>,

    Thanks for reaching out! I’m not interested at this time, but I’ll keep <your company> in mind.

    - Matt

    There are a few reasons that I respond to these emails (rather than merely ignore them):

    1. It’s polite
    2. If I don’t respond, the recruiter will often send multiple follow-up emails
    3. Maintaining a cordial relationship with the recruiters is in my best interest for future job searches

    I use some rough email filtering rules to funnel recruiter emails to an email folder. Then, when I have time, I go through the list of unreads and send my little response.

    It would be ideal if I could automate sending these responses. Assuming I get four such emails per week and that it takes two minutes to read and respond to each one, automating this would save me about seven hours of administrative work per year.
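    To sanity-check that estimate:

```python
# Back-of-the-envelope: time saved per year by automating the replies.
emails_per_week = 4
minutes_per_email = 2
weeks_per_year = 52

hours_saved_per_year = emails_per_week * minutes_per_email * weeks_per_year / 60
print(f"{hours_saved_per_year:.1f} hours/year")  # → 6.9 hours/year
```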

    A trivial approach would be to send a canned response. But a touch of personalization would aid in my goal of maintaining a good relationship with the recruiter.

    Extracting the name of the recruiter and their company from the email using a rule-based approach / trying to parse the text would be really tricky and error prone. Luckily, OpenAI’s GPT-3 language model is quite good at processing this email.

    Using the GPT-3 API, we can provide the recruiter’s email along with an example, and extract the required information. It can even format the output as JSON.

        import json
        import textwrap

        import openai

        def get_recruiter_name_and_company(email_text: str):
            """Uses OpenAI text models to automatically parse the recruiter's name
            and company from their email."""
        
            prompt = f"""
            Given an email from a recruiter, return the recruiter's first name and the recruiter's company's name formatted as valid JSON.
        
            Example: ***
            Email:
            '''
            Hi Matt! This is Steve Jobs with Apple Computer Company! I'm interested in having you join our team here.
            '''
        
            Response:
            {{"name": "Steve", "company": "Apple Computer Company"}}
            ***
        
            Email:
            '''
            {email_text}
            '''
        
            Response:
            """
        
            # don't make expensive OpenAI API calls unless operating in production
            if not IS_PROD:
                return json.loads('{"name": "Steve", "company": "Apple Computer Company"}')
        
            completion = openai.Completion.create(
                model="text-davinci-002",
                prompt=textwrap.dedent(prompt),
                max_tokens=20,
                temperature=0,
            )
        
            return json.loads(completion.choices[0].text)

    Here’s an example from the OpenAI Playground.

    screenshot of the OpenAI playground

    With the recruiter’s name and company in hand, responding is just a matter of interpolating those variables into the body of my standard response template:

    response = f"""\
    Hi {recruiter_name or ""},
    Thanks for reaching out! I'm not interested in new opportunities at this time, but I'll keep {recruiter_company or "your company"} in mind for the future.
    Thanks again,
    {SIGNATURE}
    """
    

    IMAP and SMTP are used to interface with the mailbox. The rest of the code can be found in this repo.
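    The mailbox glue isn’t shown here, but the standard library covers it. Below is a hypothetical sketch using imaplib and smtplib; the server hosts, the “Recruiters” folder name, and the helper names are placeholders of mine, not taken from the repo:

```python
import email
import imaplib
import smtplib
from email.mime.text import MIMEText


def plain_text_body(msg: email.message.Message) -> str:
    """Extract the text/plain body from a parsed email message."""
    if msg.is_multipart():
        for part in msg.walk():
            if part.get_content_type() == "text/plain":
                return part.get_payload(decode=True).decode(errors="replace")
        return ""
    return msg.get_payload(decode=True).decode(errors="replace")


def fetch_unread_recruiter_emails(user: str, password: str):
    """Yield (sender, subject, body) for unread mail in the recruiter folder."""
    imap = imaplib.IMAP4_SSL("imap.example.com")  # placeholder host
    imap.login(user, password)
    imap.select("Recruiters")  # the folder the filtering rules funnel into
    _, data = imap.search(None, "UNSEEN")
    for num in data[0].split():
        _, msg_data = imap.fetch(num, "(RFC822)")
        msg = email.message_from_bytes(msg_data[0][1])
        yield msg["From"], msg["Subject"], plain_text_body(msg)
    imap.logout()


def send_reply(user: str, password: str, to_addr: str, subject: str, body: str):
    """Send the templated response over SMTP."""
    reply = MIMEText(body)
    reply["To"] = to_addr
    reply["From"] = user
    reply["Subject"] = "Re: " + subject
    with smtplib.SMTP_SSL("smtp.example.com") as smtp:  # placeholder host
        smtp.login(user, password)
        smtp.send_message(reply)
```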

    This solution worked well for the handful of emails I tried it on. I’m planning to run this on a cron to save myself some time and automatically maintain recruiter relationships.


  • Casting YouTube to Chromecast on Macos (with VLC)

    I have a Chromecast. I use Firefox. The only extension for casting content from Firefox that I found is fx_cast, but I wasn’t able to get it to work properly :(

    This post describes a solution I hacked together to cast video without Chrome (and without an Android emulator). It relies on VLC to do the actual casting. VLC is capable of (a) opening video from a network stream, and (b) using a Chromecast as the video renderer.

    The script just copies the URL of the current Firefox tab and then invokes the VLC command-line interface, passing in the video URL. I invoke the script with Alfred when Firefox is on a YouTube video (I haven’t tested it with other streams).

    use scripting additions
    use framework "Foundation"
    
    on get_url_ff()
    	tell application "Firefox" to activate
    	
    	set thePasteboard to current application's NSPasteboard's generalPasteboard()
    	set theCount to thePasteboard's changeCount()
    	
    	tell application "System Events"
    		keystroke "l" using {command down}
    		delay 0.2
    		keystroke "c" using {command down}
    	end tell
    	-- hacky heuristic to help ensure the URL is copied
    	repeat 20 times
    		if thePasteboard's changeCount() is not theCount then exit repeat
    		delay 0.1
    	end repeat
    	
    	set the_url to the clipboard
    	
    	return the_url as text
    end get_url_ff
    
    on chromecast_tab_ff()
    	set the_url to get_url_ff()
    	-- how to get chromecast IP https://old.reddit.com/r/Chromecast/comments/8nu0d7/how_to_find_chromecasts_ip/dzyenff/
    	
    	do shell script "/Applications/VLC.app/Contents/MacOS/VLC " & quoted form of the_url & " --sout \"#chromecast\" --sout-chromecast-ip=192.168.0.198 --demux-filter=demux_chromecast"
    end chromecast_tab_ff
    
    chromecast_tab_ff()
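
    The AppleScript is mostly glue around VLC’s command line; if you already have the video URL, the same invocation can come from any language. A minimal Python sketch, using the VLC path and flags from the script above (the Chromecast IP is a placeholder — substitute your device’s address):

```python
import subprocess

VLC = "/Applications/VLC.app/Contents/MacOS/VLC"
CHROMECAST_IP = "192.168.0.198"  # placeholder: replace with your Chromecast's IP


def build_cast_command(url: str) -> list:
    """Build the VLC invocation that streams `url` to the Chromecast."""
    return [
        VLC,
        url,
        "--sout", "#chromecast",
        "--sout-chromecast-ip=" + CHROMECAST_IP,
        "--demux-filter=demux_chromecast",
    ]


def cast_url(url: str) -> None:
    # Blocks until VLC exits; run in the background if you want your shell back.
    subprocess.run(build_cast_command(url), check=True)
```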
    

  • OMSCS Retrospective

    At the end of 2021, I finished earning my master’s degree in computer science through Georgia Tech’s OMSCS program. This post is a look back on that experience. Previously, I wrote about my motivation for enrolling in OMSCS.

    In terms of time, it took me 4.5 years to complete the program. I was working full time during this period, so I only took one course per semester (except for Fall 2020, when I doubled up). I also didn’t take any classes during the summer semesters. Fitting school around my work schedule was doable. My normal routine was school work three weeknights and one weekend day. I probably averaged ~10 hours per week on coursework and studying, though the workload varied depending on the class. I was able to earn a 4.0, and I never felt like I was doing an unreasonable amount of work. The course workloads aggregated on OMSCentral seem relatively accurate, but personally I think I spent less time than what’s listed there.

    The program did sometimes put a strain on my social life, but from spring 2020 onward we were under COVID-19 restrictions anyway. It was often a drag to finish an entire day of work, only to then have to study for a test or implement a programming assignment. I know people complete their degree while caring for dependents, and it’s hard to imagine how they make it work. Now that I’m done, I’m glad to have some more free time back in my life.

    In direct financial terms, the entire degree cost almost $8k, some of which was covered by my employer. Whether the degree has paid (or will pay) for itself is unclear. I don’t intend, in my next job search, to target only jobs for which an M.S. is required. I believe I’m a stronger engineer for having completed the program, but any future success in my career probably won’t be directly attributable to it. My OMSCS specialization was Machine Learning, but I don’t intend to pivot my career to an ML focus. “Artificial Intelligence” has an aura of extreme hype, and I think my ML specialization has helped me regard AI with a more informed and critical perspective.

    I think the experience served the goals I’d set for it. Namely, it gave me a structured way to learn more about sub-fields of CS that I wasn’t normally exposed to in my daily work as a web software engineer. Probably most importantly, it strengthened my learning ability, which is of course hugely applicable. Fighting impostor syndrome is an ongoing battle, but I think being able to understand the course material also helped in that regard.

    My advice to OMSCSers…

    • Create a schedule and stick to it. I didn’t start doing this until about halfway through – till then I didn’t have a clear boundary between personal time and “school” time, and so I was under constant low-level stress that I should be doing school work.
    • Watch all the lecture material. If you don’t understand something, rewatch it.
    • Do all the homework and follow the schedule prescribed by each course.
    • Attend office hours or watch recordings afterward. There is a lot of elucidating conversation there, and TAs will often go into depth on issues that are immediately relevant to exams and homework.
    • Participate in Piazza and unofficial channels like Slack. Interacting with other students helps solidify understanding and it’s one of the benefits of a program like OMSCS over self-study.
    • As in all of life, don’t be afraid to ask questions.

    The rest of this post is a list of the courses I took and some brief notes about each one…

    Fall 2017: Intro to High-Performance Computing

    I took this course first because it was supposedly really challenging and really good. I wanted to see what I was getting myself into. It was indeed pretty hard! The programming assignments were in C and C++, with which I was rusty. I’d also been out of college for five years. The course material was about distributing massive workloads on supercomputers using tools like MPI. The recorded lectures were engaging and entertaining and the lab assignments were nontrivial. In retrospect, it was a rewarding and fun class despite being challenging. I’m glad I took it first. I probably spent the most time on this course.

    Spring 2018: Machine Learning for Trading

    This class was a broad introduction to statistical methods like regression, Q-Learning, and KNN, and to financial concepts like market mechanics, valuing companies, and technical analysis. The course was much easier than HPC. The lectures were entertaining. It turned out to be a good primer for other concepts that are covered throughout the Machine Learning specialization, and I’m glad I took it before ML. And practically, it was a nice introduction to working with standard Python tools like NumPy, Pandas, etc. This course was also where I first learned about options.

    Fall 2018: Computer Vision

    This class covered a lot of material, including: linear image processing, Hough transforms, feature detection, optical flow, camera calibration, and tracking. I was surprised how powerful classical computer vision algorithms could be. CV was my first introduction to the Kalman filter, which would crop up in other courses as well. My final project for the course was on augmented reality: projecting an object into a 3D video. It was cool to see how this technology works under the hood. CV was one of my favorite classes.

    Spring 2019: Machine Learning

    ML is another class with high-quality lecture production. It’s a great overview of supervised learning, unsupervised learning, and reinforcement learning. The assignments are writing-heavy. I really liked that aspect of it, because by writing about the output and behavior of various ML algorithms, it helped me develop intuition about how these tools worked. I had previously taken Andrew Ng’s Machine Learning course, and so I felt well prepared for this class.

    Fall 2019: Artificial Intelligence for Robotics

    The lectures for this course are taught by Sebastian Thrun, the founder of Google’s self-driving car team. Dr. Thrun held office hours for the course as well, which was cool. The material covers basic robotics algorithms, with a focus on robotic vehicles: Kalman and particle filters, search algorithms like A*, PID controllers, and SLAM. The assignments for this course were fun because you get to drive a little robotic actor through scenes. After this class is when I started becoming anxious to wrap up the program.

    Spring 2020: Simulation and Bayesian Statistics

    Spring 2020 was the only term where I took two courses simultaneously. I imagined they would have some overlapping ideas, since they were both stats classes, and neither seemed especially hard. I managed to get an A in both courses, but it was definitely a lot of effort to coordinate the workloads and fit them into my schedule.

    Simulation and Modeling for Engineering and Science was an interesting course. It was all about simulation systems: hand simulations, Monte Carlo methods, the Arena simulation language, random variate generation, and input and output analysis. The lectures were entertaining, and there was a lot of material. This course was a little harder than I expected, probably because I didn’t have a strong stats background.

    Bayesian Statistics had some overlapping ideas about probability distributions and Monte Carlo methods. It was a deep dive on Bayes’ theorem and Bayesian analysis. The material covered Bayes’ formula, Bayesian networks, OpenBUGS, Bayesian inference, Bayesian computation, MCMC methodology, and more. This course helped me think in Bayesian terms, which is sometimes counterintuitive.

    Fall 2020: Data and Visual Analytics

    DVA is a broad introduction to data visualization. This was the first course I took where there was a group project. The course touched on data collection, data cleaning, SQLite, data integration, data analytics, Hadoop, Spark, D3, classification, and ensemble methods. I wasn’t crazy about this course. There didn’t seem to be any cohesion between the various ideas, and there was just a superficial coverage of the topics. I did learn how to use D3, though, which was useful.

    Spring 2021: Deep Learning

    This course was quite interesting. Most of the hype-generating news in the Machine Learning world is related to Deep Learning, so it was fascinating to learn how these powerful models actually work. The course covered neural networks and gradient descent, optimization of deep networks, convolutional neural nets, pooling layers, PyTorch, bias and fairness, language models, embeddings, transformers, attention, and generative models. This was another one of my favorite classes.

    Fall 2021: Graduate Algorithms

    GA has earned a reputation of being difficult, and it is a core requirement. As many students do, I took this as my final course. It was challenging, but I put in plenty of effort and had no issues. It covers dynamic programming, graph algorithms, and NP-completeness all in depth. The material is well organized. The grading is heavily based on three exams spaced throughout the course, but the homeworks do a good job of preparing one for them. It felt rewarding to complete this class, and I was glad to brush up on concepts I hadn’t studied since undergrad.