-
Apr 2, 2025
Using an E-Ink Monitor: Part 2
This is a follow-up to my 2024 post about using the Dasung Paperlike HD-F e-ink monitor.
It’s spring again in Philadelphia, which means I’m dusting off my Dasung 13.3” Paperlike HD-F. This portable e-ink monitor allows me to work on my laptop outside, in full sunlight.
Since last year I’ve made some changes to improve my experience with the monitor.
Clearing the monitor screen programmatically
The monitor suffers from ghosting, where an after-image of the screen contents persists faintly. This can be annoying and reduce legibility, especially as it builds up over time. There’s a physical button on the front of the monitor that resets/clears the screen. I was looking for a software solution to clear it so that I could keep my hands on the keyboard and not have to press the button, which nudges the monitor from its position resting on top of the laptop screen.
In my previous post I reported that I couldn’t get Dasung’s PaperLikeClient software to work on my MacBook. That is still the case, but I discovered a way to clear the monitor using Lunar. With the Lunar CLI (which you can install by right-clicking the Lunar app menu bar icon > Advanced features > Install CLI integration), you can clear the monitor using this command:
lunar ddc PaperlikeHD 0x08 0x0603
I put that into a Raycast script, so now clearing the screen is just a few keystrokes away.
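For reference, a Raycast script command wrapping that CLI call can be a minimal sketch like the following. Raycast script commands can use any interpreter via the shebang; the title, mode, and the fallback message here are my own choices, not from the original script:

```python
#!/usr/bin/env python3

# Raycast script-command metadata (read from these comments by Raycast)
# @raycast.schemaVersion 1
# @raycast.title Clear E-Ink Ghosting
# @raycast.mode silent

import shutil
import subprocess

# The DDC command from above: tells the Paperlike to clear/refresh its screen
CLEAR_CMD = ["lunar", "ddc", "PaperlikeHD", "0x08", "0x0603"]

if shutil.which("lunar"):
    # Lunar CLI is installed; send the clear command to the monitor
    subprocess.run(CLEAR_CMD, check=True)
else:
    print("lunar CLI not found; install it via Lunar > Advanced features")
```

Saved into Raycast's script directory, this makes clearing the screen a keystroke away without touching the monitor.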
Addressing flickering
A Reddit poster pointed out that Apple’s temporal dithering (FRC) causes some flickering on the Dasung monitor. I did notice this after they raised it, and I tried their suggested solution of using Stillcolor. Stillcolor does indeed turn off temporal dithering which resolved the flickering.
Securing external monitor to laptop screen
Last year, I had been using the Dasung monitor by basically resting it in front of the laptop’s built-in screen. This approach was less than ideal. First, the monitor would often slip and slide down over the keyboard, since there isn’t much of a lip to hold it up. Second, the monitor is rather heavy, and at certain angles it would make the laptop screen fall open to its full extent. My temporary solution was to use a bag clip to hold the monitor in place, which only partially solved the first problem.
My e-ink monitor setup circa 2024

In the fall, I roped in my mechanical engineer friend to draft and 3D print some pieces to help secure the monitor in this arrangement. We worked together on developing some hinges to (a) hold the monitor in place and (b) support the laptop screen at a specific angle.
Detail of the 3D-printed hinge/holder

After a couple of iterations, he produced these small, adjustable hinges.
- These feature a thumbnut to allow securing the laptop hinge at a specific angle, preventing the screen from falling fully open
- They have a pronounced vertical support that the base of the e-ink monitor rests upon, holding it up
- There’s also a slot to allow access to the ports
Close-up of one of the hinges in use

I’ve only had the opportunity to use the monitor with these hinges a couple of times, but so far they’re solving the problem splendidly. I’m looking forward to many days of working from the roof 🕶️
The hinges in use, supporting the e-ink monitor

-
Mar 8, 2025
AI and the Uncertain Future of Work
Software can now do something that looks a lot like thinking. So, like many knowledge workers, I’ve been guessing about the implications of AI progress for my continued employability.
Let me start by saying that AI has already enhanced my experience as a computer user. I use ChatGPT for brainstorming, research, summarization, translation and simplification, phrasing and word-finding, cooking, trip planning, book recommendations, software development, self-reflection, and generally as a replacement for Google. At work specifically I use AI to augment what I do. It helps me understand code, write small, personal utility programs from scratch, write and refactor parts of large codebases, aid in code review, summarize text, help with writing and documentation, etc.
How fast and to what degree will AI replace aspects of my job as a technologist? What does a senior software engineer at a SaaS do all day? What would an AI system need to be capable of to displace its meat counterparts?
I think that today, transformer-based deep learning foundation models like those underpinning Claude, ChatGPT, and Gemini nearly have the raw reasoning capabilities required to fulfill many of the software delivery responsibilities of a typical web developer. A simple version of that software delivery pipeline runs from product spec through design, implementation, and testing to code review and deployment.
While AI tools can be prompted to do some subset of those tasks in isolation–iterate on product specs, design components, write pieces of code, maybe react to test output, etc.–I don’t know of any single system that can reliably do all of those things end to end and with minimal input. Yet.
In terms of writing software: smaller, faster models like the one behind GitHub Copilot have, for years now, completed basic statements and pattern-matched boilerplate; newer chatbots can reason about substantial amounts of code and write complex modules end to end; emerging products like Cursor, Windsurf, and Aider write and modify large, interconnected components. It’s not hard to imagine a black box code modifier where a description of a change goes in and a PR with code, passing tests, and an explanation of the change comes out.
In real life, the job isn’t a clean assembly line. The various tasks form a densely interconnected graph; the dependencies between them are loops. There are also lots of other responsibilities not directly related to the goal of making software.
It might appear that there would need to be a large model capability improvement for AI to be able to autonomously handle all the job functions of even an average knowledge worker. I’m not convinced this is the case. Humans can use current AI to great effect by repeatedly prompting the system, incorporating outside information, and evaluating responses with empirical feedback from the world. Even if model progress halted, how much advancement could be made by incorporating feedback loops and tool use?
The AI capability gaps are shrinking month by month. Agentic AI systems–ones that operate with autonomy and goal-directed behavior–are in development. We see this with early prototypes of multi-modal, browser-use AI systems and protocols to interface with external systems. OpenAI has reportedly been planning specialized agents to the tune of $20k/month. It’s obviously a very hard problem with lots of ambiguity, but LLMs are rather good at reasoning around ambiguity. And the potential upside for the winners that emerge will be huge.
If an AI system gets scaffolding allowing it to integrate with arbitrary services, provision infrastructure, run scripts, read outputs, deploy code, ping colleagues, and maybe even use a credit card, might it be able to approximate the output of a human worker? Even if it can’t do all of those things or do them perfectly all the time, it could still decimate the workforce. Similarly to how self-driving car companies rely on remote human operators to provide guidance in exceptional situations, perhaps the knowledge worker of tomorrow will be dropped in to nudge an AI agent in the right direction.
Much of the work of software engineering involves person-to-person communication. We talk to product managers to understand customer pain points, we discuss tradeoffs with stakeholders, we interview candidates, and we share our ideas with others. In a world where human knowledge workers are being phased out by AI, though, such communication work becomes less common. So some of those job responsibilities could quickly become irrelevant.
All that said, I haven’t seen compelling products that can autonomously do entire jobs, except for maybe some support representative chat applications. I think it will be years before effective versions of such tools arrive, if they ever do. It’s conceivable that the tech will hit a plateau somewhere below the skill level required to take our jobs. Maybe it’s hopium or maybe I’m underestimating the rate of AI progress.
If the pace holds, though, then at some point entry-level desk jobs will begin to be commoditized. I imagine it’s already impacting contractors who provide basic graphic design services, copywriting and editing, data entry, candidate screening, website building, and so on. This deskilling will impact the course of career development as well, since expert professionals must necessarily first be novices. How might green new grads get the requisite experience to grow into seasoned positions if the intro roles have mostly gone to machines?
Software is eating the world, but AI is eating software. The industry has so far witnessed monotonically increasing demand for software: as abstraction layer upon abstraction layer made software easier to create, the demand for applications and for the workers who produce them seems not to have lessened. But that software over the years was not writing itself… The technological advancement of recent AI feels like a difference in kind, not just degree.
A time may come when the art of computer programming is regarded as a historical eccentricity rather than as a useful skill. Mercifully, there will be a messy middle where untangling the mounds of vibecoder-generated spaghetti will require professional intervention; during this time skills like software engineering and debugging will be direly needed. Beyond that, as the artificial agents are given ever larger chunks of responsibility, who knows what exactly our role as human technologists will be.
How should software engineers prepare for the coming changes? What I am doing is paying attention and learning about AI tools: what they are, how to use them, where they succeed and where they fail. Workers effectively incorporating such tools into their practice will outperform those who resist. I don’t suggest dismissing these new capabilities as a fad. AI tools are here to stay, they’re getting more powerful and useful, and they are going to affect how we work. I believe that in the medium term, creative professionals that embrace AI will see their output increase and their tedium decrease. As usual, the world of software is changing, and we’ll have to adapt or die.
Is a life without white-collar workers really a life worth living?
Perhaps one day, entire companies will be run by AI agents, simulacra of human behavior. Vaguely guided by the idea of an autonomous business, I built a toy version in my free time a few months ago: an AI-run t-shirt seller. The AI would read trending t-shirt product tags, use those to generate an idea for a new t-shirt design, generate an image based on that design, and then a bit of browser automation would upload the image to an on-demand shirt-printing marketplace. I didn’t get around to the part where the program would remix the top-selling designs to build a fashion empire, because the platform shut my account down after a couple of hours.
The hardest part of the implementation was the finicky browser automation and working around captchas. Having a (reverse-engineered) API that allowed the AI-based program to upload t-shirt images gave it agency. I think we’ll see an increasing number of service providers offer API options where there had previously only been UIs, and we’ll also see UI to API translation layers enabled by AI. I imagine there might be centralized “business in a box” platforms that hook into services like Stripe, Intercom, Mailchimp, Shopify, and Docusign, giving AI agents access to a bevy of specialized tools without a human having to configure those one by one. Eventually, agentic systems will have no problem dealing with the remaining UIs directly.
This is the dream of business owners: a machine where you put in a dime and out comes a dollar. I think it’s hard to look at the advancements in AI and not see the enticing prospect of a money printer. Business owners will increasingly seek to replace human labor with AI because the latter is much cheaper and never rests.
If AI can replace software engineers, are any cognitive laborers safe? Unless and until we have ASI, at the margins there will be people who can’t be replaced: those generating new knowledge, doing the most complex research, etc. But even roles that today involve human interaction may not go unscathed. Grantors and customers may prefer to hear from an AI than to be plied by a salesperson. And those external humans may themselves be replaced by bots.
How do things look when AIs themselves run or mostly run companies? The most glaring downside would be the displacement of millions of human workers. Robbed of their livelihoods, where would these folks get the funds to buy the widgets being churned out by robots? The middle class would evaporate, leaving extreme inequality, with the few monstrously rich wielding armies of AIs, and the rest competing for the remaining physical jobs. Not to mention that AI could accelerate advancements in robotics, endangering even manual labor. Might AIs get legal designation as artificial persons, allowing them to own businesses, property, and other assets? Will we see AI politicians? Would an AI-run government be a dystopian nightmare or would it provide an antidote to today’s sprawling bureaucracies?
The market is an evolutionary environment not unlike the biosphere. It’s a habitat inside the noosphere occupied by firms. Evolution for animals optimizes the fitness function, selecting for species and individuals that can most effectively turn food into offspring.
The fitness function for firms is similar. Their resources are capital instead of food, and the optimization selects for firms that can most effectively turn that capital into growth.
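As a loose symbolic gloss (my own shorthand, not a formal model), the analogy might be written:

```latex
\mathrm{fitness}_{\text{organism}} \;\propto\; \frac{\text{offspring produced}}{\text{food consumed}}
\qquad\qquad
\mathrm{fitness}_{\text{firm}} \;\propto\; \frac{\text{growth achieved}}{\text{capital deployed}}
```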
These two processes exist in the same world. We humans have hitherto had an integral and symbiotic relationship with firms, since we form them and they give us income. But if firms are increasingly run by artificial agents, the relationship changes. It becomes parasitic, adversarial. Humans and AI firms will compete for finite resources. Evolutionary pressure will necessarily select for the most extractive firms. We see this today even with people-run businesses, but it will accelerate as AI firms proliferate, the pool of consumers dwindles, and antiquated human ethics take a backseat.
Regulation could mitigate a doomsday scenario, but if deployed too early it would limit useful progress and mostly benefit the largest companies. At first, aligned AIs will act in the creators’ interest, but, over time, selection will reward the least scrupulous. An AI firm that routes some of its profits to lobby for laws in its favor will do better than one that does not. Eventually, AI may develop goals that are alien to us. It may seek to explore the universe or to turn us into paperclips.
This is an extreme and pessimistic view of where technology is headed. Given super intelligent AIs with full autonomy, maybe it could happen. I don’t really think it will, though. If artificial intelligence is going to doom us, I suspect it will be more mundane than humanity getting outcompeted by a race of smart machines. It will be instead: nefarious actors using large scale deployments of AI agents to foment division, sowing propaganda and FUD, followed by reactive policies that strip rights and engender mistrust; algorithmically-refined ads masquerading as entertainment that soak up our precious time and attention, with none left for boredom, creativity, introspection, or critical thought; convincing AI fakes that deceive us–both willfully, exacerbating the social isolation epidemic, and unwillfully, feeding the $X00B/year scam industry. A common theme of these and similar problems is technology moving faster than our collective ability to adapt to it.
For now, I’m watching and waiting. And maintaining hope that the upshot could be net positive: technology that works with us and for us, preempts our requests, understands us, and gets out of our way. AI advancements may unlock powerful new tools for thought and enhance human cognition. They could uplift us and deliver a new promise of computing.
-
Apr 26, 2024
Using an E-Ink Monitor
April 2025 Edit: It’s a year later and I have some more tips for effectively using this monitor – Part 2.
I recently bought an e-ink monitor: Dasung’s 2019 13.3” Paperlike HD-F. This post covers my knee-jerk reactions after having used it for a few days. There are other more in-depth reviews of it online.
- I typically spend about six to twelve hours a day using my laptop, a MacBook Pro, mostly for coding, writing, and reading
- I got this e-ink monitor to help with eye strain, specifically when working outside in direct sunlight
- This Reddit post inspired me to get the Dasung monitor specifically
- Monitor visibility in sunlight is great
- Fully legible, even with polarized sunglasses
- Worked outside for a long time without squinting or eye strain
- I haven’t used it with the screen facing the sun; there might be some glare because there’s a glass covering.
- There’s a newer model than the one I have
- I bought this older 2019 model used, but it was still pretty expensive ($600 USD)
- Here’s some unofficial information about the various models
- The reported resolution when I plugged it in was 1100x825@2x
- Refresh rate is configurable with software, but the default is fine for my usage. No noticeable lag when typing. Scroll lag is cromulent. Cursor lag is bad. Watching videos is doable but I wouldn’t recommend it.
- Ghosting is annoying (see image below)
- There’s a button on the monitor to clear ghosting
- Dasung makes PaperLikeClient software (see screenshot)
- Apparently this lets you refresh the screen with a keyboard shortcut or on a timed trigger
- But it didn’t work on my laptop (it didn’t recognize the connected monitor). I reached out to Dasung support but haven’t heard back yet.
- My understanding is that there’s a tradeoff between refresh speed and ghosting
- Visually, it’s really nothing like the laptop’s built-in display (for better or worse)
- It has a backlight (the “F” in “HD-F” stands for front light; not all the models have this), which is nice. AFAICT you can’t change the brightness level without the client software. There are two color temp options accessible from the hardware buttons.
- Usable for programming
- I use an E-ink theme for VSCode
- Obviously not great if you do frontend work
- Physically kinda bulky
- I’ve been using a chip clip to hold the monitor to my laptop screen and prevent it from sliding
- Sometimes its heft makes the screen tip over, hopefully without damaging the laptop hinge
- I plug the USB and HDMI cables in via a USB-C adapter
- Guessing the battery life is improved since I keep the primary display off when using the monitor
- Blocks MBP camera :(
- Tweaking the computer display settings helps with legibility
- I manage this with a Shortcut to make connecting/disconnecting less painful
- Turn color filters on (grayscale 100%)
- Turn increase contrast on
- Turn reduce motion on
- Set (built-in display) brightness to 0%
- Hard to strike a balance between text legibility and fine detail for icons, colors, etc
- Seems like I can focus for longer periods when using it?
- Maybe it’s a placebo
- Or the novelty of using it for work is fun
- Or the lower resolution forces me to concentrate on less information at once
- Or the fact that images and videos look crappier so there are fewer things to steal my attention
These past few days using the Dasung have been enjoyable. Time will tell whether this monitor becomes a daily driver for me, or an odd peripheral I dig out when I need to like invert a binary tree at the beach.
I look forward to e-ink technology developing. Less ghosting, faster refresh, higher resolution, full color support, lower price point, thinner/lighter profile, and better software integration would make this device really incredible. Today it almost feels like a prototype, but even with its current limitations, I can already envision how it might change the way I use my computer.
An integrated laptop experience would be interesting. I could see using an e-ink laptop for work, and maybe even as a personal device if the video-watching experience is compelling enough. The assumed battery life increase is an obvious win.
There’s something deeply appealing, to me at least, about the prospect of being able to use my computer comfortably, regardless of ambient light. Being able to see the screen at the park, the shore, the stoop, the deck, or even just by a sunny window makes the act of using a computer feel more human.
It might be a good thing that sunshine and being on my laptop have (so far) been mutually exclusive – why sully a nice day with The Algorithm? Maybe AI and/or VR will make this form factor obsolete one day. But it’s nice to picture a timeline where one can author a blog post from a lawn chair without squinting.
-
Jan 14, 2023
Rendering ChatGPT HTML Output Inline
TL;DR: With a little bit of glue, you can render and evaluate ChatGPT’s raw HTML and JavaScript output directly in the chat interface!!
While playing with ChatGPT, I found myself wanting to see its output.
ChatGPT can write HTML code, and the ChatGPT interface itself runs in a web browser. Since the browser is able to display HTML, is it possible to use it to display ChatGPT’s output?
Indeed, with just a bit of glue code we can see and interact with the output!
Here is some JS that, when evaluated, will render ChatGPT’s output (and yeah, ChatGPT helped me write this). Note I only tested this in Firefox.
function replaceLastCodeBlock() {
  var codeBlocks = document.getElementsByTagName("code");
  var lastCodeBlock = codeBlocks[codeBlocks.length - 1];
  if (!lastCodeBlock) {
    return;
  }
  // Parse the code block's text as HTML and swap it into the page
  var htmlContent = lastCodeBlock.innerText;
  var fragment = document.createRange().createContextualFragment(htmlContent);
  lastCodeBlock.parentNode.replaceChild(fragment, lastCodeBlock);
  // Remove ChatGPT's dark code-block styling; copy the live collection
  // first, since removing the class would mutate it mid-loop
  var elements = Array.prototype.slice.call(document.getElementsByClassName("bg-black"));
  for (var i = 0; i < elements.length; i++) {
    elements[i].classList.remove("bg-black");
  }
}
replaceLastCodeBlock();
Disclaimer: It’s probably not super secure to haphazardly evaluate code produced by a machine learning model; use at your own risk.
Let’s see some examples…
Here’s a simple showcase. The script above makes the browser render ChatGPT’s output.
CSS and JavaScript are evaluated.
ChatGPT can produce SVG code, which also can be rendered.
beautiful
Obligatory recursive ChatGPT in an iframe.
You can make the script feed input back into ChatGPT.
And data can be fetched from the internet.

Combining data retrieval and feedback, you can jury-rig more advanced prompting with context. This is pretty brittle, and notice it erroneously outputs its knowledge-cutoff year.
Animations can be rendered
A fun Game
ChatGPT’s ability to generate correct code is impressive in its own right. Being able to easily see and interact with the evaluated artifacts of that textual output makes the tool more fun. Hopefully this little demo is thought-provoking. Enjoy!
-
Sep 1, 2022
Responding to recruiter emails with GPT-3
If you’re just interested in the code, here it is.
Like many software engineers, each week I receive multiple emails from recruiters.
I’m grateful to work in a field with such opportunities, and I know that receiving a lot of offers to interview is a good problem to have. But, practically, most of the time I’m not looking for a new job, and so handling all these emails is a recurring administrative task.
Here’s an example thread that I neglected to respond to:
I do try to respond to all recruiter emails with a short message that pretty much always follows a format like:
Hi <recruiter name>,
Thanks for reaching out! I’m not interested at this time, but I’ll keep <your company> in mind.
- Matt
There are a few reasons that I respond to these emails (rather than merely ignore them):
- It’s polite
- If I don’t respond, the recruiter will often send multiple follow-up emails
- Maintaining a cordial relationship with the recruiters is in my best interest for future job searches
I use some rough email filtering rules to funnel recruiter emails to an email folder. Then, when I have time, I go through the list of unreads and send my little response.
It would be ideal if I could automate sending these responses. Assuming I get four such emails per week and that it takes two minutes to read and respond to each one, automating this would save me about seven hours of administrative work per year.
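As a quick sanity check on that estimate, using the figures from the paragraph above:

```python
# Back-of-the-envelope estimate of time saved by automating replies
emails_per_week = 4
minutes_per_email = 2   # time to read and respond to one email
weeks_per_year = 52

hours_saved = emails_per_week * minutes_per_email * weeks_per_year / 60
print(f"{hours_saved:.1f} hours/year")  # 6.9 hours/year, i.e. about seven
```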
A trivial approach would be to send a canned response. But a touch of personalization would aid in my goal of maintaining a good relationship with the recruiter.
Extracting the name of the recruiter and their company from the email with a rule-based text-parsing approach would be really tricky and error-prone. Luckily, OpenAI’s GPT-3 language model is quite good at processing this kind of email.
Using the GPT-3 API, we can provide the recruiter’s email along with an example, and extract the required information. It can even format the output as JSON.
def get_recruiter_name_and_company(email_text: str):
    """Uses OpenAI text models to automatically parse the recruiter's name
    and company from their email."""
    prompt = f"""
    Given an email from a recruiter, return the recruiter's first name and
    the recruiter's company's name formatted as valid JSON.

    Example:
    ***
    Email:
    '''
    Hi Matt! This is Steve Jobs with Apple Computer Company! I'm interested
    in having you join our team here.
    '''
    Response:
    {{"name": "Steve", "company": "Apple Computer Company"}}
    ***
    Email:
    '''
    {email_text}
    '''
    Response:
    """

    # don't make expensive OpenAI API calls unless operating in production
    if not IS_PROD:
        return json.loads('{"name": "Steve", "company": "Apple Computer Company"}')

    completion = openai.Completion.create(
        model="text-davinci-002",
        prompt=textwrap.dedent(prompt),
        max_tokens=20,
        temperature=0,
    )
    return json.loads(completion.choices[0].text)
Here’s an example from the OpenAI Playground.
With the recruiter’s name and company in hand, responding is just a matter of interpolating those variables into the body of my standard response template:
response = f"""\
Hi {recruiter_name or ""},

Thanks for reaching out! I'm not interested in new opportunities at this
time, but I'll keep {recruiter_company or "your company"} in mind for the
future.

Thanks again,
{SIGNATURE}
"""
IMAP and SMTP are used to interface with the mailbox. The rest of the code can be found in this repo.
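As a minimal sketch of that mailbox glue (the function names, folder name, and header handling here are my own illustration, not necessarily what the repo does):

```python
import email
import imaplib
from email.mime.text import MIMEText


def fetch_unread(host: str, user: str, password: str, folder: str = "Recruiting"):
    """Yield (uid, parsed message) for each unread email in the folder."""
    with imaplib.IMAP4_SSL(host) as imap:
        imap.login(user, password)
        imap.select(folder)
        _, data = imap.search(None, "UNSEEN")
        for uid in data[0].split():
            _, msg_data = imap.fetch(uid, "(RFC822)")
            yield uid, email.message_from_bytes(msg_data[0][1])


def build_reply(original, body: str, from_addr: str) -> MIMEText:
    """Construct a reply that threads under the original message."""
    reply = MIMEText(body)
    reply["To"] = original["Reply-To"] or original["From"]
    reply["From"] = from_addr
    reply["Subject"] = "Re: " + (original["Subject"] or "")
    # Threading header so mail clients group the reply with the original
    reply["In-Reply-To"] = original["Message-ID"]
    return reply
```

The reply built here would then be handed to `smtplib.SMTP_SSL` for sending.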
This solution worked well for the handful of emails I tried it on. I’m planning to run this on a cron to save myself some time and automatically maintain recruiter relationships.