Tuesday, April 28, 2026

My Own Little 3D World

Over the years I have had the privilege of trying my hand at several 3D CAD programs. In particular, I worked with BRL-CAD because I wanted to experiment with Finite Element Analysis, and when I took a course in Finite Element Analysis, the curriculum was based on Ansys. I have also gone through a LinkedIn course on SolidWorks (albeit without access to the program itself), tinkered with Blender (but not too deeply, because Blender doesn't play nicely with Finite Element Analysis), poked at OpenSCAD, and torn my hair out over FreeCAD -- though that was just as it was transitioning from version 0.21 to version 1.0.0.

And, while it's not exactly a CAD program, several years ago I experimented with using quaternions to handle rotations in Python, with the unfulfilled ambition of creating a 3D version of Nibbles called Nibbles 3D.
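For the curious, the core of that quaternion experiment is small enough to sketch. This is a generic illustration of rotating a vector with a quaternion via the standard Hamilton product -- not my original Nibbles 3D code, just a minimal stand-in:

```python
import math

def qmul(a, b):
    """Hamilton product of two quaternions, stored as (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (
        aw*bw - ax*bx - ay*by - az*bz,
        aw*bx + ax*bw + ay*bz - az*by,
        aw*by - ax*bz + ay*bw + az*bx,
        aw*bz + ax*by - ay*bx + az*bw,
    )

def qconj(q):
    """Conjugate of a quaternion (negate the vector part)."""
    w, x, y, z = q
    return (w, -x, -y, -z)

def rotate(v, axis, angle):
    """Rotate 3-vector v about a unit axis by angle (radians): q * (0,v) * q'."""
    half = angle / 2.0
    s = math.sin(half)
    q = (math.cos(half), axis[0]*s, axis[1]*s, axis[2]*s)
    _, rx, ry, rz = qmul(qmul(q, (0.0, *v)), qconj(q))
    return (rx, ry, rz)

# Rotating x-hat 90 degrees about the z-axis should give y-hat (up to rounding).
print(rotate((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), math.pi / 2))
```

The appeal over rotation matrices is that quaternions compose cheaply and interpolate smoothly, which is exactly what you want for game-style camera movement.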

I'm pretty sure there are other platforms I have encountered and played with, but I cannot think of what they are at this time.

So, while my experience with 3D CAD software has been spotty, I have nonetheless had enough time to find myself frustrated by several platforms. But despite my frustrations, I quickly discovered that, even when frustrating, CAD programs provide a valuable service: when I was first trying to learn BRL-CAD, I struggled to convert PDF blueprints into BRL-CAD shapes -- so I decided I should pull out my trusty compass, protractor, and ruler, and try my hand at drawing the part the "old-fashioned" way -- only to quickly find myself trying to pinch and twist the paper drawings to get different perspectives!

Why is it that 3D CAD programs rub me the wrong way? Simply put, I have yet to use a 3D CAD program that works the way I think. I tend to think in terms of points making lines and planes and even volumes, I want to describe curves and surfaces with equations, and I would like to explore differential geometry, viewpoints that follow curves, and other weird things. And I want to move in a way I find natural! But as far as I can see, CAD programs aren't all that configurable when it comes to movement. (Perhaps I just haven't explored enough -- FreeCAD in particular had several options for 3D space navigation ... and it's not implausible that there's a way to create your own conventions.)

I am not alone about my concerns with 3D CAD! One Franklin Veaux on Quora disputes the notion that 3D printing is popular; one of his gripes concerns the state of 3D CAD:
3D modeling programs are currently in about the state word processing was in back in the days of WordStar. There are basically two tiers of 3D modeling programs: primitive, limited programs with terrible user interfaces, and powerful, full-featured programs with UIs that are beyond terrible. I come from a time when it was considered “normal” to buy a commercial program and then copy-paste the BIOS hooks for your particular CP/M machine into the program and assemble it yourself, and modern 3D apps have the worst user experience I have ever encountered. When your program makes patching a CP/M app look user friendly, you’ve made some poor life choices.
In a Formlabs Short, Adam Savage (of "Tested" and "Mythbusters" fame) expressed his own concerns that 3D CAD makes it difficult to create aesthetically pleasing drawings and other things.

So, what the heck would I do differently? I would draw on my experience with Vim and Emacs and Tmux -- make the system highly configurable and extensible. I would draw from my experience playing Descent and make movement casual and completely controllable. I would further draw on game design, adding in real-time collision detection and a physics engine, and even attempt to run Finite Element Analysis in real time. I would like to blur the lines between the base computer language (Common Lisp), computer algebra systems, and computer graphics. When I want to do something, I would like to do it in a nice, intuitive, mathematical way -- a good portion of my frustration with CAD comes from having to jump through hoops to do what I want to do, and in too many cases, what I want to do is altogether impossible -- but I should be able to define a line anywhere, darn it! and put notes and stuff wherever I want!

I admit to approaching the subject with a tad bit of naivety -- I'm sure that *some* of this has been implemented in *some* CAD systems -- but not all of it, and not all in one place. If I design a part, I want to just "drop it in place" (in some cases, literally), and let the geometry, the physics engine, and whatever artificial constraints I wish to throw in or deactivate at the last minute all do their work as a natural part of the design process. I cannot do that with any system currently available.

And, having said all that, I'm not even sure I'm "the one" who will make 3D CAD intuitive -- I am, after all, using freaking Common Lisp and Vim and Emacs as my inspiration! I have reasons for doing that, to be sure, but those reasons deserve more explanation.

Perhaps all this is just an excuse -- a way to justify starting from scratch with "first principles", and see what can grow from there -- after all, we live in a world where pretty much everything seems to have been already discovered, debugged, and packaged nicely in little black boxes that let us be ignorant of what's inside -- and, to be fair, to some degree that ignorance is justified, even necessary, if merely because the world is a complex and overwhelming place, and it's impossible to implement All the Things anyway.

Then again, if no one ever starts out from first principles, to learn what we already know, how in the world can we push against the frontiers of knowledge, to extend them beyond what we currently know and understand?

Monday, April 27, 2026

Work as a Junior Engineer

I mentioned in my last blog post that I have started a position as a "Junior Engineer". I did not seek this position out. My wife found it in online classifieds and sent it to me, about two or three weeks after I cynically applied to about a dozen places to satisfy the terms of applying for unemployment deferment for student loans. (I don't like doing that, because I don't particularly want to work full time, and I don't even know if I can work full time at this point, but every time I apply, I ask myself "if the job is offered, would I take it and give it my best shot?" and so long as I answer "yes", I figure it's just cynicism and not dishonesty.) I applied on a lark because the position seemed interesting, and I figured that because I lacked a mechanical engineering degree, I wouldn't be offered anything anyway.

In the course of about two weeks, I went through a quick interview process that ended with me being hired!

What the heck am I doing there? As I said before, I applied on a lark not expecting to get this position, so I don't know. My employer, however, thought that between my mathematical background, my tinkering with 3D graphics and CAD, and my software engineering experience, it should be enough for me to take on some of his clients' projects so that he could focus more of his own time on preparing a drone for DARPA. So far at least, he's been right! Right now, I'm working on a project to try to push the limits of physics to figure out if we could recreate a free energy device that the client was convinced he created a couple of decades ago.

What have I learned so far? I really, really, really like 3D printing! I have also learned how to program ESP32 microcontrollers -- which has also reinforced my irritation with C and C++. I have proven to myself I can use CAD programs just fine, although probably in ways that make mechanical engineers cringe -- and although I appreciate OnShape, it reinforces my suspicion that maybe I can do better. I have learned (from listening to discussions about a coworker's project) that ABS plastic isn't food safe -- which explains why I felt so sick that one time I ate a bowl of Legos. And I have learned I can really, really, really enjoy my work ... but still be completely miserable! I have also learned how much time I've been spending on both chores and getting my children to do their chores (mostly by how much chores have just pretty much stopped since I started working full time) -- and thus, if I want to do more of my own things, I need to find ways to get my children to do more chores, and to encourage them to do things of their own accord.

How long can this last? I don't know. My employer will be moving soon, and as much as I'd like to follow him, I don't think I could justify it when I'm only making $20/hour and it would be disruptive for the rest of my family. What's worse, however, is that full-time work completely drains my energy. Unless I can figure out how to work full time (or whether I need to cut back on hours) and still sleep when I need to and work on my own things, I cannot help but worry that I am going to run myself into the ground.

Free energy, seriously? Isn't that a waste of time? It probably is, but I figure that between the tiny possibility that the client might be right about this, and all the microcontroller and 3D CAD work I've been learning, and the cynical fact that someone is willing to pay for this, even if the task ultimately proves unfruitful, I cannot help but appreciate the opportunity!

Sunday, April 26, 2026

Trying Again, But Maybe Smaller!

I still aintn't dead, but I've been at various states of "invisibly active" and "invisibly inactive" for the past several months. I have done a lot of research into dual quaternions until it petered out, and I have started new work as a "junior engineer" -- I enjoy the work, but it leaves me with absolutely no energy when I am done, making me wonder "how can I both enjoy my work and be miserable at the same time?". This only reinforces my notion that I cannot work full time, even though I kindof want to.

Having said that, I have started taking lisdexamfetamine, and I am amazed at how well I can focus on my work compared to all the other times I've worked. At one point, I tried guanfacine as well, but it made me tired and my mind as slow as molasses (which, in turn, made me completely unproductive for the two months I was trying it).

I tried guanfacine because I was blaming my inability to sleep on the racing thoughts and distractibility I'd get when the lisdexamfetamine wore off (as if my mind were trying to "catch up" on all the lost "turbo mode" thinking that had been dampened during the day) -- but it seems that no matter what I try, no matter how tired I am, I can only go to bed at around 2am Mountain Time. At this point, I think I need to figure out how to sleep in, or maybe how I could have a "sunrise" at 3am to reset my circadian rhythms to a time that fits the schedules of everyone around me.

Naturally, I've thought about burnout a lot during these last few months, and from what I have learned, I get to experience four different types of burnout, often mixed together at the same time! As an autistic, I get to be burned out when I have to socialize a lot, and am never sure if I'm saying or doing the right things. As an ADHD dopaminer, I naturally over-extend myself until I am overwhelmed. As an employee (heck, as a stay-at-home Dad) I get the imbalance of the tasks I'm expected to do vs the "pay" I "get" (I put these in quotes because draining my energy on making doctor's appointments or working overtime leaves me little energy to spend time with family or pursue my own projects and ideas, things that no amount of money can make up for!). Finally -- and this has only become obvious to me as I've slowly watched my ability to think deteriorate over the last few weeks of staying up late, waking up early, coming home to nap, and still being too tired -- I have realized that I'm burning out on being unable to sleep the way I am wired to sleep!

So, where does that leave me? My current position is temporary unless I can figure out how to move with my employer when he moves to be closer to family (and thus to flight testing grounds!); for my part, I want to follow him, but I cannot continue working the way I am currently trying to work. I need to figure out how to sleep in more often, how to be more productive in the evening, and how to carve out time for myself. Perhaps part-time semi-remote is in my future -- or perhaps I need to seek "funding" for research, likely via small donations, and forge ahead with the things I have always wanted to do.

One thing in particular that has been on my mind: When I try to write a blog post a day, I tell myself "I'll only spend an hour" and end up spending all day writing up something. I think I need to step back and figure out how to do small blog posts, and maybe take only one day per week or month or somewhere in between to work on something bigger. I cannot help but think that if I am going to ask for donations, I need to do "something" for those who donate!

Sunday, March 9, 2025

I Aintn't Dead! I'm even ok, all things considered!

I aintn't dead!

This has been a rough week -- sickness is going through our family -- and while I have had a handful of days where I had little to no energy to do anything, I can happily say I haven't had any days where I wondered if I was going to die.  I've had colds (and other diseases) that have been worse than this.

Nonetheless, this little sickness has been severe enough for a daughter to be taken to the emergency room, for a cough that wouldn't let up, resulting in a lot of vomiting.  It's under control now!  And while it didn't get to the level of "scary", it's nonetheless yet another thing eating up energy.

All this has been somewhat of a blow to my efforts to find the right rhythm for my blogging, but then again, it's also given me time to think of how I might want to approach things.  In particular, I am trying to figure out how to balance limited energy with my desires to info-dump my thoughts (some of which have been maturing over a period of years, some of which are picked fresh from last week), and my desires to make and design things.  My goal has been to throw out a blog post every day, and then work on a project for the rest of the day.

When the blog posts are complex, however, this doesn't work very well!  (And it doesn't seem to matter that some of those things are things I've thought about for years!)  I may have to limit those to one per week.  If I want to post more often than that, I think I need to figure out how to get into the habit of throwing out an occasional pithy post.

Meanwhile, I've spent the last two or four days working on translating a cardboard "computer easel" into a FreeCAD model that can hopefully be sent to a manufacturer.  While I want to design my own CAD system from the ground up, I have also been wanting to get familiar with FreeCAD, both because it may prove that my "vision" is redundant, and because I could take lessons from what I didn't like about my experience.  This project has been very helpful to that end!  (Albeit with more than one frustration along the way.)

I've also been thinking occasionally about how I ought to do another Identity Management post.  The first "molecule" I want to discuss isn't necessarily an Identity Management thing, although it draws from the Elements:  it's a data structure!  But it illustrates what can be done with signing hashes.

And I've been pondering the Algebra series I've started.  I intended to throw out a handful of rules and their explanations, with the (yet untested) notion that they'll be useful for getting comfortable with algebra ... but after I described the notion of "symbols", it occurred to me that if I'm going to say a certain symbol is a "number", it would be helpful (and daresay important) to lay a foundation for just how those darn things work, anyway!  And, of course, there's always a balancing act between figuring out how deep a concept should be explored, and how many deep ideas ought to be separate concepts.

For example, when I introduce multiplication, I might have to resist the temptation to dive into the distributive law, because it may make more sense to wait a bit later, when I have a better place to bring up its motivation -- in particular, I'm coming around to thinking that it's weird to describe why you'd want to multiply "6" with "5+3" when it's pretty obvious that "5+3" should just be "8"! -- but the motivation becomes obvious when you want to multiply "6" by "x + 3", because now you don't have a nice means to "simplify" things -- it's as simple as you can get it, until you figure out what "x" might be!

Friday, February 28, 2025

Initiating New Projects via NixOS

I have a confession to make!  I kindof get delayed whenever I start a new project.  It can take a little while for me to set things up.

Several years ago, I worked at a cryptocurrency company -- and as a company, we had two goals.  The first was to find a "consensus algorithm" that would be able to approve transactions at about the rate that Visa and MasterCard can approve transactions -- because waiting a day for a transaction to complete (which is where Bitcoin was at the time) is kindof unacceptable for something that's supposed to be used as currency!  The second?  To explore other things that cryptocurrency can do -- and among them was the possibility of putting software on the cloud, compiling it, and trading it, all managed by cryptocurrency transactions and smart contracts -- and one package management system we were encouraged to investigate for inspiration was NixOS.  And I fell in love with it!

Now, just what is this NixOS thing?  It's actually several things:  a configuration language (Nix), a package manager, and an operating system (NixOS proper), and maybe a thing or two besides.  As a language -- it is weird and complex, and a source of great headaches! -- but that's also where its power lies:  the language can specify exactly what you need, and customize things very precisely.  As a package manager, you can set Nix up on any Unix-like system and install packages that will work on that system (for Linux, that means pretty much anything, because NixOS is a Linux distribution, after all, and even for Mac OS X there's a *lot* of stuff available for installation!).  And as an operating system -- well, it's a Linux OS, so you can install it on your computer.

I initially used NixOS as a package manager for Debian and for Mac OS X, but I have "graduated" to installing NixOS on a computer itself.  It was a bit of a challenge, but I don't regret it!  NixOS has a solution for something that has annoyed me about other Linux distributions (and Mac OS X, too!):  whenever I have a "blank" operating system, I have to remember the applications I installed before -- and while I try to keep notes of what those applications are, sometimes I forget to update the list, and often I have to just "get to work" and run into a situation where I need Package X, but discover it's not installed, and take a moment to install it.

The only advantage to this approach is that, every time I have to install a new version, some of the older software I'm no longer using (often because it was a "one off" for an exploratory workshop or installed out of curiosity) "disappears" simply because I don't get around to re-installing it.

With NixOS, I can specify all the packages I want installed in a single configuration file -- or, if it gets complicated enough, I can break it up into several smaller ones -- and I can also include information on user accounts and preferences for each package!

Yet, even with NixOS on my system, I still insist on creating a little "shell.nix" whenever I start a new project.  Take my "HIVE" project, for example -- it's intended to be written in Common Lisp, but because it uses OpenGL, I need external GPU drivers and libraries installed as well -- that little "shell.nix" allows me to create a custom command line shell that installs SBCL and these libraries, and even sets up needed environment variables to make sure everything works.  I can specify the version of SBCL I'd like to use, the versions of the libraries, and anything else I might need -- and all these things are independent of the OS I'm currently running!
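The sort of "shell.nix" described here might look something like this minimal sketch. The specific package names (libGL, glfw) and the LD_LIBRARY_PATH hook are assumptions on my part -- exactly what's needed depends on which OpenGL bindings the project uses:

```nix
# A hypothetical shell.nix for a Common Lisp + OpenGL project.
{ pkgs ? import <nixpkgs> {} }:

pkgs.mkShell {
  buildInputs = with pkgs; [
    sbcl      # the Common Lisp implementation
    libGL     # OpenGL libraries
    glfw      # windowing/input library commonly used with OpenGL
  ];

  # Make the shared libraries visible to SBCL's foreign-function interface.
  shellHook = ''
    export LD_LIBRARY_PATH=${pkgs.lib.makeLibraryPath [ pkgs.libGL pkgs.glfw ]}:$LD_LIBRARY_PATH
  '';
}
```

Running "nix-shell" in the project directory then drops you into a shell where all of this is available, regardless of what the host OS has installed.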

This is much like the "virtual environments" that computer languages like Python and Ruby use, so that you don't get stuck with the out-of-date operating system version -- or, if your operating system is updated to a newer version, you don't have to get stuck with a project that no longer works because of breaking language changes.  This is particularly valuable when you have a "legacy" project that you don't yet have time to update, and you're wanting to start a new project using a later version.

And this has, interestingly enough, also solved the "temporary package" problem I had before -- I can use a "shell.nix" file to temporarily install an application or two for a particular workshop -- or I can even do something like "nix-shell -p gimp" to drop me into a command line shell where Gimp is temporarily installed -- and once I close that shell, Gimp is no longer available.  (Well, technically, it kindof is still available -- NixOS doesn't automatically delete temporarily-installed applications -- so, assuming I want to use the same version of Gimp I used before, NixOS doesn't necessarily have to reinstall it the next time I use "nix-shell -p gimp".)

So, whenever I embark on a new adventure, one of the first things I do is create (or, more likely, copy) a "shell.nix" file, and start figuring out what I need for my project.  In the case of the "computer easel", I want to use FreeCAD, which needs Python -- and since I want to keep all the data for running FreeCAD "local" to the project, I had to take some time to figure out how to set up environment variables to inform FreeCAD where my "home", "data", and "tmp" directories were -- and I had to figure out where I wanted them.
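For reference, the environment variables in question might be set something like this. The variable names are the ones FreeCAD documents (worth double-checking against your version), but the directory layout below is just a hypothetical example:

```shell
# Keep FreeCAD's configuration, data, and temp files local to the project.
# FREECAD_USER_HOME / FREECAD_USER_DATA / FREECAD_USER_TEMP are the
# documented FreeCAD overrides; the .freecad layout is my own choice.
export FREECAD_USER_HOME="$PWD/.freecad"
export FREECAD_USER_DATA="$PWD/.freecad/data"
export FREECAD_USER_TEMP="$PWD/.freecad/tmp"
mkdir -p "$FREECAD_USER_DATA" "$FREECAD_USER_TEMP"
```

In a shell.nix, these lines would go in the shellHook so they're set automatically whenever you enter the project shell.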

But I have that working now -- and, as a bonus feature, if I wanted to share my FreeCAD configuration with others, I think I just have to share this "shell.nix" file and the above directories.  I'm not 100% sure about that, though, because I'm not 100% certain if I figured out where FreeCAD keeps all of its configuration.

Overall, though, this allows me to have complete-ish control over the setup.  While I have had some surprises over the years, even with NixOS, I have nonetheless appreciated having a single spot where I can maintain a project's dependencies, without having to figure out what's on the particular system I'm currently using!

Wednesday, February 26, 2025

Motivation!

So, for the past few days, I've been trying to overcome a combination of repeated interruptions by various errands and chores, and a strong sense of "Pervasive Drive for Autonomy", where apparently I cannot do something because someone is putting a demand on me, and mentally, I'm not prepared to fulfill people's demands -- even if the person making the demands is me, and the demand is "work on one of those things you know you want to work on, darn it!"

This post isn't about my motivation to do things, though.  This morning, as I was settling in to finally start a project, deciding to watch a couple of videos before actually starting it, thinking I'm not even sure if I'm going to write a blogpost (or even just post something) -- because I'm afraid that a blogpost takes away precious energy needed to work on projects -- and besides, I'm not addicted to videos, I can stop any time! -- I came across "The Problem With Math Textbooks" on YouTube Shorts, which resonated greatly with me.

The TL;DW (too long, didn't watch -- wait, isn't this a short video? -- well, maybe it won't be there by the time you internet archaeologists get to this post) summary:

Pure Math textbooks delve right into the axioms, which is a problem, because students are left thinking that we could just pluck axioms from thin air, giving us infinite possibilities.  Where do these axioms come from?  We need to describe the motivation that led to these axioms!

This is, indeed, an approach I've been wanting to take with mathematics for years.  When I took a "Physics for Scientists and Engineers" class as an undergrad, my roommate explained that he was taking the Physics class that didn't use calculus -- and thus, the math was significantly harder! -- and this led me to the conclusion that both physics and calculus would benefit if they're taught as physics giving birth to calculus -- or, perhaps, rather, as both being given life as twins!

But I was initially at a loss as to how to find motivation for everything else -- when I realized I had answered this for myself years ago too!  The motivation comes from the history of mathematics.

  • Euclid's Geometry was motivated by an attempt to standardize the measurement of the Earth (hence the geo of geometry!) -- and its alternatives were motivated by attempts to prove that Euclid's parallel postulate followed from his other axioms, only for it to be discovered later that the alternative geometries have physical analogs of their own.
  • Calculus was motivated by physics, and each refinement to the idea by mathematicians like Euler, Riemann, Gauss, and Lebesgue was made to address philosophical concerns, and to refine the techniques.
  • Modern Abstract Algebra was motivated by the classical Greek problems of trisecting the angle, doubling the cube, and squaring the circle, using only a straight-edge and compass.
As for everything else?  Well ... I'm not sure if I can tell you ... because I'm not as confident in the history as I'd like to be.  The problem the field of Mathematics has is that, as the math becomes more refined and purified, the older techniques are jettisoned -- and little to no effort is put into understanding the history!  A good example of this is in Calculus itself -- everyone who goes through the mathematics classes knows about epsilon and delta proofs (my blog's nom de plume "Epsilon Given" is taken from this!) -- but far fewer know about Newton's fluxions or Euler's infinite and infinitesimal numbers, among other approaches to the subject.

Indeed, my own understanding of the history of mathematics is a mixture of "The History and Philosophy of Math" class I took in my first year of college, and being self-taught.

To this end, I have spent some time trying to collect older mathematical works by early mathematicians, with the hope of exploring the more "intuitive yet unrefined" approaches to mathematics.  I have Euclid's Elements, a work or two by Archimedes and another mathematician I can't remember, Euler's "Introduction to Algebra", a book of Leibniz's works on calculus, and Sir Isaac Newton's work on calculus, A Treatise of the Method of Fluxions and Infinite Series, With its Application to the Geometry of Curve Lines -- wait, shouldn't that have been Principia?  Well, that was Newton's physics book, published in Latin, using complex, difficult, and sometimes incorrect "simple" math, because Newton jealously guarded his calculus during his lifetime -- and thus, A Treatise of the Method of Fluxions was published posthumously in English (albeit translated from Latin).

Sometimes the motivation is simply "I don't know.  It seemed like an interesting problem at the time!"  And that, too, is good, because it's a reminder that sometimes we just have to play, and see where our games take us!

Monday, February 24, 2025

Thoughts on Practical Interface Design

Apple has some interesting notions about software interface design that I find amusing -- and also deeply irritating.  I appreciate their reasoning, but the axioms they use as a basis for their reasoning are deeply flawed!

One is "the five closest points to the mouse cursor are where the cursor is, and the four corners of the monitor".  This makes intuitive sense -- because you can "fling" a cursor to a corner, and it will come to a hard stop -- and this is the foundation for putting the menu of the active app at the top of the screen, rather than at the top of the window.

The problem with this, however, is that it only makes sense when the screen is the size of a postcard (which was literally true of the first Macintoshes -- ok, maybe my memory is skewed here, but they were nonetheless rather small) or maybe even the size of a VGA monitor.  But, as I discovered when one of my employers provided me a nice, giant, curved monitor, and a Mac laptop that I could plug into that monitor ... this entire dynamic changes!  When you're in the lower-right corner, this principle puts the menu up in the upper-left corner -- and when you can literally choose between "distance moved on screen" and "distance as the crow flies" to describe how far the cursor needs to travel, and when it might even be reasonable to measure that distance in "yards" or at least "feet" instead of "inches" -- all of a sudden, this one principle requires me to pick up my mouse several times just to reach the menu.

What's worse, I have come to appreciate a feature under Linux where I can hover my mouse over a partially covered window, and it becomes "active" without pulling it to the top -- such a feature is useful for keeping a browser with helpful information over a command-line window where you're trying to use that information -- and while this can be sortof emulated under Mac OS X, it can only be partially emulated: if you had this feature and needed to cross several windows to access the menu, the menu would have changed several times by the time you got to it!

Fortunately for me, when I was given a curved screen, my employer also provided me a desktop computer, and promptly had me install Linux, so I was able to install KDE, which gave me the Windows-style convention of "menu on each window".  It's tempting to say that this is Linux's style, or at least KDE's, but it's more accurate to say that the driving force behind Linux user interface design is flexibility.  Indeed, if I preferred MacOS's design decisions, I can easily find them in Gnome, an alternative to KDE.

Another principle is "we have studies that show it's faster to use the mouse than it is to use the keyboard to edit text, but everyone thinks it's faster to use the keyboard".  The "studies" they rely on involve asking random people to do mundane tasks like "go through this paragraph and replace every 'e' with an underscore '_'" -- and, surprise, it's easier and faster to do this with "point and click" than it is to arrow down to each letter, and then make the replacement.

Of course, as an avid user of Vim, I cannot help but ask "Why not just visually select the paragraph, type ':s/e/_/gc', manually approve the first couple of search-and-replaces, and then hit 'a' to change all the remaining matches once you're satisfied with the result?"  The Apple response to this, though, is "We're designing an interface for the 'average' person, not the power user!" but the proper response to that is "Yes, it's nice to have simple-yet-painful interfaces for the 'average' person who is going to do anything only once or twice -- but we need to cater to the power users, too, because eventually, in at least some tasks, the 'average' person is going to want to cross over the line into 'power user'!"

So, yeah, I'm not entirely a big fan of Apple's user interface design principles.  They sound good in the abstract, but they have led the designers astray to produce some awful designs!

Since I'm in the process of trying to figure out how to create my own Dual-Quaternion based 3D CAD-like system, I've given some thought about how things ought to be designed ... and I think the guiding principle I'm most attracted to is flexibility:  Don't try to predict what any particular user is going to need, instead, provide the tools necessary for the user to create and customize their own interface!

The fundamental principles behind all these are perhaps the most important:  first, enable flexibility; second, put as many things under your fingertips as possible; third, it's nice to be able to select anything and everything and copy them; and fourth, don't clobber user data -- and everything is user data.  I also have a couple of principles that I don't yet know how I'll implement -- the fifth is take advantage of the strengths of every input method (for years, the default treatment of touch screens under Linux has been to treat the touch screen as another mouse, which is annoying when you touch the upper half of the touch screen, and it sends the input to the second monitor that isn't a touch screen), as well as a sixth, the user should always have complete control over the program (which will probably require me to figure out how to install and use Real Time Linux).

So, with these principles in mind, I have had the following generic thoughts about the user interfaces I'd like to try to implement:

  • Anything that a user can do, to the practical extent possible, should be captured in undo trees of some sort -- perhaps even made version-controllable,
  • All functions of an application ought to be available to the user, bindable to any key, touch, mouse movement, or gesture (as inspired by Emacs),
  • The command line is a special interface:  it allows us to describe what we are trying to do with text, and this enables scripting, as well,
  • For every workflow, it should be possible to work out a "language" that can translate to keys on a keyboard -- much like how Vim approaches text editing -- although that "language" might differ from team to team, or even individual to individual, or project to project,
  • There is only one place closest to the cursor, and that's where the cursor is at any given moment.  It should be simple to pop up a circular menu at that particular point, probably by right-click, which can then open up into other circular menus -- and every such menu should have a "computer" icon on top (at 12 o'clock), an "application" menu beside it (at 11 o'clock), an "environment" level beyond that (at 10 o'clock), and an exit button at the very center,
  • Perhaps every "icon" should have a Unicode code point, to the extent possible, and every help text, warning, and possibly graphic should be selectable by mouse and copyable into the appropriate medium,
  • It should be possible to create menu-ish panels that consist of easy-to-access functions, information to be watched or examined, and icons to access various things; every such panel should be "locked" into place, with a little "lock" icon that must be explicitly unlocked before menu items can be added or removed, or the panel itself moved or resized (I have been both impressed by Ansys's ability to manipulate "default" menu and information panels, and annoyed by how easy it is to change them accidentally, without knowing how to undo the changes -- or even having the option to undo them! which, to be fair, I think Ansys provides; I just don't remember how to use it),
  • It should be possible to view all panels available at any given moment, even if some of them are hidden, via some sort of "explosion" mechanism that keeps everything in place unless someone moves it (much like what macOS and KDE have for windows, although both seem to randomize what they show, and neither provides a way to organize the windows on the screen in this mode, let alone a mechanism to preserve those changes between "explosions"),
  • Everything associated with a project should be kept in a structured file format that can be explored by command-line tools -- in particular, as text files where possible.  While automatic strategies can be provided for inserting new things into these files, any changes made by a user need to be respected -- and any changes a user might make that would break the system (eg, syntax errors, system display changes, etc) need to be handled gracefully, so that the user will feel free to experiment without fear of everything crashing down (to the extent possible -- we are dealing with complex systems, after all, and we cannot fully understand what we are doing!).  In other words, as I currently envision it, I intend my projects to be "text editors" that can track and edit "non-text" information.
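
To make the first bullet concrete, here is a minimal sketch of what I mean by an undo *tree*, as opposed to a linear undo stack: undoing and then making a new change starts a new branch, rather than discarding the old "future".  All the names here are placeholders of my own invention.

```python
# Illustrative undo tree: states are stored as nodes; undo walks toward the
# root, and editing after an undo creates a sibling branch (nothing is lost).
class UndoNode:
    def __init__(self, state, parent=None):
        self.state = state
        self.parent = parent
        self.children = []

class UndoTree:
    def __init__(self, initial_state):
        self.root = UndoNode(initial_state)
        self.current = self.root

    def record(self, new_state):
        """Every edit becomes a child of the current node."""
        node = UndoNode(new_state, parent=self.current)
        self.current.children.append(node)
        self.current = node

    def undo(self):
        if self.current.parent is not None:
            self.current = self.current.parent
        return self.current.state

    def redo(self, branch=-1):
        """Redo along a chosen branch (default: the most recent one)."""
        if self.current.children:
            self.current = self.current.children[branch]
        return self.current.state

tree = UndoTree("empty sketch")
tree.record("add point A")
tree.record("add line AB")
tree.undo()                  # back to "add point A"
tree.record("add circle A")  # new branch; "add line AB" is still reachable
```

Making such a tree version-controllable is then a matter of serializing the nodes -- which dovetails with the last bullet about keeping everything in explorable text files.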
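
And the geometry of the circular-menu bullet is just clock-face trigonometry -- a purely illustrative sketch ('clock_position' is a name I made up):

```python
# Map a clock-face hour to an (x, y) icon position on a circular menu,
# with 12 o'clock straight up and hours proceeding clockwise.
import math

def clock_position(hour, radius=1.0):
    """Return the (x, y) offset of an icon at the given clock-face hour."""
    theta = math.pi / 2 - (hour % 12) * (2 * math.pi / 12)
    return (radius * math.cos(theta), radius * math.sin(theta))

print(clock_position(12))  # ≈ (0.0, 1.0): the "computer" icon, on top
print(clock_position(11))  # ≈ (-0.5, 0.87): the "application" menu
print(clock_position(10))  # ≈ (-0.87, 0.5): the "environment" level
```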
I think some of these are contradictory, some of them may prove to be impractical, and while I have given these notions a lot of thought over the years (well, some of them -- my menu and panel ideas are relatively new), I do not know how they will work out in practice.

But then, if I knew what I was doing, it wouldn't be research, now, would it?