One of the “ice breaker” questions I ask in beginning computer science courses is “What was the first computer that really got you interested in programming?” That leads us into talking about the history of computing and how it has evolved over the past century.
The two machines that moved me from the “this is fun” to the “this can be a profession” mindset were computers we had in the lab I was part of in my first attempt at graduate school: the Tektronix 4406 and Texas Instruments Explorer LISP Workstation.
Most people know Tektronix for their test equipment, with their oscilloscopes being the thing most people remember. They still make decent gear. But in the mid-to-late 1980s, Tek was a bit of a corporate dilettante with their hands in a number of things. One of those was a foray into the workstation market with machines based on Motorola 68000-series processors.
Tek was an early player in the Smalltalk ecosystem, and by the late 1980s was producing a Smalltalk-focused 68000 workstation that ran a rather hacked-up version of Bell Labs Version 7 UNIX. My M.S. thesis advisor was an A.I. sort and had managed to get grant money for one of these machines. He had moved on to other things by this point and just told his graduate students to have fun with it. I had been following Smalltalk since reading a popular-press article on the Dynabook project and was seriously geeked out about being able to play with it. This was my first big dive into the Smalltalk language, the Smalltalk programming environment, and the UNIX operating system.
The thing that had captured my advisor’s attention was a TI Explorer LX Lisp Machine. LISP Machines were interesting beasts, as their processors and architecture were designed and optimized to run LISP. Most of the CAD tools for chip design from this period were built using LISP, so TI licensed a design from LISP Machines, Inc. to create the TI Explorer product line. One of the innovations in this design was its use of NuBus, one of the early expansion bus architectures. So the LX included a co-processor card that was basically an independent 68000 UNIX workstation running AT&T UNIX System III.
My boss told me on my first day in the lab: “Make it work.” So what do I do on my first day? Inadvertently run an “rm -rf *” in the root directory of the UNIX co-processor after spending about four hours loading the OS image from tape. Oops. But what can I say? I was definitely a noob at the time.
Fun machine, tho’, as I got to experience the LISP Machine environment and run EMACS as God and Stallman originally intended. A large part of the really weird stuff we see today in GNU Emacs was a straight UX port from the LISP Machine. Things like the “CTRL-Windows-Alt” modifier keys (CTRL-SUPER-META on the LM). And the operating system was written entirely in LISP, and you had the complete source code. A lot of my sysadmin experience came from having to figure out how to make that co-processor work.
One of the things that got me hired at NCR was that they were looking for people with Smalltalk experience. A lot of the people who worked on the Smalltalk team at Tek migrated to NCR when NCR tried to build a UX research center in Atlanta in the late ’80s and early ’90s. Getting exposed early to these programming environments, and to people who knew how to use them, made me a better programmer.
Selah.
git init
git add README.md .gitignore
git commit -m "first commit"
git branch -M main
git remote add origin https://github.com/adamwadelewis/gallery.git
git push -u origin main
It’s a viable alternative. Just as with PHP and other things, using a package manager like Homebrew to install the web server in user space means you can keep up much more rapidly with upstream changes in Apache. The downside is that you have to undo some things in the system, and remember to confirm and redo them when you update macOS.
Let’s go about it… first, we need to turn off the version of Apache included in the operating system:
sudo apachectl stop
sudo launchctl unload -w /System/Library/LaunchDaemons/org.apache.httpd.plist 2>/dev/null
This turns off the server and unloads the system install of Apache from the list of services loaded at system startup.
Now we get things up and running with Homebrew:
brew install httpd
brew services start httpd
One more configuration adjustment: to keep things somewhat clean, Homebrew’s Apache installer defaults to running on port 8080 rather than port 80. This avoids conflicts between the system install of Apache and the Homebrew version. But we want the Homebrew version to serve pages on the default port.
To fix this, edit /opt/homebrew/etc/httpd/httpd.conf to switch the listen port to port 80. Use your favorite editor to find the Listen line in the file and make certain it looks like this:
Listen 80
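If you’d rather script the change than open an editor, sed handles it. Here’s a sketch against a scratch copy of the file (in real life you’d point it at /opt/homebrew/etc/httpd/httpd.conf):

```shell
# Flip the Listen port in a scratch copy of httpd.conf (demo only;
# the live file lives at /opt/homebrew/etc/httpd/httpd.conf)
CONF=$(mktemp)
printf 'ServerName localhost\nListen 8080\n' > "$CONF"
sed 's/^Listen 8080$/Listen 80/' "$CONF" > "$CONF.new" && mv "$CONF.new" "$CONF"
grep '^Listen' "$CONF"   # prints: Listen 80
rm "$CONF"
```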
Now restart the server using brew services restart httpd.
There is a common thread among the various blogs covering this that recommends resetting the server root to the Sites folder in your home folder. This is not something I recommend, as I believe you need a separation between production and development code. Feel free to reconfigure Apache in this manner if you wish, but I’ll leave figuring out how to do this as an exercise for the reader.
Do go back and apply the changes I introduced in Part 1 of this series to get your home folder configured. All you need to do is adjust the file names to use the configuration files in your Homebrew install.
Time to express an opinion: I am not a PHP fan. It’s a kludge of a language built on top of a kludge of a web application architecture. But enough back-end stuff remains built on that architecture that one has to understand it and sometimes support it.
For macOS, the simplest way to manage installing and configuring PHP is to use a package manager like Homebrew. It’s a one-line install command:
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
Worth noting that there is a lot of gooey, crunchy open-source goodness in the Homebrew repository.
At this point, you can install PHP:
brew install php
This will get you PHP 8. If you need earlier versions, then you will want to use one of the versioned formulae in Brew.
We want to configure our Apache install to use PHP. Time to hack!
Things begin by adjusting our Apache configuration to load PHP. Edit the Apache httpd.conf configuration file:
sudo vi /etc/apache2/httpd.conf
Add the following:
LoadModule php_module /opt/homebrew/opt/php/lib/httpd/modules/libphp.so
<FilesMatch \.php$>
SetHandler application/x-httpd-php
</FilesMatch>
Confirm that the DirectoryIndex entry includes index.php:
DirectoryIndex index.php index.html
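Once those edits are saved, a quick grep can confirm they landed. This is a sketch against a scratch copy of the config rather than the live /etc/apache2/httpd.conf:

```shell
# Sanity-check that the PHP directives made it into the config file
# (demo against a scratch copy; substitute the real httpd.conf path)
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
LoadModule php_module /opt/homebrew/opt/php/lib/httpd/modules/libphp.so
<FilesMatch \.php$>
    SetHandler application/x-httpd-php
</FilesMatch>
DirectoryIndex index.php index.html
EOF
grep -q '^LoadModule php_module' "$CONF" && echo "php module: ok"
grep -q '^DirectoryIndex index\.php' "$CONF" && echo "directory index: ok"
rm "$CONF"
```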
Here is where Apple’s tightening up of security in macOS 12 bites us in the posterior: Homebrew’s stuff isn’t code-signed. That means Apache will puke on us when it tries to load the PHP module.
We have to manually sign the package. This requires some finagling with keychains using the Keychain Access utility and the Xcode command-line tools.
Here things get “interesting”. We need to adjust macOS to allow us to self-sign certificates. In other words, we get to be our own “Certificate Authority”.
Launch the macOS Keychain Access utility.
Now go to Keychain Access > Certificate Assistant > Create a Certificate Authority
. You should see something that looks like this:
Do the following:
Sign the PHP module using the Xcode command-line code signing tool (replacing “AWL” as required):
codesign --sign "AWL" --force --keychain ~/Library/Keychains/login.keychain-db /opt/homebrew/opt/php/lib/httpd/modules/libphp.so
Now edit the Apache httpd.conf file again and adjust the entry for PHP as below (again, replacing "AWL" with the name you used in the certificate):
LoadModule php_module /opt/homebrew/opt/php/lib/httpd/modules/libphp.so "AWL"
Now restart Apache and you should be ready to rock and roll:
sudo apachectl -k restart
Selah.
There are some things you will need to add. Recent versions of macOS no longer include PHP as part of the operating system. That’s actually a good thing, as the included version tended to lag well behind the current tip of development. Here’s where you find yourself using a package manager like Homebrew.
The OS includes the Apache web server. It’s buried pretty deep in the system bits, so you are going to have to apply some command-line skills.
This information is spread all over the Internet, but the bulk of the material here is taken from the Apple tech support discussion forum at https://discussions.apple.com/docs/DOC-250004361
Start by editing the web server configuration file located at /etc/apache2/httpd.conf. Note that this requires admin permissions, so you will need to use sudo:
sudo vi /etc/apache2/httpd.conf
Look for the line in the file that enables the “Sites” folder for individual users (this is macOS’s equivalent of public_html on Linux). In recent versions of macOS, this is line 184 in the file. Uncomment that line by removing the leading “#” comment indicator. It needs to read as follows:
Include /private/etc/apache2/extra/httpd-userdir.conf
Save and exit the editor.
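If you’d rather not open vi for a one-character change, sed can do the uncomment non-interactively. A sketch against a scratch file (the real target is /etc/apache2/httpd.conf, where you’d need sudo):

```shell
# Strip the leading "#" from the userdir Include line (demo on a scratch
# file; the live file is /etc/apache2/httpd.conf)
CONF=$(mktemp)
echo '#Include /private/etc/apache2/extra/httpd-userdir.conf' > "$CONF"
sed 's|^#\(Include /private/etc/apache2/extra/httpd-userdir\.conf\)$|\1|' "$CONF" > "$CONF.new"
grep '^Include' "$CONF.new"
rm "$CONF" "$CONF.new"
```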
This change tells the web server to look for an include configuration file that will define user folders in the web server. Edit that file:
sudo vi /etc/apache2/extra/httpd-userdir.conf
Uncomment line 16 of that file so that it reads:
Include /private/etc/apache2/users/*.conf
Now you need to tell the web server about your user folders. First step: look in the Users & Groups preferences pane to get your short user name. Right-click on your user name in the pref pane and select “Advanced Options”. The short user name can be found in the “Account Name” field. For discussion purposes, we’ll use my short name of “alewis”. Replace this with your own when you do this.
Now let’s use your text editor to create a configuration file for user folders:
sudo vi /etc/apache2/users/alewis.conf
Add the following content:
<Directory "/Users/alewis/Sites/">
AddLanguage en .en
Options Indexes MultiViews FollowSymLinks ExecCGI
AllowOverride None
Require host localhost
</Directory>
You will need to add additional configuration items to this file if, for example, you want to enable PHP.
Then create the Sites folder in your home folder:
mkdir ~/Sites
echo "<html><body><h1> My site works</h1></body></html>" > ~/Sites/index.html.en
This creates the Sites folder and adds a minimal working example in the folder that we can test against shortly.
Now we have to do some shell voodoo. Apple has tightened up security in macOS 12 to, by default, not allow other users access to a user’s folders. The macOS installation of Apache is configured to run under a special hidden user account named “_www”. You need to set up an Access Control List (ACL) that lets the web server have access to your folder:
chmod +a "_www allow execute" ~/Sites
And now we see if things work. With Apache, one should always use the web server’s configtest command to make certain things are configured cleanly:
apachectl configtest
So… what’s apachectl? It’s a command-line tool used to control the web server (it’s an Apache thing, so it works on any of the UNIX-based operating systems). If the config test returns ‘Syntax OK’, then you are ready to rock the web.
Now for macOS command-line magic… you need to tell macOS to start Apache at system startup:
sudo launchctl load -w /System/Library/LaunchDaemons/org.apache.httpd.plist
If you want to get things running in the meantime, do a:
sudo apachectl graceful
At this point, navigate to http://localhost and http://localhost/~<your short user name> and see if you get the expected responses from the web server.
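You can also do that check from the shell with curl. Here’s a sketch of the same smoke test, with a throwaway Python web server standing in for Apache so it can be tried anywhere; port 8099 is an arbitrary pick, not anything Apache-specific:

```shell
# Smoke-test the "Sites" page over HTTP without a browser. A scratch
# docroot and python3's built-in server stand in for ~/Sites and Apache.
DOCROOT=$(mktemp -d)
echo '<html><body><h1> My site works</h1></body></html>' > "$DOCROOT/index.html"
( cd "$DOCROOT" && exec python3 -m http.server 8099 >/dev/null 2>&1 ) &
SRV=$!
sleep 1
curl -s http://localhost:8099/ | grep -o '<h1>.*</h1>'   # prints: <h1> My site works</h1>
kill "$SRV"
rm -r "$DOCROOT"
```

Against the real setup you would simply run curl -s http://localhost/~alewis/ instead.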
Reference: https://discussions.apple.com/docs/DOC-250004361
Next up: Getting PHP to work using Homebrew.
INGREDIENTS
1 pound Yukon Gold potatoes, not peeled, sliced into ¼-inch rounds
2 tablespoons plus ¼ cup extra-virgin olive oil, divided
Kosher salt and ground black pepper
1 pound ground lamb or 80 percent lean ground beef
1 medium yellow onion, halved and grated on the large holes of a box grater
1/2 cup finely chopped fresh flat-leaf parsley
1/2 teaspoon ground allspice
1/2 teaspoon ground cinnamon
14½-ounce can crushed tomatoes
2 medium garlic cloves, minced
1 pound plum tomatoes, cored and sliced into ¼-inch rounds
1 small green bell pepper or Anaheim chili, stemmed, seeded and sliced into thin rings
DIRECTIONS
Heat the oven to 450°F with a rack in the middle position. On a rimmed baking sheet, toss the potatoes with 1 tablespoon of oil and ¼ teaspoon salt. Distribute in a single layer and roast without stirring just until a skewer inserted into the potatoes meets no resistance, 10 to 13 minutes. Remove from the oven and set aside to cool slightly. Leave the oven on.
While the potatoes cook, line a second baking sheet with kitchen parchment. In a medium bowl, combine the lamb, onion, parsley, allspice, cinnamon, ¾ teaspoon salt and ¼ teaspoon pepper. Using your hands, mix gently until just combined; do not overmix. Divide the mixture into about 20 golf ball-size portions (1½ to 1¾ inches in diameter) and place on the prepared baking sheet. Flatten each ball into a patty about 2½ inches wide and ¼ inch thick (it’s fine if the patties are not perfectly round); set aside until ready to assemble.
In a 9-by-13-inch baking dish, combine the crushed tomatoes, garlic, the ¼ cup oil, ½ teaspoon salt and ¼ teaspoon pepper. Stir well, then distribute in an even layer. Shingle the potatoes, tomato slices, green pepper rings and meat patties in 3 or 4 rows down the length of the baking dish, alternating the ingredients. Drizzle with the remaining 1 tablespoon oil and sprinkle with pepper.
Bake, uncovered, until the kafta and potatoes are browned and the juices are bubbling, 25 to 35 minutes. Cool for about 10 minutes before serving.
All right, I have to keep looking up the instructions for “burning” an ISO to removable storage so often that I figured I’d just post them here and see if Google (or preferably, DuckDuckGo) will find them for me the next time I have to do this.
The steps:
1. Use hdiutil to convert the ISO into a read/write disk image:
hdiutil convert -format UDRW -o ~/path/to/dest.img ~/path/to/target.iso
2. Use diskutil to find out where the removable device has been mounted into the file system:
diskutil list
3. Unmount the disk with diskutil:
diskutil unmountDisk /dev/diskN
4. Use dd to copy the disk image to the raw device. Note you’ll need admin access for this, and do remember that dd is destructive. THINK before hitting that enter key:
sudo dd if=~/path/to/dest.img of=/dev/rdiskN bs=1m
5. Eject the device:
diskutil eject /dev/diskN
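Since dd will cheerfully overwrite the wrong disk, it’s worth rehearsing its semantics on plain files first. A sketch (bs=512 here only so the demo runs under both BSD and GNU dd; the real command above uses the BSD/macOS spelling bs=1m):

```shell
# Rehearse the dd copy on scratch files instead of a real device.
# No root needed and nothing can be destroyed.
SRC=$(mktemp); DST=$(mktemp)
printf 'pretend-iso-contents' > "$SRC"
dd if="$SRC" of="$DST" bs=512 2>/dev/null
cmp -s "$SRC" "$DST" && echo "byte-for-byte copy: ok"
rm "$SRC" "$DST"
```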
Hopefully useful… Selah.
For instance, there is a command-line tool that you can use to run software updates: softwareupdate.
Consider:
softwareupdate -l
This gets you a list of available updates, just like what happens in System Preferences when you launch the Software Update preference pane. In both cases, the utilities are talking to the softwareupdate daemon in the operating system.
Next up, getting stuff installed:
softwareupdate -i NAME
softwareupdate --install NAME
You replace NAME with one of the items from the list you asked for in the first step. Be careful there, as macOS is very sensitive about the exact names and formatting of update labels. You should quote the names and watch out for cases where a name has trailing spaces.
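Here’s a tiny sketch of the quoting point, using a made-up label (nothing is actually installed; the printf just shows the command that would run):

```shell
# Labels from `softwareupdate -l` often contain spaces, so always quote
# them when passing them back in. "macOS Ventura 13.6.1" is a hypothetical
# example label, not necessarily one your machine will list.
LABEL="macOS Ventura 13.6.1"
printf 'would run: softwareupdate --install "%s"\n' "$LABEL"
```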
The “-d” option will just download an update, while including the “-a” option will install all available updates. One of the more useful options for this command-line tool is “--install-rosetta”. This option tells macOS on Apple silicon Macs to install the Rosetta 2 translation layer for Intel macOS applications. Include the “--agree-to-license” option to agree to the software license agreement without user interaction.
Selah.
But with a little effort we find the macOS diskutil utility. This is also where we begin to see some of the FreeBSD heritage in macOS, as it follows the FreeBSD “noun verb” UX for commands: you enter diskutil followed by one of a number of verbs that do the work.
Let’s start with the list verb. Issuing the command:
diskutil list
Gets you a listing of currently mounted disks, partitions, and mount points. This is fun, as you get a lot more detail about the internals of how APFS is blatting stuff across your disk. For instance, on my machine I have /dev/disk0 as the physical disk with an APFS container disk in a partition. That logical disk is mounted as /dev/disk3 with multiple volumes in the container. This is far more detailed information than what you get from the user interface.
The info verb gets you the details for a specific disk. Again, lots of detail, but good for troubleshooting. The unmount and unmountDisk verbs are for un-mounting partitions and disks out of the file system, while mount goes the other way. It’s important to understand that the eject verb is for removable devices and is the same action as ejecting a drive in Finder.
All of the formatting and partitioning things you do in the GUI have corresponding verbs in the command-line tool. Do be careful, as with great power comes great responsibility. Think twice, and then again, before hitting that enter key.
The jobby-job thing that I do to pay the bills requires me to do a bunch of system administration things. So I often need to “burn” a Linux installer to a USB key. Here we can use a combination of the diskutil and hdiutil command-line tools to automate that process.
First, use the list verb to find the mount point for the USB device that’s your target device:
diskutil list
Now we can write a Bash script that will do the heavy lifting for us:
#!/bin/bash
# Usage: burn-iso.sh <path-to-iso> <disk name from diskutil list, e.g. disk4>
ISONAME=$1
DESTDISK=$2
TMPIMG="$HOME/tmp/copytarget.img"
hdiutil convert -format UDRW -o "$TMPIMG" "$ISONAME"
# hdiutil may tack a ".dmg" onto the output name; undo that if it did
[ -f "$TMPIMG.dmg" ] && mv "$TMPIMG.dmg" "$TMPIMG"
diskutil unmountDisk /dev/"$DESTDISK"
sudo dd if="$TMPIMG" of=/dev/r"$DESTDISK" bs=1m
diskutil eject /dev/"$DESTDISK"
rm "$TMPIMG"
So, our little script takes two parameters: the filename of the Linux ISO and the base name of the disk mount point. The script first uses hdiutil to convert the ISO into a macOS disk image. (I keep a temp folder in my home folder for this sort of thing.) We then unmount the external device and use the classic UNIX dd command to do a byte-by-byte copy to the raw version of the mount point. After doing this, you need to eject the device.
And that’s a quick summary of diskutil.
Selah.
This isn’t going to be a series about how to do stuff in Bash or zsh using the Terminal application. There are lots of introductions out on the Intertubes about shell programming and how one can use it to automate for fun and profit. This series is going to point out things about using the command line in macOS that you don’t often hear about.
Like what things? Many of the tools you use to run macOS have lesser-known command-line interfaces. Classic examples are apps like Disk Utility and Software Update. Both have command-line interfaces that allow you to do in single commands what takes multiple clicks in the GUI. That’s what we’re going to examine in this series of posts.
But there are some things available to you in the Terminal app, and with Bash and/or zsh, that you can use to make your life easier. Terminal supports drag and drop: dragging a folder icon from Finder to Terminal’s Dock icon opens a new window in the app and changes the current working folder to that folder. Dragging files onto a Terminal window inserts their paths, separated by spaces.
A really useful thing is the open command. Issuing the command by itself in a shell will open the current working folder in a Finder window. You can specify a file name as an option:
open ~/Library/Preferences
open ../..
open /etc
This provides a quick way to get to hidden folders when you need to do something “admin”-like on your machine.
You can open a specific file, which will use the current association for that file type to open the file, or use the -a option to specify the app to use to open the file. You have the -e or -t options to open a file using TextEdit or your favorite editor.
Really useful is the -f option, which allows you to pipe text into the open command. This lets you use open within shell pipelines, up to and including getting output from a command into a text editor.
Lots of things you can do here; Terminal’s linkages into Finder and the open command give you a way to connect the things you do in the Terminal with the windowing system and vice-versa. So go grab a good tutorial on zsh and enjoy!
Selah.