“Making History: Applications of Digitization and Materialization Projects in Repositories” – Chapter 5

Chapter 5: Digitization and Materialization

In this chapter, I will discuss the process of digitizing and materializing the meat grinder. It is not my intent to prescribe a particular technical method for this process. The hardware and software I have chosen (discussed in more detail in Appendix B) are a means to an end and the passage of a year or two can alter the tools available. However, technological issues were factors in my decision making process, and my interaction with technology is properly considered as part of my analysis. It is for that reason that this chapter takes the form of a reflexive project narrative.

Digitization

Photography

I spent a portion of one day photographing the meat grinder. My primary goal was to create a set of photographs suitable for building a 3D model. My secondary goal was to compare the usefulness of the different photo sets. My third goal was to analyze how the process affected my engagement with the meat grinder. In short, I wanted to assess the feasibility of such a project and speculate upon its value as an intellectual exercise.

I shot using three devices in two locations. The technical specifications of my smartphone, Nikon, and iPad are discussed in Appendix B, but suffice it to say that they are unexceptional digital photography options. The locations were Germantown Historical Society spaces which the staff was kind enough to offer for my purposes. They were not optimized for artifact photography. The ad hoc nature of the locations was, from my perspective, a benefit. I have no doubt that a professional photographer, with high-end equipment and control over lighting and other environmental factors, could produce a fantastic set of images. I wanted to see if I, working in real-world conditions and using the equipment I carry in my pocket, could produce an adequate set of images.

In order to build a 3D model, I needed a set of photographs documenting the meat grinder from all angles (except the bottom, “bottom” being a relative term defined with an eye toward the technical requirements of printing). Online resources and publications such as MAKE offer instructions and advice for photographers. The object should remain stationary, with the photographer circling it to capture a series of angles. Deep shadows and highly reflective surfaces are to be avoided, and natural lighting is preferred. Software stitches the photographs together by finding common reference points; the basic model can then be modified manually to compensate for less-than-perfect rendering. Technical and compositional expertise are not required.
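As an illustration of what “finding common reference points” involves, the sketch below uses the open-source OpenCV library to detect and match feature points between two photographs taken from adjacent angles. It is a minimal, hypothetical example (the file names are placeholders), not a description of 123D Catch’s internal pipeline; photogrammetry software performs a far more sophisticated version of this matching before triangulating a 3D point cloud.

```python
# A minimal sketch of feature matching between two photographs of the same object,
# using OpenCV's ORB detector. File names are placeholders.
import cv2

img_a = cv2.imread("grinder_01.jpg", cv2.IMREAD_GRAYSCALE)
img_b = cv2.imread("grinder_02.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)                 # detect up to 2000 keypoints per image
kp_a, desc_a = orb.detectAndCompute(img_a, None)
kp_b, desc_b = orb.detectAndCompute(img_b, None)

# Brute-force matching with Hamming distance, appropriate for ORB's binary descriptors
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(desc_a, desc_b), key=lambda m: m.distance)

print(f"{len(matches)} candidate shared reference points between the two photographs")
```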

My first location was the break room. I visited on a sunny day, and even with the blinds closed as tightly as possible the light from the window was bright. The table was oval. Had it been round, I would have had a convenient tool for gauging distance to the object, but instead I simply tended to lean in over the longer ends of the table. The tablecloth featured an orange paisley and flower pattern. I was offered a sheet of white paper as background, which I used for one set of photographs before deciding it was unnecessary.

I placed a fabric cutting mat under the meat grinder. Depending upon the angle of the photographs, it provided a largely uniform background of green, contrasting well with the meat grinder. More importantly, it provided a grid. Photographers who wish to create 3D models are advised to create reference points—distinctive backdrops, newspapers, and sticky notes are variously suggested—and the cutting mat served the same function.1 (See Photograph 3.)

Photograph 3. The Germantown Historical Society break room. Except where noted, all photographs of the meat grinder were taken in this location using the Nikon DSLR.

The second location was in the basement. The meat grinder was illuminated by artificial light from above. Once more, I used the cutting mat to provide reference points. The table upon which it sat was not clear of other artifacts, so from some angles I took blind shots. In this case, the iPad seemed a superior choice: I reached around to brace it on the table rather than hold it up to aim at the meat grinder, as I did with the LG and Nikon. I felt fairly confident that some of the break room photographs would work, so I did not make any effort to bring in an uncluttered surface on which to photograph the meat grinder. (See Photograph 4.)

I took photographs with all three devices and took advantage of preview functions to broadly gauge the quality of my images. I sought to minimize reflection and deep shadows (a particular concern given the backlighting in the break room). In general, this meant eschewing the flash. (See Photograph 5.)

Digital photography means the expense of each shot is negligible. The only issue is storage capacity, and I had adequate space on each device, to say nothing of the storage available on my laptop. Because of this, experimentation was not only possible but encouraged. There was only the most marginal of costs—time, on a day I had already planned to spend at GHS—to taking additional sets of photographs from different angles or using different settings. The inexpensive nature of digital photography also encouraged a certain lack of care. I attempted to keep the camera about the same distance away from the meat grinder for all shots, but did not go so far as to measure the distance, much less employ a tripod. I did not trouble myself with research about, for instance, the Nikon’s range of settings. I could use “flower” for one set of photos and “P” for another, compare the previews, and see which set of images the modeling program preferred. Feedback was nearly instantaneous. Though I was not able to tell which set of photos would be most useful, a look at the previewed image allowed me to see which shots were poor quality. I had the option to delete images that were over- or underexposed, out of focus, or that cropped out a portion of the meat grinder; or I could leave the pressing of the delete key until a later stage of the process. With greater constraints on the number of pictures or amount of time, I would have been encouraged to plan more carefully in advance.

Photograph 4. The Germantown Historical Society basement. Photograph taken with the LG smartphone.
Photograph 5. Photograph taken using flash, showing bright light from window and shadows on meat grinder.

The Nikon offered the greatest degree of control over images. I experimented with exposure settings and the use of the flash, though I tried only a few of the permutations available. An early round of images used the close-up point-and-shoot mode (the flower icon); later I switched to programmed auto exposure mode (P). I did not use a Speedlight flash unit or attempt any reflective tricks, but instead only used the built-in flash. In my initial attempts to photograph the meat grinder, I was more or less on its level, which meant a number of shots had the window framing a significant portion of the artifact. Photographs taken at that angle benefited from the flash; the meat grinder was otherwise deeply shadowed. For my second full set of photographs, I aimed downward. This kept the mat as the immediate background of all parts of the meat grinder from all angles. (See Photograph 6.)

Photograph 6. Photograph taken aiming down to use the cutting mat as a backdrop.

When taking photographs with the LG, I continued to aim more or less downward. I did not attempt to ensure that the mat was the exclusive background. In a number of pictures, the tablecloth pattern peeks through the screw’s keyhole. (See Photograph 7.) I did not use the LG’s flash (and in fact, almost never use it: in my experience with the smartphone, the risk of overexposure is much greater than the risk of underexposure, to say nothing of the distracting nature of the very bright LED). At the end of my “formal” photo shoot, I took several miscellaneous shots, including close-ups, details, and different angles. In general, I felt less inhibition when taking pictures with the LG. I did not feel as though I was somehow violating the cohesion of my photo set by taking additional images. Despite the fact that Nikon pictures had the same cost as LG pictures—functionally nil—the Nikon strikes me as heftier. There is a large lens and user-adjustable settings that go well beyond “swap camera,” zoom, and the rest of the LG’s repertoire. The Nikon is literally weightier than my smartphone, and has the physical silhouette I associate with a “camera.” Pictures taken on a phone automatically feel less formal, more disposable, and perhaps more appropriate for experimentation. My expectations are also lower. Despite the fact that I rarely use the LG to place a phone call, I still think of the device as intended primarily for telephonic and data communication. Its camera and video functions are a frequently-used bonus.2

Photograph 7. Photograph taken aiming down but with the tablecloth visible as a backdrop. Photograph taken with the LG smartphone.

I primarily use the iPad for streaming video content. I find aspects of the interface annoying, and have not had the occasion to use it extensively enough to overcome my initial frustrations. Physically, it is even less obviously a “camera” than the LG. Despite my comparative lack of experience with it, the iPad did feel like it had some advantages. Though it was the most unwieldy device when held aloft, I was able to rest it on the table and slide it around the meat grinder in a circular pattern. As a result, I felt as though I had more control over its position. The table provided some control over the z axis, and the sliding circle may also have compensated for the oval shape of the break room table and certainly made it easier to navigate the cluttered basement table. This control was partially an illusion, since I was still holding the device with my hand rather than a tripod, but having the table as an intermediary made me feel more secure, or at least less conscious of microshaking. Even if digital cameras can compensate for such motions, functioning as a sort of digital tripod, I am still conscious of their existence. (See Photograph 8.)

Ultimately, I chose to use a set of photos taken with the LG, as the initial model seemed to require less manual correction. One of the Nikon photo sets produced results that were nearly as good. I was therefore satisfied that, with minimal experience and relatively inexpensive, unspecialized equipment, a repository could undertake this kind of project.

Photograph 8. Meat grinder in the Germantown Historical Society basement. Photograph taken with the iPad.

The positioning of the meat grinder had technical and material culture implications. Printed objects require a flat base. The meat grinder lacks this feature: it was designed to be affixed to a counter using its screw. In a certain sense, the meat grinder is not quite a complete object without that counter. As I speculated on the possibility of materializing an absent meat grinder in Chapter 3, it is equally appropriate to consider the absent counter. The screw is a reminder of context lost in the transition from the original environment and into the repository. On a practical level, the meat grinder must stand (or lie) alone to be photographed, and exact replication of some surface area must be sacrificed for the printer’s requirements. I chose to place the meat grinder upside down, standing on the hopper, with the handle resting on the mat. This provided stability and minimal loss of external details. The interior, functional components were not going to be replicated in any case. This model was to fall firmly into the first approach (as described in Chapter 3) and treat the meat grinder as a static object. It is photographed and digitized as though it is a sculpture, so that it may be materialized as a sculpture.

The process of placement led to additional observations, retrospectively obvious things which I failed to note during my first analysis. Paper appears to be lodged in the mechanical workings of the meat grinder. (See Photograph 9.) The repository wrote the accession number on the object in ink. (See Photograph 10.) I honestly cannot remember if I failed to notice this when I first inspected the meat grinder, or whether this sort of curatorial notation has become transparent to me, after encountering and making a number of lightly-penciled notes on archived manuscripts.

These observations were encouraged by the digital photography session. After taking the photographs needed to build a model, I wished to take some from additional angles. The presence of the camera, and the aforementioned negligible additional expense (in terms of time, effort, and money), was an invitation to experiment. One piece of technology facilitated interaction with another object.

Photograph 9. Meat grinder, paper visible inside.
Photograph 10. Meat grinder, accession number visible.

Though I had already decided that I would adopt my first approach, and treat the meat grinder as a static object, I made small forays into approaching it as a mechanical object. I photographed the meat grinder from the angle I designated as the base, capturing an image of a portion of the auger. (See Photograph 11.) With curatorial permission, I removed the screw from the meat grinder. (See Photograph 12.) The artifact is in good condition—despite visible rust, the screw and handle function—so this partial disassembly was not particularly risky. Nor did it rise to the level of complete disassembly and reverse engineering suggested in my third approach in Chapter 3 above. But the removal of the screw, and the photography of two separate pieces of the artifact, nonetheless seemed an interesting exercise.

Photograph 11. Meat grinder, view from below showing auger.
Photograph 12. Meat grinder without screw, paper visible inside.

Turning the screw for removal and, later, replacement, is the only time I have interacted with the meat grinder in anything close to the way its creators intended. The action is mundane; it is only upon reflection that I can find any significance, and this is itself perhaps strange. I cannot remember the last time I encountered a keyhole screw, though I am sure I must have done. I have tightened and loosened a number of screws in my life—but almost always with the use of a screwdriver. That hand tool is, in many ways, transparent. It mediates interaction with screws, but I do not typically think of myself as using a screwdriver so much as screwing a thing in place. Finding a screwdriver is a task that directly involves the tool: establishing whether I need a flat or Phillips head, lamenting that tools never seem to find their way back into the toolbox or pegboard. Repeated use of a screwdriver, as when I installed angle brackets for hanging bookshelves, can leave me with sore, reddened palms. That physical discomfort, rather than the time expended on the manual task, sometimes prompts me to consider using a power drill. The occasional nature of my home improvement tasks and a certain wariness around power tools are, in combination, usually sufficient motivation to send me back to the manual screwdriver.3 It is often a completely unconscious decision.

The screw is a particularly interesting object from the perspective of 3D modeling and printing. Thingiverse hosts a number of printable screw patterns, some customized (like the fruit screw) and others pointedly standard.4 Screws are transparent in their own way. They are meant to hold things in place. Their absence can cause a noticeable problem, as can incorrect sizing or anchoring, but once they have been properly installed they are largely invisible. The printing of screws, and other components for DIY projects, is inherently non-sexy. It is only the process of production, and the excited discourse that often surrounds maker efforts in general and 3D printing in particular, which elevates the creation of hardware to an occasionally newsworthy hobby. Screw technology has been modestly refined since the 1920s: the aforementioned Phillips head was patented and mass produced in the 1930s and its comparative advantages discussed in Popular Science.5 The form and function of the meat grinder’s screw are nonetheless quite familiar and feel decidedly functional rather than historically significant.

When taking the photographs of the disassembled meat grinder, I was careful to place both components alongside the cutting mat’s ruler. I was somewhat surprised to find that this method of recording measurements did not feel as though I was taking measurements. Had I used a pair of calipers to measure the length of the screw—or even written down measurements taken using the mat’s ruler—those numbers would somehow seem more authentic than those derived from looking at the photograph. The reference image contained the information I could have recorded—and in fact, the photograph serves as a check against incorrectly recorded data—but consulting an image to record the data feels like an additional step and an opportunity to introduce errors. (See Photographs 13 and 14.)
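For what it is worth, deriving a number from the reference photograph is itself only a small piece of arithmetic. The sketch below shows the kind of conversion involved, with entirely hypothetical pixel counts and the assumption that the cutting mat is ruled in one-inch squares; perspective distortion would introduce error that calipers would not.

```python
# Hypothetical example of reading a measurement off a reference photograph.
# The pixel counts and the one-inch grid are placeholders, not recorded data.
grid_square_px = 184      # pixels spanning one (assumed) 1-inch square of the cutting mat
screw_length_px = 612     # pixels spanning the screw in the same photograph

screw_length_in = screw_length_px / grid_square_px
print(f"screw length is roughly {screw_length_in:.2f} inches")
```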

Photograph 13. Meat grinder screw, alongside ruler.
Photograph 14. Meat grinder, alongside ruler.

My photography session was thus successful in ways beyond my initial goals. I was able to take photographs meeting the requirements for my modeling program. A variety of sets, taken with different equipment, allowed me to make broad comparisons and conclude that inexpensive and unspecialized equipment was adequate for a project of this type. I did become intellectually engaged with the meat grinder beyond the initial object analysis described in Chapter 4. But beyond that, I became engaged with other pieces of material culture: not merely the equipment I used in my photo shoot, but absent objects associated with the meat grinder and a selection of hardware and tools brought to mind by a screw. The creation of digital objects served as a useful catalyst for considering material objects.

3D Modeling

My initial aim was to use a single program to build and refine my model. As I progressed, it became clear that my software choice required supplementation (or more skill than I possessed), so in the end I used two programs: 123D Catch and Meshmixer. These both have the benefit of being free software with modest system requirements and sufficient documentation (official or user-generated) to learn quickly.

When using 123D Catch, I was at the mercy of connectivity. On one particular weekend, the online version consistently hung without completing the creation of a project, and the few projects that were successfully created took hours to complete. The PC version was unable to launch—although it is a desktop application, it still requires a connection to Autodesk’s servers. Support forums were filled with complaints and the occasional individual pointing out that this sort of problem is par for the course when using the cloud. The latter comments are decidedly unhelpful and unwelcome to someone in the middle of such a problem, but before embarking upon a project it is wise to consider potential failures. The cloud is cheap, often convenient, and requires no institutional equipment or personnel overhead, but the advantages of outsourced technological solutions are countered by a lack of control over connectivity, specifications, security, and other factors. For the type of projects I propose—those which are neither mission-critical nor sensitive in nature—the trade-off is worthwhile. The facilitator is, however, advised to have a backup plan in the case of technical problems. A session originally planned as model generation could perhaps be converted into historical research, or an investigation of the technical issues.

After that particular weekend, the cloud proved cooperative. Projects loaded quickly and the software was responsive. I generated models using various photosets and ultimately selected one created from a set of smartphone pictures, based on a guess of which model would require the least editing. I then proceeded to experiment with the software.

I tend to learn new programs by using them. I drew a bit on prior experience with such programs as Photoshop and, when necessary, consulted the video tutorials on the Web site or searched the Web for answers. 123D Catch is reasonably well documented. Meshmixer, infamously, is not. I found it a less intuitive program as well. I almost immediately ignored what official documentation existed and instead relied upon tutorials and forum responses posted by users. One could optimistically point to this as an example of the power of crowdsourcing, collaboration, and shared authority. More pessimistically, one could wonder if the lack of documentation bodes ill for support of the Meshmixer product.

A project facilitator would be advised to emphasize the former and turn a bug into a feature. Using Meshmixer encourages the user to explore others’ work, experience, and expertise. This is valuable for three reasons. Learning-by-doing has the sort of pedagogical value discussed by Ratto (see Chapter 2). The research and collaboration skills that can be gained are useful in the academy, many workplaces, and life in general. And finally, it is this sort of interdisciplinary collaboration, crossing institutional boundaries and reaching into the institutionally unaffiliated software user base, that provides a microcosm for the practice and potential of public history.

123D Catch

I did several rounds of edits in 123D Catch, occasionally abandoning one model and starting over. Later iterations undertaken with more experience were accomplished more quickly and the results more satisfying. My primary concerns were cropping unnecessary material (primarily the cutting mat and tablecloth) and filling holes in the model. (See Figure 1.) To accomplish both tasks I relied upon cues from the physical object, drawn from my human interpretation of the photographs and my independent memory of the object they depicted. Color was a valuable cue, but not infallible: around certain borders, the model melded the form and color of the meat grinder and its environment. (See Figures 2 and 3.)

Figure 1. A model of the meat grinder and its environs, generated by 123D Catch, with areas selected for deletion.
Figure 2. Meat grinder model with most excess material deleted, but some remaining interstitial cutting mat-colored material.
Figure 3. Meat grinder model, with mesh visible.

The mesh is only editable from certain angles, and it is not always immediately apparent when one is viewing a section from the wrong side (looking through a hole into the interior of the object). (See Figure 4.) This means that rotation is necessary to do clean-up work; it also means it is difficult to inadvertently punch holes in the model. Becoming accustomed to the selection mechanism was unexpectedly tricky. It is the sort of “brush” arrangement I have encountered in various other programs. The tool was quick to void a selection whenever the center of the brush passed over empty space, even if the edges of the brush highlighted part of the model. This behavior encouraged the use of smaller brushes, selecting smaller sections of mesh to put less work at risk, and more manipulation of the model’s angle and the zoom—in short, more care.

Figure 4. View of holes in the mesh, internal and external angles.

Some holes could be automatically detected and filled (“healed”). The manual fill tool is accessed by toggling the delete tool (an eraser icon), which struck me as counterintuitive. Certainly, creation and destruction are reasonable binary opposites. But I did not really think of my work as creation; I was tweaking. Substantial work generating the model had been performed using a black box—I fed in one type of input (photographs), and without seeing the mechanism at work was presented with very different output (3D model)—and I had an extant object as a reference. The word “healed,” used in 123D Catch, also implies a return to a Platonic ideal (or at least an earlier state). Deleting jagged edges, and then filling open spaces, seemed parallel, related tasks. But the toggling between the two tools emphasized that the program neither knows nor cares about the nature of the model. It is sophisticated enough to build a model based upon photographs, but not sophisticated enough to ignore an orange paisley tablecloth. Once generated, the 3D model takes on a life separate from the photographs and the object depicted. My manipulations were no different than those of a person creating a completely original object freehand, without references to photographs or a physical object.
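For repositories that prefer scripts to GUI clean-up, much of what I did by hand in 123D Catch has a rough scripted analogue. The sketch below is a minimal, hypothetical example using the open-source trimesh library: it discards stray shells of background geometry (the digital equivalent of deleting tablecloth and cutting-mat fragments) and attempts to close simple holes. It is not the procedure 123D Catch performs, the file names are placeholders, and large gaps would still require the kind of judgment described above.

```python
# Rough scripted analogue of cropping excess material and filling holes.
# Uses the open-source trimesh library; file names are placeholders.
import trimesh

mesh = trimesh.load("grinder_raw.obj", force="mesh")   # model exported from the photogrammetry step

# The meat grinder should be the largest connected piece of geometry;
# smaller disconnected shells are leftover background material.
pieces = mesh.split(only_watertight=False)
grinder = max(pieces, key=lambda piece: len(piece.faces))

trimesh.repair.fill_holes(grinder)                     # close simple holes in the surface
print("watertight:", grinder.is_watertight)            # a watertight mesh is generally printable

grinder.export("grinder_cleaned.stl")
```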

Meshmixer

I found 123D Catch an unsatisfying solution for filling some of the larger holes in my model, so I abandoned my original plan to confine myself to a single piece of software. I proceeded to experiment with Meshmixer, with the aforementioned poor documentation and counter-intuitive interface. After consulting user-generated tutorials I rotated the object and did a smooth autofill of the remaining holes.6 In subsequent edits I experimented with resizing the object along different axes. (See Figure 5.)

Figure 5. Meshmixer model displaying model measurements along various axes.

I also performed some unintentional transformations along the way—this was another case where the inexpensive, iterative nature of digital work allowed experimentation and largely eliminated the frustration of lost work. I was ultimately too careless about the autofill—it was effective in several cases, but also responsible for creating (or at least exacerbating) the “goiter” discussed in my analysis of the printed object (see Chapter 6). In general, I treated Meshmixer as a set of automated functions and 123D Catch as a means of making manual changes. This was partially due to my comfort level and willingness to engage with their respective interfaces, and partially due to their positions in my workflow. Tinkering happens early, but I treated the later Meshmixer stages as more akin to exporting files—to the model’s detriment.

Meshmixer offers several tools that analyze a model’s suitability for materialization. The stability analysis provides a preview of how the object will sit when subjected to the forces of gravity, and the strength analysis provides warnings about weak parts of the design. (See Figures 6 and 7, respectively.) Neither of these told me anything I did not know or guess, but they are still useful visualizations of the way the eventual physical object will interact with physical laws. They also serve as a reminder that the expected end point of the exercise is a printed object.
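The logic behind the stability preview is simple enough to approximate by hand. The sketch below is a crude, hypothetical version of that check using trimesh: it asks whether the model’s center of mass (assuming uniform density) sits above the footprint of the points it rests on. Meshmixer’s own analysis is more sophisticated; the file name, the millimeter units, and the tolerance are assumptions.

```python
# Back-of-the-envelope stability check: does the center of mass sit over the base?
# Assumes uniform density and millimeter units; the 1 mm tolerance is arbitrary.
import trimesh

mesh = trimesh.load("grinder_cleaned.stl")
com = mesh.center_mass                                 # center of mass, assuming uniform density

# Treat every vertex within 1 mm of the lowest point as part of the resting base.
z_min = mesh.vertices[:, 2].min()
base = mesh.vertices[mesh.vertices[:, 2] < z_min + 1.0]

# Crudely "stable" if the center of mass projects inside the bounding rectangle
# of the base contact points (a convex hull test would be more accurate).
stable = (base[:, 0].min() <= com[0] <= base[:, 0].max()
          and base[:, 1].min() <= com[1] <= base[:, 1].max())
print("center of mass over base:", stable)
```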

Figure 6. Stability analysis of model.
Figure 7. Strength analysis of model.

The most useful tool was the option to provide supports. These supports are added structural components which are ultimately intended to aid in the printing process and be removed thereafter. The arcing handle was, unsurprisingly, considered in need of support. (See Figure 8.) Meshmixer also includes an option to explore an optimized orientation, which in this case called for printing the meat grinder on its side, using rather a lot of supports. (See Figure 9.) From my lay perspective, this appeared less than optimal and I decided to take my chances with the original orientation. I ultimately opted to print without using the supports, but they provided another reminder that the 3D model was intended as an intermediate step in the process of creation rather than the end point.

Figure 8. Meshmixer model with proposed supports to the handle.
Figure 9. Meshmixer’s optimized orientation and support of the model.

The supports served me well as a purely digital tool. Generating supports also proved to be a quick way to confirm the alignment of the model. They were a shortcut, allowing me to work with more confidence in a program that was not terribly intuitive or familiar. In 123D Catch, I had performed a plane cut: excising material from the bottom of the model to provide a flat base. That was evidently less than effective, because Meshmixer initially supported the entire bottom of the model on tiny legs. I performed another plane cut in Meshmixer, after consulting a forum post for guidance, and this time the procedure proved effective.
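The plane cut, too, has a scripted counterpart. The sketch below shows roughly what the operation does, using trimesh’s slicing function rather than Meshmixer itself: everything below a chosen height is discarded and the cut is capped so the model keeps a closed, flat base. The file names and the cut height are placeholders that would need adjusting to the actual model.

```python
# Scripted analogue of a plane cut: remove everything below a chosen height
# and cap the cut so the model gains a flat, closed base. Values are placeholders.
import trimesh
from trimesh.intersections import slice_mesh_plane

mesh = trimesh.load("grinder_cleaned.stl")
z_cut = mesh.bounds[0][2] + 2.0                        # cut 2 mm above the lowest point (placeholder)

flat_based = slice_mesh_plane(
    mesh,
    plane_normal=[0, 0, 1],                            # keep the side the normal points toward (upward)
    plane_origin=[0, 0, z_cut],
    cap=True,                                          # fill the cut face so the mesh stays closed
)
flat_based.export("grinder_flat_base.stl")
```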

Materialization

Printing

I chose to use Shapeways as a printing service. When I uploaded my life-sized 3D model, I was a little surprised to see a price of $114.65. For all my interest in material culture, the physical implications of working in three dimensions are not second nature when embarking upon what in many ways felt like an art project. Dimensions were merely numbers to the modeling software, and most of my non-virtual artistic endeavors have been firmly two-dimensional (charcoal or pen-and-ink drawings), in which materials are an overhead cost rather than a strongly variable one. Small=cheap had been part of my calculus when selecting an object, but had not been such an overriding concern that I felt the need to calculate volume earlier in the process.

Instead of printing the object at full size, I opted to scale it down. I chose a height of 2″ somewhat randomly (it’s a nice, round number) and, upon uploading the new model to Shapeways, was given a quote of $7.66. After factoring in shipping costs, the price of the printed object would be roughly equivalent to the market value of the meat grinder. Had I wished to spend more time scaling to a wider variety of sizes, I could have hit the market value more precisely, but the value of that exercise is so negligible that I cannot really count it a missed opportunity.
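The steepness of the price drop is mostly a matter of cubic scaling: material volume, which drives much of the cost, shrinks with the cube of the linear scale factor. The sketch below illustrates the arithmetic with a hypothetical full-size height; Shapeways also charges per-part handling fees, so the actual quotes do not track volume exactly.

```python
# Cubic scaling illustration. The original height is a placeholder, not a measurement.
original_height_in = 10.0          # hypothetical full-size height in inches
target_height_in = 2.0

scale = target_height_in / original_height_in
volume_ratio = scale ** 3          # volume, and roughly the material cost, shrinks cubically

print(f"linear scale factor: {scale:.2f}")
print(f"material volume reduced to {volume_ratio:.1%} of the original")
```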

This price was much more acceptable, especially because this initial print was a trial run. I suspected my model was printable, but was not entirely confident. I was printing it upright (as opposed to the “optimized” orientation calculated in Meshmixer) and without supports. In theory, Shapeways reviews the printability of models, but I did not discount the possibility that I would receive a semi-differentiated lump or an object so fragile that the handle might snap the first time I touched it. Even a failure would be useful for purposes of analysis, but I was hoping for a print which was successful on its face: one that materialized a 3D model without falling to pieces.

Because I used a printing service, the materialization process was neither immediate nor interactive. I suspect my reaction to the fabricated object would have been different if I had watched a printer extrude layers of plastic, and different still if I were responsible for operating or building that printer. Instead, my experience with Shapeways, like the initial uploading of the photographs, was something of a black box: I provided input and received a transformed output.


  1. See for example “Learn how to use 123d Catch,” Autodesk 123D Web site, http://www.123dapp.com/howto/catch (accessed 6 May 2014). 
  2. A clickbait review of the Moto X, written in verse, expresses my feelings about the LG: “The Moto X has a camera./The pictures look fine./Not like a big camera./But good.” John Herrman, “The Amazing True Story Of The Moto X,” BuzzFeed Web site, August 1, 2013, http://www.buzzfeed.com/jwherrman/the-amazing-true-story-of-the-moto-x (accessed 9 May 2014). 
  3. I can trace my skittishness towards power tools directly to the elevator scene in Godzilla 1985, a point which is relevant because I suspect it makes me more likely to default to manual tools than might otherwise be expected, and because it illustrates the unexpected ways in which individuals can internalize the media they consume. 
  4. See for example Henrik Larsen, MScrew generator, Thingiverse, March 2, 2013, http://www.thingiverse.com/thing:56492 (accessed 15 February 2014); pek4test, Screw Test Set, Thingiverse, February 21, 2013, http://www.thingiverse.com/thing:52656 (accessed 15 February 2014); Marc Raiser, Fruit Screw, Thingiverse, August 8, 2013, http://www.thingiverse.com/thing:129874 (accessed 15 February 2014). 
  5. “Cross-shaped slots help guide screws,” Popular Science Vol. 128, No. 1 (January 1936), 38, http://books.google.com/books?id=eyYDAAAAMBAJ (accessed 3 May 2014); Timeline, Phillips Screw Company Web site, http://www.phillips-screw.com/timeline.php (accessed 3 May 2014). 
  6. Simon J. Oliver’s blog entry provided a good starting point. “Meshmixer 2.0: Best Newcomer in a Supporting Role?” Extrudable.Me, December 28, 2013, http://www.extrudable.me/2013/12/28/meshmixer-2-0-best-newcomer-in-a-supporting-role/ (accessed 11 July 2014). 