Via an O’Reilly Radar / Four short links post (via my RSS reader, obvs…), I learn about the Smithsonian Open Access site (and from that I remember I used to love the whole GLAM / open api thang. Why did I ever stop playing around with that stuff?)
One area of the site provides a view over datasets (lots of weather/meteorology data?!); another provides access to 3D models (though no models of skeleton clocks that I could see, as yet?!).
The 3D model viewer — Voyager — is open source (smithsonian/dpo-voyager) and available as a standalone or embedded web component.
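As a rough sketch of what embedding might look like (untested on my part, and the script URL, element name and attribute values here are all assumptions based on a skim of the Voyager docs — check smithsonian/dpo-voyager for the actual recipe):

```html
<!-- Hypothetical embed sketch: load the Voyager Explorer web component
     and point it at a scene document; paths/attributes are assumed -->
<script src="https://3d-api.si.edu/resources/js/voyager-explorer.min.js"></script>

<voyager-explorer
  root="https://example.org/models/my-model/"
  document="scene.svx.json">
</voyager-explorer>
```

The web component approach is nice for this sort of thing: a single custom element dropped into any HTML page (or, presumably, a VLE/OpenLearn page template), with the viewer logic all packaged behind it.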
There’s also a tool and workflow for creating a “story” around a 3D model that lets you:
- set the pose of the object;
- capture a 2D rendering of the object;
- tweak background settings;
- annotate the model in 3D space;
- associate an HTML article with an object so it can be displayed alongside the object in an integrated view;
- create an interactive tour that provides “an animated walk through a Voyager scene [consisting of] a number of steps”.
The JSON-based SVX document format used by the Smithsonian Voyager “resembles glTF, the standard for serving 3D scenes on the web”.
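From a quick poke at example scene files, an SVX document seems to be a single JSON file indexing scenes, nodes and model derivatives, glTF-style. The following is my guess at the rough shape, not the spec — the key names and values are illustrative only:

```json
{
  "asset": { "type": "application/si-dpo-3d.document+json", "version": "1.0" },
  "scene": 0,
  "scenes": [ { "name": "Scene", "nodes": [0] } ],
  "nodes": [ { "name": "Model", "model": 0 } ],
  "models": [
    {
      "units": "cm",
      "derivatives": [
        { "usage": "Web3D", "quality": "Low", "assets": [ { "uri": "model-low.glb" } ] }
      ]
    }
  ]
}
```

The “derivatives” idea (multiple quality levels of the same model) looks like the interesting bit for delivery: serve a lightweight mesh by default, higher-fidelity ones on demand.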
This might be a really interesting thing to explore in the context of refreshing some OpenLearn materials?
PS by the by, following through on some of the glTF stuff, I come across this gallery of glTF models — Sketchfab — and some models from the University of Exeter: exeterdigitalhumanities. Good to see an HEI getting their warez into public spaces…
That is great. We did something v. early in U116 with a virtual museum cabinet and 360 photos. Lovely stuff at the time. Would be good to update it with this sort of tool.
Happy to bounce around ideas about this… I also think there’s scope for refreshing OpenLearn materials with this sort of thing. I’m slowly pulling tooling together to make direct, rich authoring possible from OpenLearn OU-XML src docs, eg https://blog.ouseful.info/2020/02/28/fragment-hard-to-use-openlearn-ou-xml-to-markdown-tool-if-you-fancy-trying-it/