# In the Wrong Job…?

Longtime readers of this blog will know that I’ve long thought we could do more in the way of writing diagrams as code, to help make courses more maintainable. Part of the benefit comes from being able to make minor edits and reflow a diagram; part comes from developing, over time, a set of reusable patterns, and building on top of (or iterating around) what you’ve already done. Working out which bits of a diagram to parameterise, in order to come up with parameterisable or programmable diagram generators (think things like Blockdiag diagrams), is a skill that is likely to develop over time and provide accelerating returns if you need to generate diagrams of a similar type in future.

Updates to TM351 include a couple of diagram types: entity relationship diagrams of various kinds, and some database transaction diagrams.

I thought there would be easy ways to do this, but not where I looked, so here are some fragments using TikZ (LaTeX)… which is to say, old tech.

Generate arrows of the form:

Not all of these make sense, but that might be useful when creating nonsensical examples.
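For example, here is a minimal standalone sketch (my own reconstruction, not the exact code behind the original figure) that uses the old `arrows` TikZ library to draw a handful of the available arrow tips:

```latex
\documentclass{standalone}
\usepackage{tikz}
\usetikzlibrary{arrows}

\begin{document}
\begin{tikzpicture}
  % One short line per arrow-tip style, stacked vertically,
  % with the style name as a label
  \draw[arrows={-angle 60}]    (0,0)    -- (2,0)    node[anchor=west] {angle 60};
  \draw[arrows={-angle 90}]    (0,-0.5) -- (2,-0.5) node[anchor=west] {angle 90};
  \draw[-latex]                (0,-1)   -- (2,-1)   node[anchor=west] {latex};
  \draw[-stealth]              (0,-1.5) -- (2,-1.5) node[anchor=west] {stealth};
  \draw[arrows={-triangle 45}] (0,-2)   -- (2,-2)   node[anchor=west] {triangle 45};
  % Double-headed variant
  \draw[arrows={angle 60-angle 60}] (0,-2.5) -- (2,-2.5) node[anchor=west] {both ends};
\end{tikzpicture}
\end{document}
```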

Here’s an example diagram (not necessarily a meaningful one):

The LaTeX code is in this gist.

(I’m finding latex4technics a handy online editor for previewing TikZ diagrams.)

With a few more hours, we could pinch ideas from things like https://tex.stackexchange.com/a/133849/151162, or the nicer https://tex.stackexchange.com/a/195694/151162 (cf. https://tex.stackexchange.com/a/367337/151162), and generate a tool for producing nice (maintainable) ERDs from text. (The next step in automation would then be to find a way to lay out the tables automatically. Graphviz does provide a way round things like this, and there are ERD customisations for working with it (e.g. BurntSushi/erd or laowantong/mocodo), but the orthogonal layout seems to have an issue when it comes to handling directed edge directions?)
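As a stepping stone, TikZ’s own `er` library provides entity/relationship/attribute node styles, so a minimal hand-rolled ERD (a sketch of my own, with made-up table and attribute names) is already quite compact:

```latex
\documentclass{standalone}
\usepackage{tikz}
\usetikzlibrary{er,positioning}

\begin{document}
\begin{tikzpicture}[node distance=5em]
  % Two entities joined by a relationship
  \node[entity] (student) {Student};
  \node[relationship, right=of student] (takes) {takes};
  \node[entity, right=of takes] (course) {Course};
  % One attribute on each entity
  \node[attribute, above=of student] (sid)  {id};
  \node[attribute, above=of course]  (code) {code};

  \draw (student) -- (takes) -- (course);
  \draw (student) -- (sid);
  \draw (course) -- (code);
\end{tikzpicture}
\end{document}
```

A text-to-ERD generator would then essentially be a template that emits nodes and edges like these.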

Several transaction diagrams were also provided as sketches, and here’s a first attempt at recreating one of them:

```latex
\documentclass{standalone}
\usepackage{tikz}
\usetikzlibrary{arrows}

%https://tex.stackexchange.com/a/126310/151162
%This makes sure the arrowhead on the bendy line points the right way...
\makeatletter
\def\pgf@plot@curveto@handler@finish{%
\ifpgf@plot@started%
\pgfpathcurvebetweentimecontinue{0}{0.995}{\pgf@plot@curveto@first}{\pgf@plot@curveto@first@support}{\pgf@plot@curveto@second@support}{\pgf@plot@curveto@second}%
\fi%
}
\makeatother

\begin{document}

\begin{tikzpicture}

%Transaction amounts, used as arrow labels
\def\Wa{71.6}
\def\Wb{75}

%Create some vars to allow alignment of the dashed boxes
\def\dashboxx{1.5}
\def\dashboxwidth{5.5}
\def\dashboxheight{2}

\node (origin){};

%One horizontal timeline per party
\draw (0,0) node[anchor=east] {Tamblin} -- ++(10,0) ;
\draw (0,2) node[anchor=east, align=center] {Paxton /\\ Thornton}  -- (10,2);
\draw (0,4) node[anchor=east] {Gibson} -- (10,4);

\draw[dashed](\dashboxx,-1) rectangle ++(\dashboxwidth,\dashboxheight);
\draw[dashed](\dashboxx,3) rectangle ++(\dashboxwidth,\dashboxheight);

\draw[arrows={-angle 60}] (\dashboxx-0.5,2) node[anchor=south east] {\Wa} -- (\dashboxx+0.5,4)  node[anchor=south] {\Wa} ;

\draw[arrows={-angle 60}] (1.2,2) -- (2.2,0)  node[anchor=north] {\Wa} ;
%alternative arrows: eg [-latex,thin]
\draw [arrows={-angle 60}] plot [smooth] coordinates { (1.3,2) (2,1.5) (5,1.5) (6.5,0)} node[anchor=north] {\Wb};

\draw[dashed](2.1,1.9) rectangle ++(3,0.6);
\draw[arrows={-angle 60}] (3,2.3) node[anchor=east] {\Wa} -- (4.5,2.3) node[anchor=west] {\Wb};

\draw[arrows={-angle 60}] (5.5,2) -- (6.5,4)  node[anchor=south] {\Wb} ;

\end{tikzpicture}
\end{document}
```


Here’s how it renders (no, me neither…):

So… the above were generated using code. And everyone should learn to code, apparently. But from what I can tell, academics tend either to hand over a sketch for an artist to draw, or to try to draw the diagram themselves in Word or Excel. (The above diagrams can be rendered as PDF, EPS or SVG (which can in turn be converted to PNG etc.), although I’m not sure how clean the SVG is, e.g. when it comes to importing it into a drawing package.) Also, in my experience, people who teach coding often don’t seem to think in terms of creating or using code themselves, even the scruffy code that many of us write, in the everyday, to actually help get stuff done…

And when it comes to recruitment, we recruit yet more academics to academic posts, with complete disdain for, and disinterest in, practical skills (not right for an ‘academic’ job role). But then, “university” not “polytechnic”, I guess.

We are so missing out on making contributions around how to teach innovatively using emerging tech, and on developing new teaching strategies with it. Which requires the confidence and ability to use it, and to explore ways in which it can be used.

Instead, the best we can hope for is finding a way of co-opting something bought off-the-shelf and making do with it as best we can at the user level, forgetting that much of the off-the-shelf stuff may have been recently developed by small start-ups with little or no academic “technology enhanced learning” expertise. (And that there is a lot of stuff to be learned at the level of deploying new tech in new ways and new combinations.) But maybe we add value by taking off-the-shelf productised tech and demonstrating how to “use it properly”.

(If we’re competing with other institutions to buy the same tech, what additional value do we bring? How to “use it properly” in a distance education setting? Maybe I’m being churlish. Maybe there is real value in us doing that.)

When you work at the UI layer, you’re working at the same level as every other muppet.

It’s like a weird inverse of not-invented-here syndrome: we’re safer buying something in because we have no expertise or capacity to develop it internally. And we’re not interested in trying to develop capacity. (Compare that to when I joined the OU: it was a leading developer of educational software; but capacity and in-house expertise have been cut and lost, year on year, for years now…)

The thing about code is that it lets you build your own tools. The thing about code is that it builds up abstraction levels that let you combine things at each level. The thing about code is that, unlike Lego, where you start with big chunky Duplo blocks, move to standard Lego, then to Lego Technic, code starts small and fiddly and then builds layers on top: C, Python, pandas, PyTorch or TensorFlow, a Docker container running Jupyter notebooks with PyTorch or TensorFlow, that container docker-composed with a database, or a notebook extended to run jobs on a remote cluster.

But then, academics are academics first, not technologists first, or engineers first. The ability to do magic in the real world is of no interest.

I am so fed up.

## Author: Tony Hirst

I'm a lecturer at The Open University, with an interest in #opendata policy and practice, as well as general web tinkering...

## 7 thoughts on “In the Wrong Job…?”

1. Well said – especially the last three lines. Bravo. I’m with you.

2. Long ago, I used to find great joy in using Visio. Besides doing practical job-related diagramming, I also used it at home to map out the tangle that is my home Audio/Visual system. While Visio was fun to use, the code examples above are not “fun” or accessible except to a small minority.

It would be cool if Visio let you “see” the code side-by-side with the rendering, much like web HTML editor pages that have both rendered and raw HTML windows, allowing you to round-trip in either direction. The Vega editor, for example, only goes from code to visualization: https://vega.github.io/editor/#/examples/vega/tree-layout. It would be cool to have Visio-like templates to drop into the visualization and see the companion code. That would let folks quickly experiment with the specs.

3. @Ken I totally agree. “View Source” made the original web. I also refer you to the notion of abstraction layers. We have to start somewhere, but each iteration can help abstract items out and move things up a level in the abstraction stack. E.g.:

first time round, raw code (eg raw lines of code);
second, generalise it and parameterise it a bit (functions);
third, move up to a more general abstraction (maybe a package with higher level methods);
fourth, create a bit of magic to make it easier.
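To make the second step concrete with the TikZ from the post above: the repeated labelled-arrow pattern in the transaction diagram could be pulled out into a macro (the `\transfer` name and argument order here are my own invention):

```latex
\documentclass{standalone}
\usepackage{tikz}
\usetikzlibrary{arrows}

% Hypothetical helper: a labelled transfer arrow between two
% timeline points; #1 start, #2 end, #3 label anchor, #4 label.
\newcommand{\transfer}[4]{%
  \draw[arrows={-angle 60}] #1 -- #2 node[anchor=#3] {#4};%
}

\begin{document}
\begin{tikzpicture}
  \draw (0,0) node[anchor=east] {Tamblin} -- (10,0);
  \draw (0,2) node[anchor=east] {Paxton}  -- (10,2);
  \transfer{(1.2,2)}{(2.2,0)}{north}{71.6}
  \transfer{(5.5,0)}{(6.5,2)}{south}{75}
\end{tikzpicture}
\end{document}
```

The third step would be wrapping a set of such macros into a package; the fourth, maybe a preprocessor that generates the TikZ from a plain-text description.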

The above post is the result of a couple of hours of poking around trying to learn bits of LaTeX. But the next diagram will be easier to draw, and if I need to tweak a diagram, that’s easier too. I can also get different output document formats (SVG, EPS, etc.). A couple of hours spent drawing a Word diagram is time lost.

As for things like Visio: maybe you find it fun. I often find GUI-driven packages frustrating because the UI developer has decided which functions or arguments to expose, and I’m expected to use the tool as they designed it to be used, and for the purposes for which they intended it. Which may not be how I actually want to use it, or what I want to use it for…

I’m also deliberately taking a stance of “text formats are good”. Firstly, because you can then, in principle, have one authoring/editing environment (text based) and a range of custom renderers on the back end to handle rendering, previewing, etc. Secondly, because I know that pisses off people who think that clicking toolbar buttons is the best way of styling a document…

1. @Tony – I too am totally in agreement with you. Even with libraries, I fear that they might prevent me from doing something I have in mind. I went through the exact same 4-step process you described to develop P1Analysis charts (e.g., http://p1analysis.com/2018-fiawec/index.html). Hack, refactor, repeat. These are now reusable libraries used in my chart generator. Tweak, tweak, tweak!

Keep up the posts!

1. @Ken I need to take a day out to properly go through your screenshots (and try to recreate elements of them). I’ve been tinkering w/ rally data (latest iteration: http://wrcaustralia2018.rallydatajunkie.com ), but I have a massive backlog of half-baked code, particularly around telemetry, but also around data2text (I’m trying a new approach using pytracery grammars and pandas dataframes), and then looking at some simple automated analysis / commentary generation (which requires identifying event features and interestingness, and then either communicating them as text or maybe highlighting points on charts with labels describing the related events; but that’s all way in the future atm… First needs to be the basic telemetry reporting and the pytracery mechanic.)

1. I love the automated commentary idea. I’ve thought about that as well, but like you, there’s so much else to do :)
