Following up on the previous post, which demonstrated using ChatGPT (currently a free research preview), I wondered whether we could use the same approach to regenerate an educational diagram taken from a current OU module, working only from its accessible description. This would support an “accessible description first” approach to diagram generation.
For example, consider the following flowchart, lifted without permission from the OU module TM111:
Can we regenerate something like it from the accessible long description given of it?
Produce a mermaid.js script to visualise the following flow chart diagram: This is a flow chart consisting of a series of instructions, state-descriptions or questions arranged in boxes, one under the other and designed to be read from the top downwards and joined by arrows to the next instruction. The first instruction is ‘start’, and then an arrow goes down to ‘station is ready to send’, then to ‘listen to channel’. From here an arrow goes down to a diamond-shaped decision box containing the question ‘Is channel free (carrier sense)?’ From this box there are two arrows depending on whether the answer is ‘no’ or ‘yes’. The ‘no’ arrow goes back up to the preceding instruction, ‘Listen to channel’. The ‘yes’ arrow goes down to the instruction ‘send data and observe channel’. From here an arrow goes down to another diamond-shaped decision box from which ‘no’ and ‘yes’ arrows emerge. The decision box contains ‘Collision?’ The ‘no’ arrow from here goes back up to the preceding instruction, ‘send data and observe channel’. The ‘yes’ arrow goes down to the final instruction ‘wait for random backoff time interval’. The output from this event goes back up to the third box, ‘listen to channel’.
Me to GPT3, cribbing a longdesc from the OU module TM111
To generate a flow chart diagram using mermaid.js that visualizes the process you described, you can use the following script:
graph TD
A[Start] --> B[Station is ready to send]
B --> C[Listen to channel]
C --> D{Is channel free (carrier sense)?}
D -->|No| C
D -->|Yes| E[Send data and observe channel]
E --> F{Collision?}
F -->|No| E
F -->|Yes| G[Wait for random backoff time interval]
G --> C
This script generates a flow chart with the following steps:

1. Start
2. The station is ready to send.
3. Listen to the channel.
4. Check if the channel is free (carrier sense).
   - If the channel is not free, go back to step 3.
   - If the channel is free, proceed to the next step.
5. Send data and observe the channel.
6. Check if there is a collision.
   - If there is no collision, go back to step 5.
   - If there is a collision, proceed to the next step.
7. Wait for a random backoff time interval.
8. Go back to step 3.

To use this script, you will need to include the mermaid.js library in your HTML file and then use the mermaid.init() function to render the diagram. Here is an example:
<html>
  <head>
    <script src="https://unpkg.com/mermaid@8.7.0/dist/mermaid.min.js"></script>
    <script>
      window.onload = function() {
        mermaid.init();
      }
    </script>
  </head>
  <body>
    <div class="mermaid">
      graph TD
        A[Start] --> B[Station is ready to send]
        B --> C[Listen to channel]
        C --> D{Is channel free (carrier sense)?}
        D -->|No| C
        D -->|Yes| E[Send data and observe channel]
        E --> F{Collision?}
        F -->|No| E
        F -->|Yes| G[Wait for random backoff time interval]
        G --> C
    </div>
  </body>
</html>
This script will generate a flow chart diagram that visualizes the process you described. You can customize the diagram by modifying the mermaid script and by using the various options and functions provided by the mermaid.js library.
The generated mermaid.js code actually contains a syntax error, at least for the version of the mermaid.js package that the HTML template code loads: the decision box label cannot contain round brackets. With that fixed, we get the following flowchart:
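One possible fix, offered here as a sketch, is to quote the decision label so that the brackets are treated as plain label text (quoted node labels are supported in more recent mermaid.js releases, though support may vary with older versions, where simply dropping the brackets is the safer option):

```
graph TD
  A[Start] --> B[Station is ready to send]
  B --> C[Listen to channel]
  C --> D{"Is channel free (carrier sense)?"}
  D -->|No| C
  D -->|Yes| E[Send data and observe channel]
  E --> F{Collision?}
  F -->|No| E
  F -->|Yes| G[Wait for random backoff time interval]
  G --> C
```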
Here are the two flow charts, side-by-side, offered without further comment:
The differences are… illuminating… One issue may be that ChatGPT incorrectly generates a mermaid.js description from the text. Another may be that the diagram description is actually ambiguous: should we do an internal round-trip test, whereby an artist is asked to sketch a version of the diagram from the accessible text, just to check that the description does effectively communicate whatever model the author intends it to communicate? If the diagrams differ, then is the description wrong, or is the actual diagram wrong, or at least ambiguous in the way it communicates the message as “intended” by the long description? I assume that ChatGPT is, if anything, overly literal in the way it interprets a text. So might we use ChatGPT as a quality tool to help check our long descriptions?