<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
<title type="text">Expressions of Change</title>
<generator uri="https://github.com/mojombo/jekyll">Jekyll</generator>
<link rel="self" type="application/atom+xml" href="http://www.expressionsofchange.org/feed.xml" />
<link rel="alternate" type="text/html" href="http://www.expressionsofchange.org" />
<updated>2025-01-20T22:33:08+01:00</updated>
<id>http://www.expressionsofchange.org/</id>
<author>
<name>Klaas van Schelven</name>
<uri>http://www.expressionsofchange.org/</uri>
<email>klaas@vanschelven.com</email>
</author>
<entry>
<title type="html"><![CDATA[Homoiconicity revisited]]></title>
<link rel="alternate" type="text/html" href="http://www.expressionsofchange.org/homoiconicity-revisited/" />
<id>http://www.expressionsofchange.org/homoiconicity-revisited</id>
<published>2020-06-02T00:00:00+02:00</published>
<updated>2020-06-02T00:00:00+02:00</updated>
<author>
<name>Klaas van Schelven</name>
<uri>http://www.expressionsofchange.org</uri>
<email>klaas@vanschelven.com</email>
</author>
<content type="html"><p>In an <a href="/dont-say-homoiconic/">earlier article</a>, I concluded that you probably
shouldn’t use the word “homoiconic”: starting with the original definition of
the word, I noted that this definition is problematic for a number of
reasons, and that the best we can say is that there is a degree to which a
language is homoiconic: languages that have a smaller conceptual distance
between their program text and machine operation are more homoiconic and
vice versa.</p>
<p>That article also explored some of the plethora of competing, often mutually
exclusive definitions in active use – a fact that should come as a warning
to anyone whose intention is to clearly communicate their ideas.</p>
<p>It is therefore somewhat disappointing (though not surprising) that when the
<a href="https://news.ycombinator.com/item?id=20657798">article was discussed on Hacker
News</a>, the top voted comment
started like this:</p>
<blockquote>
<p>I’m going to keep using the word ‘homoiconic’ because it is a
useful term.</p>
</blockquote>
<p>True to form, that commenter neither explained why the term would be
useful, nor provided a definition of their own.</p>
<p>Given that people will continue to use the word, what are we to understand if
they do? Let’s revisit the topic, but start at the other end: instead of
taking the official definition as a starting point, let’s say a word means what
the people who use it mean by it. Even if that meaning has little to do with
the original definition, and even if they are not willing to put forward a
definition of their own.</p>
<p>If we don’t want to start with the official definition, we’re faced with a bit
of a challenge though: what are we to take as a starting point for the
discussion? To get around it, we’ll simply start with a definition of our
own, and then match this back to typical examples and counter-examples of
homoiconicity to see how it fits.</p>
<h3 id="homoiconic-a-working-definition">Homoiconic, a working definition</h3>
<p>Homoiconic usually refers to some combination of the following (weights assigned
to the bullets below will depend on the speaker):</p>
<ol>
<li>
<p>Strong language support for simple, composable (i.e. tree-like) data
structures, preferably using a minimal syntax (i.e. literals).</p>
</li>
<li>
<p>The language semantics are directly defined in terms of such data structures
and the programs are formed using instances of these structures.</p>
</li>
<li>
<p>The structure of such data-structures is explicitly reflected in their
visual representation on-screen; it is immediately apparent <em>to humans</em>.</p>
</li>
<li>
<p>High similarity between representations of the program in (some of):</p>
<ul>
<li>the head of the programmer</li>
<li>the visual representation on-screen</li>
<li>the formal semantics of the language</li>
<li>the implementation of the (virtual) machine</li>
</ul>
</li>
</ol>
<p>The benefit of homoiconicity is then: first, the structured manipulation of
programs, possibly by other programs, becomes trivial and natural. Second,
there is little mental effort spent in mapping the program text to the program
structure, and in imagining how the machine operates on said structure.</p>
<h3 id="examples-and-counter-examples">Examples and counter-examples</h3>
<p>Lisps tick all, or most, of the boxes:</p>
<ol>
<li>S-expressions take the role of the simple, composable data structures.</li>
<li>Lisps’ semantics are directly defined in terms of s-expressions and
lisp programs are formed using s-expressions.</li>
<li>S-expressions are trees, and the nesting of items is immediately apparent:
<code class="language-plaintext highlighter-rouge">(</code> denotes the start of a child, and <code class="language-plaintext highlighter-rouge">)</code> its end. Children are nested
inside their parents.</li>
<li>Lisp programmers think in s-expressions, which are always on-screen, and
form the basis of the semantics of the language (always) and the
implementation of the VM (if it is an interpreter).</li>
</ol>
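<p>To make the Lisp bullets concrete, here is a minimal sketch (in Python rather than a Lisp, and not taken from the original article) of what it means for a program to be a simple, composable data structure whose semantics are defined directly in terms of that structure:</p>

```python
# Hypothetical sketch: the s-expression (+ (* 6 9) 12) represented
# as nested Python lists, i.e. "the program is a data structure".
program = ["+", ["*", 6, 9], 12]

def evaluate(expr):
    """A minimal evaluator: the semantics are defined directly
    in terms of the nested-list structure."""
    if not isinstance(expr, list):
        return expr
    op, *args = expr
    values = [evaluate(a) for a in args]
    if op == "+":
        return sum(values)
    if op == "*":
        result = 1
        for v in values:
            result *= v
        return result
    raise ValueError("unknown operator: %r" % op)

# Because the program *is* a data structure, a program can trivially
# rewrite another program: replace the 9 by a 7 via list indexing.
program[1][2] = 7

print(evaluate(program))  # 6 * 7 + 12 = 54
```

<p>The rewrite step illustrates the first claimed benefit of homoiconicity: structured manipulation of programs, possibly by other programs, is just ordinary manipulation of the data structure.</p>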
<p>Other languages tick only some of the boxes, or none of them.</p>
<ul>
<li>
<p>JavaScript has strong language support for composable, tree-like, structures
(bullet 1) and the structure is immediately apparent by looking at the pairs
of <code class="language-plaintext highlighter-rouge">{</code> and <code class="language-plaintext highlighter-rouge">}</code> brackets (bullet 3). However, the language semantics are not
defined in such terms (bullet 2) and a JavaScript program is not provided as
a JavaScript object (or JSON object).</p>
</li>
<li>
<p>For machine code, little mapping between the representations for the human
and the machine is required, for the simple reason that no special representation
for humans exists (bullet 4). However, machine code fails on bullets 1 to 3.
I’d argue that few people that praise the benefits of homoiconicity actually
think of machine code while doing so, though they might admit machine code
fits some definition of homoiconicity when pressed.</p>
</li>
<li>
<p>XSLT is defined in XML, i.e. in composable, tree-like structures, so it gets
a pass on bullets 1, 2 &amp; 3 (with the exception of “simple syntax”). This is
another one for “homoiconic, but not a poster child”.</p>
</li>
<li>
<p>TRAC probably fails on bullets 1, 2 &amp; 3, even though it was the language in
the context of which the term homoiconic was coined. This is simply a
reflection of the fact that the term has evolved, and that modern-day usage
is different from the original meaning.</p>
</li>
<li>
<p>Java has no strong language support for literal representation of simple
composable data in the language (bullet 1) and also fails to meet the
other criteria.</p>
</li>
</ul>
<h3 id="what-lispers-dont-tell-you">What Lispers don’t tell you</h3>
<p>So is the above definition new in any way? I think it is, in the sense that it
spells out some things that might be so obvious to Lispers that they forget to
include them in their definitions. (“What’s water?” says the fish)</p>
<p>Take, for example, the definition on <a href="https://wiki.c2.com/?HomoiconicLanguages">Ward’s Wiki</a>:</p>
<blockquote>
<p>In a homoiconic language, the primary representation of programs is also a
data structure in a primitive type of the language itself.</p>
</blockquote>
<p>A <a href="https://wiki.c2.com/?HomoiconicExampleInJava">common objection against this definition being
meaningful</a> is that this is or
can be the case for most languages: e.g. in Java the primary representation of
programs is as text (strings), and strings are also a primitive type of the
language itself. Such an objection (understandably) glosses over the meaning
implied by “a data structure in”.</p>
<p>For non-Lispers, that phrase might not evoke much, or may perhaps suggest an
implementation detail (since “data structure” is often used to refer to the
implementation of data types, i.e. a string might be implemented as an array,
in which case the array is the underlying data structure).</p>
<p>For Lispers, this part is essential though, because for them it is naturally a
reference to s-expressions. And thus for Lispers it evokes 2 parts of the
definition that I made explicit in the above under bullets 1 and 2, namely:</p>
<ul>
<li>the compositional nature of the data type under consideration</li>
<li>the fact that data and programs can be composed in the same way</li>
</ul>
<p>As a second example, consider the opening line of the current definition on
<a href="https://en.wikipedia.org/wiki/Homoiconicity">Wikipedia</a>:</p>
<blockquote>
<p>A language is homoiconic if a program written in it can be manipulated as
data using the language</p>
</blockquote>
<p>Here, again, the objection can be raised that this is true for all languages.
That is, all languages allow for programs written in them to be manipulated by
them “as data” (i.e. as strings). After all, strings are data.</p>
<p>Well… not for Lispers, to whom “data” will likely evoke something more
structured, such as s-expressions (again). In other words, the key property
here is that the program <em>is</em> a piece of <em>hierarchically structured</em> data
(bullets 1 &amp; 2 in the above).</p>
<p>Finally, let’s consider how Wikipedia continues:</p>
<blockquote>
<p>and thus the program’s internal representation can be inferred just by
reading the program itself.</p>
</blockquote>
<p>Here, the typical counterpoint would be that this can either never be true
(because the inner workings of the machine are hidden from us) or is always
true (if we assume that we are being provided the specification).</p>
<p>Fitting this sentence back to bullets 3 &amp; 4 makes more sense though: the key
point is not that the inner representation <em>can be inferred</em>, but rather that
a “good enough model” is easy enough to imagine while looking at the program
text.</p>
<h3 id="homo-reinterpreted">“Homo” reinterpreted</h3>
<p>If we indeed accept the working definition in the above as what most people
who speak about homoiconicity actually mean, it seems that an interesting
shift has occurred. Take one more look at the original definition:</p>
<blockquote>
<p>Because TRAC procedures and text have the same representation inside and
outside the processor, the term homo-iconic is applicable, from homo meaning
the same, and icon meaning representation.</p>
</blockquote>
<p>Now compare this with the working definition in the above: bullets 1, 2, and 3
are not at all concerned with sameness of internal and external representation.
However, they are concerned with different kinds of sameness: mostly, the fact
that data structures in the code, and the program text, are represented in the
same way.</p>
<h3 id="conclusions">Conclusions</h3>
<p>I’d still argue that homoiconicity is a concept that confuses more than it
clarifies, mostly because of <a href="/dont-say-homoiconic/#alternate-definitions">the many competing
definitions</a>.</p>
<p>Anyone who wants to get a point across is better off simply referring to
more direct properties of a language such as “has a nice literal syntax for
structured data” or “the shape of the AST is immediately apparent from looking
at your screen”.</p>
<p>Still, faced with continued usage, the working definition above might at least
serve as a dictionary for the perplexed.</p>
<p><a href="http://www.expressionsofchange.org/homoiconicity-revisited/">Homoiconicity revisited</a> was originally published by Klaas van Schelven at <a href="http://www.expressionsofchange.org">Expressions of Change</a> on June 02, 2020.</p></content>
</entry>
<entry>
<title type="html"><![CDATA[Animating history: Implementation]]></title>
<link rel="alternate" type="text/html" href="http://www.expressionsofchange.org/animating-implementation/" />
<id>http://www.expressionsofchange.org/animating-implementation</id>
<published>2018-10-16T00:00:00+02:00</published>
<updated>2018-10-16T00:00:00+02:00</updated>
<author>
<name>Klaas van Schelven</name>
<uri>http://www.expressionsofchange.org</uri>
<email>klaas@vanschelven.com</email>
</author>
<content type="html"><h2 id="what-are-history-animations">What are history-animations?</h2>
<p>History-animations build on the following feature (one that already existed):
whenever we select an s-expression in our “tree” (the structural
view on the right hand side of the window) we show the history of that
particular s-expression in the panel on the left. That is, whenever we change the cursor in
the tree, we switch what is shown in the history.</p>
<p>The animation under discussion: any part of history that shows up both before
and after this switch will “float” from its pre-switch position to its
post-switch position in a number of steps. The idea is to make it visually more
clear that there is a relationship between the histories at different levels of
the tree.</p>
<p>More details and examples can be found in a <a href="/animating-history-transitions/">separate
article</a>. In the present article we’ll zoom in
on the implementation.</p>
<p>The current version of the editor supports two styles of rendering histories:
one in which the histories are rendered as s-expressions themselves, as
presented in <a href="/assets/papers/clef-design.pdf">the paper “Clef
Design”</a>; and one in which the effects of each note
are shown in the context of the structure on which it is played (i.e.: more
like a traditional rendering of a <em>diff</em>). Animations of transitions are
implemented for both of these; where the implementations diverge this will be
pointed out below.</p>
<h2 id="identity-of-notes-and-textures">Identity of notes and textures</h2>
<p>The key idea in the animations is to float textures from a pre-switch to a
post-switch location. This hinges on the assumption that we have a shared
identity for the textures pre- and post-switch. E.g. to float some open-bracket
from one location to the next, we need to know which open-bracket we’re talking
about (there are many, and they look very similar).</p>
<p>Note that the particular animation under consideration is the following: when
switching which part of our structural view (the “tree”) is selected, update
the historical view.</p>
<p>Thus, the assumption of shared identity, in this case, is: there is overlap
between the histories of different parts of our tree. For each of the elements
(notes) of the history we can establish an identity, and when viewing a
different history, we can establish whether any two notes across these two
histories are the same one, i.e. share this identity.</p>
<p>The fact that parts of histories are shared across different parts of our
structure is detailed in <a href="/assets/papers/clef-design.pdf">the paper “Clef
Design”</a>.</p>
<p>In terms of the implementation, the solution is to have some addressing scheme
for the textures that is global in the sense that it is shared between the
pre- and post-switch environments. Using this addressing scheme we can identify
textures: same address means same texture.</p>
<p>Such an addressing scheme for textures is obtained in a number of steps.</p>
<h3 id="noteaddress">NoteAddress</h3>
<p>The first step is to annotate each note in the “global history” (the history of
the whole tree) in such a way that we can uniquely identify each note.
<a href="https://github.com/expressionsofchange/nerf1/blob/e7a74705c7de/dsn/s_expr/clef_address.py#L154">Implementation</a>
and
<a href="https://github.com/expressionsofchange/nerf1/blob/e7a74705c7de/widgets/history.py#L161">calling</a>
<a href="https://github.com/expressionsofchange/nerf1/blob/e7a74705c7de/widgets/ic_history.py#L322">locations</a>.</p>
<p>The formalization of the note-address is implemented in the class <a href="https://github.com/expressionsofchange/nerf1/blob/4e519b03dee1d5e166565519b1c910afdce8d19c/dsn/s_expr/note_address.py#L6"><code class="language-plaintext highlighter-rouge">NoteAddress</code></a>.</p>
<p>The intuition here is: when the whole history is written out as an expression,
the address of a particular note is a path through that expression. An example
could be: of the global score, take the 6th item; of that item take the only
child, of that item again take the only child. The 2 main possible parts of such
paths are: the <em>nth</em> item of a Score, and the only child. <a href="https://github.com/expressionsofchange/nerf1/blob/4e519b03dee1d5e166565519b1c910afdce8d19c/doctests/note_address.txt">The
doctests</a>
provide further details.</p>
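<p>The path intuition can be sketched roughly as follows. This is a hypothetical simplification, not the actual NoteAddress code; the names and data shapes here are illustrative only. A path element is either an integer (the nth item of a Score) or the marker "child" (the only child of a note):</p>

```python
# Rough sketch of the path intuition behind NoteAddress; the real
# implementation lives in dsn/s_expr/note_address.py. Histories are
# modelled here as plain dicts purely for illustration.

def resolve(history, path):
    """Follow a path of steps through a nested history expression."""
    node = history
    for step in path:
        if step == "child":
            # a note with a single contained note: take that child
            node = node["child"]
        else:
            # a score: take the item at this index
            node = node["items"][step]
    return node

# Example from the article: of the global score, take the item at
# index 6; of that item take the only child; of that again take the
# only child.
score = {"items": [None] * 6 + [{"child": {"child": "the-note"}}]}
assert resolve(score, [6, "child", "child"]) == "the-note"
```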
<h3 id="push-global-noteaddress-to-the-tree">Push global NoteAddress to the tree</h3>
<p>In the second step, we construct a tree by playing this global history of notes,
annotated with their global address
(<a href="https://github.com/expressionsofchange/nerf1/blob/e7a74705c7de/widgets/history.py#L163">here</a>
and
<a href="https://github.com/expressionsofchange/nerf1/blob/e7a74705c7de/widgets/ic_history.py#L324">here</a>).
We use the regular mechanism of playing a score to get a tree (<a href="https://github.com/expressionsofchange/nerf1/blob/e7a74705c7de/dsn/s_expr/clef_address.py#L167">This
one</a>
– in fact, it’s not 100% identical for implementation reasons, as documented
in the code, but in terms of behavior it is). The only difference is: because the
input Notes have now been annotated with a global address, the scores as
constructed at each sub-expression in the resulting tree now consist of
notes which have a global address. This means that when we fetch the “local
score” (the score to be rendered) we have information about the global address
of each note.</p>
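<p>Conceptually, the annotation step pairs each note with its global address before the score is played, so that any local score assembled during play still carries global addresses. The following is a hypothetical sketch of that idea only; the names are illustrative and the real mechanism is in clef_address.py:</p>

```python
# Hypothetical sketch: wrap each note of the global score with its
# global address, so sub-scores built during play keep those addresses.

class AnnotatedNote:
    def __init__(self, address, note):
        self.address = address  # a global, NoteAddress-like path
        self.note = note

def annotate_score(notes):
    """Pair each top-level note with its index as a global address."""
    return [AnnotatedNote((i,), n) for i, n in enumerate(notes)]

# A "local score" built from a subset of the global score still knows
# the global address of each of its notes:
global_score = annotate_score(["become-list", "insert-x", "insert-y"])
local_score = [an for an in global_score if an.note != "become-list"]
assert [an.address for an in local_score] == [(1,), (2,)]
```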
<h3 id="texture-addresses">Texture-addresses</h3>
<p>Finally, we make sure to keep the annotations around in each step of the
conversion to textures, as well as add conversion-specific information when
needed. The implementations of this final step are unique for each of the two
different styles of rendering.</p>
<h4 id="els18-style-rendering">ELS’18 style rendering</h4>
<p>In the case of rendering in the style of the ELS’18 paper, the tree of notes is
first converted to an s-expr, and these s-expressions are then converted to the
actual textures with locations.</p>
<p>We need step-specific address information for each of these steps. When converting to
an s-expression, we annotate the elements that are specific to the fact that the
note is being rendered as an s-expression (i.e. the fact that the Note’s fields
and its type, when converted to an s-expression, turn into particular
further s-expressions). <a href="https://github.com/expressionsofchange/nerf1/blob/e7a74705c7de/dsn/s_expr/clef_address.py#L31">Let’s consider the case of <code class="language-plaintext highlighter-rouge">become-atom</code> as an
example</a>:
when the note <code class="language-plaintext highlighter-rouge">(become-atom foo)</code> is represented as an s-expression the whole
s-expression is annotated as representing the whole note (by not providing any
further annotation), the atom <code class="language-plaintext highlighter-rouge">become-atom</code> is annotated as being the name of
the note, and the atom <code class="language-plaintext highlighter-rouge">foo</code> is annotated as being the field <code class="language-plaintext highlighter-rouge">atom</code> of that
note.</p>
<p>When converting these s-expressions to textures, similar further annotations
are necessary. For example: a list-expression is rendered as 2 textures, one
for <a href="https://github.com/expressionsofchange/nerf1/blob/e7a74705c7de/widgets/history.py#L348">each</a> <a href="https://github.com/expressionsofchange/nerf1/blob/e7a74705c7de/widgets/history.py#L361">bracket</a>.</p>
<p>A particular property of this style of rendering histories is that the
recursive nature of the histories is preserved in the rendering. That is: a note
may contain further notes; when the note is rendered, the notes it contains are
also rendered.</p>
<p>With regards to the assignment of addresses to textures, the implication is
straightforward: each rendered note is assigned the address of that
particular note.</p>
<p>An example is drawn below: if the chord below is the item at position 1 in some
other history, the children of that chord are at some subpath.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>(chord ((insert 0 (become-list)) (extend 0 (insert 0 (become-list)))))
^ ^ ^
| | |
(@1) (@1, @0) (@1, @1)
</code></pre></div></div>
<p>The effect of this approach on the animation is precisely as intended: when
switching from a larger context to a smaller one, the “surrounding” notes that
are not applicable in the smaller context float out of view; but those that are
applicable in both views (the inner ones), float from their old position on the
screen to the new one. (The reverse applies when switching from a smaller
context to one surrounding it.)</p>
<p><img src="/images/movies/transitions-clef-design.gif" alt="Surrounding context disappears." /></p>
<h4 id="ic-history">IC History</h4>
<p>Another way of rendering notes is by rendering them “in their structural
context”. That is: by showing their effect on the existing structure on which
they are being played. This is how <em>diffs</em> are traditionally displayed.</p>
<p>In this view, the recursive nature of notes is not made explicit. For each note
in some list of notes (for example: those that make up a single score), the
effects of each individual note on a structure are grouped together. The fact that
each such note may itself be composed of any number of other notes is left implicit.</p>
<p>Thus, when switching from a larger historical context to a smaller one, it is
not the case that some surrounding <em>notes</em> disappear, while notes contained by
them remain in view.</p>
<p>There simply is no direct rendering of notes in this view: everything that is
rendered is a structure and some effects on that structure. This means that any
addressing must also apply to such structures. And that any floating of related
elements is always floating of some structural element.</p>
<p>It is at this structural level that a similar effect as in the above, of
surrounding context disappearing, can be seen: when switching to a smaller
structural context, less surrounding structure is shown in the in-context
rendering of history, and vice versa for switching to a larger, surrounding,
context:</p>
<p><img src="/images/movies/transition-9.gif" alt="Transition of '9'." /></p>
<p>The implementation details are in the implementing class,
<a href="https://github.com/expressionsofchange/nerf1/blob/4e519b03dee1/dsn/s_expr/in_context_display.py#L45"><code class="language-plaintext highlighter-rouge">ICHAddress</code></a>.</p>
<p>The mixing of ‘construction’ and ‘structure’ is reflected in the address of the
rendered elements; each rendered element is denoted first by the note which it
represents (in terms of a <code class="language-plaintext highlighter-rouge">NoteAddress</code>), and second by an address (<code class="language-plaintext highlighter-rouge">t_address</code>,
for stability over time) in the tree. (Further steps in the rendering chain add
further details, i.e. <code class="language-plaintext highlighter-rouge">icd_specific</code> and <code class="language-plaintext highlighter-rouge">render_specific</code>.)</p>
<p>One final caveat: the <code class="language-plaintext highlighter-rouge">NoteAddress</code> part of this <code class="language-plaintext highlighter-rouge">ICHAddress</code> is
always the address of the <a href="https://github.com/expressionsofchange/nerf1/blob/4e519b03dee1/widgets/ic_history.py#L91">deepest</a> (leaf-most) possible note. For example, when
rendering the note <code class="language-plaintext highlighter-rouge">(extend 0 (insert 0 (become-list)))</code>, the address of
<code class="language-plaintext highlighter-rouge">(become-list)</code> is used in the <code class="language-plaintext highlighter-rouge">ICHAddress</code>. This ensures we have a singular
identity across context-switches. (It is only this deepest NoteAddress that can
be relied on to always be available.)</p>
<h3 id="the-animation">The animation</h3>
<p>The actual animation is rather straightforward: do a linear interpolation
from source to target for the attributes (x, y, alpha).</p>
<p>We set a clock at an interval (I’ve set 1/60, but I’m not actually getting this
<em>at all</em> on my local machine). Kivy will tell you how much time has actually
passed since the last tick. We then calculate the fraction <code class="language-plaintext highlighter-rouge">dt / remaining_time</code>.
This approach is automatically robust to missed frames (i.e. the missed frame
will not be rendered, but the total animation time and the position of the
texture at the next frame are unaffected).</p>
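<p>The interpolation step might look roughly like this. This is a sketch under stated assumptions, not the actual Kivy-based implementation: the attribute names (x, y, alpha) follow the article, while the class and method names are hypothetical. Kivy's Clock supplies the actual elapsed time as the dt argument of each scheduled tick:</p>

```python
# Sketch of dt-based linear interpolation for one floating texture.
# Interpolating by dt / remaining_time (rather than assuming a fixed
# frame length) makes the total animation duration independent of
# missed or late frames.

class FloatingTexture:
    def __init__(self, source, target, duration):
        self.pos = dict(source)      # current {x, y, alpha}
        self.target = dict(target)
        self.remaining = duration    # seconds of animation left

    def tick(self, dt):
        """Advance the animation by dt seconds of real elapsed time."""
        if dt >= self.remaining:
            # animation over: snap exactly to the target position
            self.pos = dict(self.target)
            self.remaining = 0
            return
        fraction = dt / self.remaining
        for attr in ("x", "y", "alpha"):
            self.pos[attr] += fraction * (self.target[attr] - self.pos[attr])
        self.remaining -= dt
```

<p>With Kivy, a method like this would typically be driven by <code class="language-plaintext highlighter-rouge">Clock.schedule_interval(texture.tick, 1 / 60)</code>; the texture arrives at the target after the full duration regardless of how unevenly the ticks actually arrive.</p>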
<p><a href="http://www.expressionsofchange.org/animating-implementation/">Animating history: Implementation</a> was originally published by Klaas van Schelven at <a href="http://www.expressionsofchange.org">Expressions of Change</a> on October 16, 2018.</p></content>
</entry>
<entry>
<title type="html"><![CDATA[Animating history transitions]]></title>
<link rel="alternate" type="text/html" href="http://www.expressionsofchange.org/animating-history-transitions/" />
<id>http://www.expressionsofchange.org/animating-history-transitions</id>
<published>2018-06-01T00:00:00+02:00</published>
<updated>2018-06-01T00:00:00+02:00</updated>
<author>
<name>Klaas van Schelven</name>
<uri>http://www.expressionsofchange.org</uri>
<email>klaas@vanschelven.com</email>
</author>
<content type="html"><p>In <em>Expressions of Change</em> we take modifications to programs to be the primary building block of program construction. We do this in the expectation that the availability of well-structured historic information across our toolchain will prove invaluable when facing the typical challenges of program modification.</p>
<p>One of the first such benefits is the ability to inspect history at any level of program granularity as it pertains to that level. That is: if a program consists of e.g. modules, classes, functions, statements and expressions, to be able to inspect the history of each of these, and to see how these histories relate.</p>
<p>Over the past few weeks I’ve been working on a way to visualize this central idea, under the working titles of “animations” or “transitions”. Here you can see it in action:</p>
<p><img src="/images/movies/transitions-in-context.gif" alt="Full demo with &quot;in context&quot; historical view." /></p>
<p>How should we understand the two panels shown above? The panel on the right represents program <em>structure</em>, i.e. what we usually think of as “the program”. The structure under consideration is an <a href="/introducing-the-editor/#structure-s-expressions">s-expression</a>, recursively defined as either a list-expression of further s-expressions between parentheses <code class="language-plaintext highlighter-rouge">(</code> and <code class="language-plaintext highlighter-rouge">)</code>, or an atom.</p>
<p>The panel on the left represents a historic overview of the program’s <em>construction</em>: something we usually think of as “Version Management”, although in <em>Expressions of Change</em> we avoid that term, because we view the history of the program as at least equally as important as the program itself.</p>
<p>In the above, the representation of construction is “in its structural context”. That is: each line shows how a certain modification affects the existing structure: parts of the structure that already existed before the modification are shown in grey, parts that are added by the modification are shown in black.<sup id="fnref:not-diff"><a href="#fn:not-diff" class="footnote" rel="footnote" role="doc-noteref">1</a></sup> (Deletions would be shown in red, but the example above contains no deletions). The single line with inverted colors denotes the cursor in the historic view.</p>
<p>Importantly, the two panels are connected: whenever we select some <nobr>s-expression</nobr> in the structural view on the right, the panel on the left follows suit, and shows the history of that structure.<sup id="fnref:always-follow"><a href="#fn:always-follow" class="footnote" rel="footnote" role="doc-noteref">2</a></sup></p>
<p>In the above, we select the expression <code class="language-plaintext highlighter-rouge">(+ (* 6 9) 12)</code>, its sub-expression <code class="language-plaintext highlighter-rouge">(* 6 9)</code>, and the atom <code class="language-plaintext highlighter-rouge">9</code> respectively, each time showing the relevant histories.</p>
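<p>The recursive definition of s-expressions given above can be sketched directly in code. The following Python fragment is purely illustrative: the nested-list representation and the helper names are assumptions of mine, not the editor’s actual data structures.</p>

```python
# Illustrative sketch only (these are not the editor's actual data
# structures): an s-expression is recursively either an atom, or a
# list-expression containing further s-expressions.

def is_atom(expr):
    """An atom is anything that is not a list of sub-expressions."""
    return not isinstance(expr, list)

def depth(expr):
    """Example of structural recursion over an s-expression."""
    if is_atom(expr):
        return 0
    return 1 + max((depth(child) for child in expr), default=0)

# The expression (+ (* 6 9) 12) from the post, as nested Python lists:
print(depth(["+", ["*", "6", "9"], "12"]))  # 2
```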
<h3 id="animations--transitions">Animations / transitions</h3>
<p>The innovation from the past few weeks is that, whenever we switch between the display of one history and another, we show the relationship between the two using an animation.</p>
<p>The simplest example of such a relationship is the one between the history of the atom <code class="language-plaintext highlighter-rouge">9</code> (a trivial history, consisting only of its creation) and the history of the whole expression <code class="language-plaintext highlighter-rouge">(+ (* 6 9) 12)</code>. When we switch from the former to the latter, we see two additional types of information: first, we see that the creation of the <code class="language-plaintext highlighter-rouge">9</code> is preceded by the construction of the rest of the expression, beginning with the creation of the list <code class="language-plaintext highlighter-rouge">()</code> and ending with the addition of the atom <code class="language-plaintext highlighter-rouge">12</code>. Second, we see where it fits into the greater expression, i.e. to the right of the <code class="language-plaintext highlighter-rouge">6</code> and left of a closing bracket. Here, the transition is shown on its own:</p>
<p><img src="/images/movies/transition-9.gif" alt="Transition of '9'." /></p>
<p>The transition between the histories of <code class="language-plaintext highlighter-rouge">(+ (* 6 9) 12)</code> and the sub-expression <code class="language-plaintext highlighter-rouge">(* 6 9)</code> forms a more exciting example: we can see how each of the 4 elements of the latter relates to the history of the former, and how they fit into a larger context. Again, we show only the single transition here:</p>
<p><img src="/images/movies/transition-star-6-9.gif" alt="Transition of '(* 6 9)'." /></p>
<h3 id="why-bother">Why bother?</h3>
<p>I believe these animations are useful for two separate reasons. First, when explaining the concepts of <em>Expressions of Change</em>, a (moving) picture is worth more than a thousand words. In this case, the point being explained visually is the idea of “history at any level”.</p>
<p>Second, in an editor for actual use, the animations serve as a visual aid: transitioning smoothly from one history to the next really helps you keep your bearings while navigating.</p>
<p>By the way, the animations in the <em>gifs</em> shown above are slowed down to a full 1.5 seconds, both for dramatic effect and to ease understanding in the context of a blog-post. In an actual edit environment we’ll want to strike a balance: too fast makes it impossible to see what’s going on, too slow leads to annoying waits. I personally found 0.5 seconds per animation to be a good trade-off:</p>
<p><img src="/images/movies/normal-speed.gif" alt="Transitions at normal speed." /></p>
<h3 id="explicit-notes">“Explicit” notes</h3>
<p>The example animations shown in this blog-post correspond directly with the main example from the paper <a href="/assets/papers/clef-design.pdf">“Clef design”</a>.</p>
<p>In that paper, we used a more explicit notation for the modifications (called “notes”): those were themselves modelled as s-expressions, rather than displayed by showing how they affect some structure.<sup id="fnref:notational-difference"><a href="#fn:notational-difference" class="footnote" rel="footnote" role="doc-noteref">3</a></sup></p>
<p>If we apply the ideas of this blog-post to the notation from the paper, it looks like this:</p>
<p><img src="/images/movies/transitions-clef-design.gif" alt="Full demo with &quot;Clef Design&quot; historical view." /></p>
<div class="footnotes" role="doc-endnotes">
<ol>
<li id="fn:not-diff">
<p>One might even be tempted to call this a “diff”, but that would suggest a primacy of <em>structure</em> rather than <em>construction</em>, i.e. it somewhat implies there are two structures first, and we simply calculate the difference. <a href="#fnref:not-diff" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
</li>
<li id="fn:always-follow">
<p>In the above the view of construction always follows the structure’s cursor. In practical situations it’s probably useful to be able to toggle this following of the cursor on or off, retaining the ability to see the history of sub-expressions in their larger context as needed. <a href="#fnref:always-follow" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
</li>
<li id="fn:notational-difference">
<p>In fact, the <em>Clef</em> in the demonstrations above differs from the <em>Clef</em> in the paper in one way: <code class="language-plaintext highlighter-rouge">(insert ...</code> and <code class="language-plaintext highlighter-rouge">(delete ...</code> take a single note, rather than a score, as an argument. <a href="#fnref:notational-difference" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
</li>
</ol>
</div>
<p><a href="http://www.expressionsofchange.org/animating-history-transitions/">Animating history transitions</a> was originally published by Klaas van Schelven at <a href="http://www.expressionsofchange.org">Expressions of Change</a> on June 01, 2018.</p></content>
</entry>
<entry>
<title type="html"><![CDATA[Source code published]]></title>
<link rel="alternate" type="text/html" href="http://www.expressionsofchange.org/source-code-published/" />
<id>http://www.expressionsofchange.org/source-code-published</id>
<published>2018-05-05T00:00:00+02:00</published>
<updated>2018-05-05T00:00:00+02:00</updated>
<author>
<name>Klaas van Schelven</name>
<uri>http://www.expressionsofchange.org</uri>
<email>klaas@vanschelven.com</email>
</author>
<content type="html"><p>We can distinguish two projects, “<a href="https://github.com/expressionsofchange/nerf0">nerf0</a>” and “<a href="https://github.com/expressionsofchange/nerf1">nerf1</a>”.<sup id="fnref:nerf-explained"><a href="#fn:nerf-explained" class="footnote" rel="footnote" role="doc-noteref">1</a></sup></p>
<p>In brief, here is which project to refer to for what:</p>
<h3 id="nerf-0">Nerf 0:</h3>
<ul>
<li>Static Analysis (first steps)</li>
<li>mini-interpreter</li>
<li>Alternative Clef, in which a note <code class="language-plaintext highlighter-rouge">Replace</code> may replace a given node with one constructed from an arbitrary history (not just a history that results from extending the given history) – this approach also has consequences for how undo may be modelled.</li>
<li>merging (“weaving”) of 2 histories (first steps only)</li>
</ul>
<h3 id="nerf-1">Nerf 1:</h3>
<ul>
<li>the most up to date Clef.</li>
<li>work on visualising modifications in a more human friendly way</li>
</ul>
<p>Nerf1 was created by “scavenging” code from nerf0; the primary goal was to adhere much more closely to the Clef presented in the paper “Clef Design” at ELS ’18. (In fact, the Clef in this project differs from the presented Clef in one important way: <code class="language-plaintext highlighter-rouge">Insert</code> &amp; <code class="language-plaintext highlighter-rouge">Delete</code> take a single note, rather than a score, as an argument.) This is the program that was used in the demo for the presentation at ELS.</p>
<p>Both projects should be seen as sketches, as bases for experiments, more so than as a finished product that is in any sense ready for production.</p>
<div class="footnotes" role="doc-endnotes">
<ol>
<li id="fn:nerf-explained">
<p>“nerf”, the Dutch word for wood grain, is yet another metaphor for how the mechanism of growth has a direct effect on the grown product. <a href="#fnref:nerf-explained" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
</li>
</ol>
</div>
<p><a href="http://www.expressionsofchange.org/source-code-published/">Source code published</a> was originally published by Klaas van Schelven at <a href="http://www.expressionsofchange.org">Expressions of Change</a> on May 05, 2018.</p></content>
</entry>
<entry>
<title type="html"><![CDATA[Talk at the European Lisp Symposium]]></title>
<link rel="alternate" type="text/html" href="http://www.expressionsofchange.org/clef-design-marbella-talk-live/" />
<id>http://www.expressionsofchange.org/clef-design-marbella-talk-live</id>
<published>2018-04-19T00:00:00+02:00</published>
<updated>2018-04-19T00:00:00+02:00</updated>
<author>
<name>Klaas van Schelven</name>
<uri>http://www.expressionsofchange.org</uri>
<email>klaas@vanschelven.com</email>
</author>
<content type="html"><p>The video of the talk at The European Lisp Symposium is live.</p>
<p class="videoWrapper">
<iframe width="680" height="382" src="https://www.youtube.com/embed/qHVrKQvFODI" frameborder="0" allowfullscreen=""></iframe>
</p>
<p>The associated paper may be downloaded using the link below:</p>
<p><a href="/assets/papers/clef-design.pdf">Clef design: Thoughts on the Formalization of Program Construction</a></p>
<p>The relevant bibtex entry may be found below.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>@INPROCEEDINGS{VanSchelven2018,
author = {Klaas van Schelven},
pages = {94--101},
title = {Clef Design, Thoughts on the Formalization of Program Construction},
booktitle = {Proceedings of the 11th European Lisp Symposium},
address = {Marbella, Spain},
isbn = {978-1-4503-5183-6},
year = {2018},
doi = {10.5281/zenodo.3263960}
}
</code></pre></div></div>
<p><a href="http://www.expressionsofchange.org/clef-design-marbella-talk-live/">Talk at the European Lisp Symposium</a> was originally published by Klaas van Schelven at <a href="http://www.expressionsofchange.org">Expressions of Change</a> on April 19, 2018.</p></content>
</entry>
<entry>
<title type="html"><![CDATA[ELS 2018: Paper accepted]]></title>
<link rel="alternate" type="text/html" href="http://www.expressionsofchange.org/european-lisp-symposium-paper-accepted/" />
<id>http://www.expressionsofchange.org/european-lisp-symposium-paper-accepted</id>
<published>2018-03-20T00:00:00+01:00</published>
<updated>2018-03-20T00:00:00+01:00</updated>
<author>
<name>Klaas van Schelven</name>
<uri>http://www.expressionsofchange.org</uri>
<email>klaas@vanschelven.com</email>
</author>
<content type="html"><p>The European Lisp Symposium takes place on April 16th - April 17th in Marbella, Spain.</p>
<p>The paper may be downloaded using the link below:</p>
<p><a href="/assets/papers/clef-design.pdf">Clef design: Thoughts on the Formalization of Program Construction</a></p>
<p>The relevant bibtex entry may be found below.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>@INPROCEEDINGS{VanSchelven2018,
author = {Klaas van Schelven},
pages = {94--101},
title = {Clef Design, Thoughts on the Formalization of Program Construction},
booktitle = {Proceedings of the 11th European Lisp Symposium},
address = {Marbella, Spain},
isbn = {978-1-4503-5183-6},
year = {2018},
doi = {10.5281/zenodo.3263960}
}
</code></pre></div></div>
<p><a href="http://www.expressionsofchange.org/european-lisp-symposium-paper-accepted/">ELS 2018: Paper accepted</a> was originally published by Klaas van Schelven at <a href="http://www.expressionsofchange.org">Expressions of Change</a> on March 20, 2018.</p></content>
</entry>
<entry>
<title type="html"><![CDATA[Talk at Clojure Meetup]]></title>
<link rel="alternate" type="text/html" href="http://www.expressionsofchange.org/clojure-meetup/" />
<id>http://www.expressionsofchange.org/clojure-meetup</id>
<published>2018-03-14T00:00:00+01:00</published>
<updated>2018-03-14T00:00:00+01:00</updated>
<author>
<name>Klaas van Schelven</name>
<uri>http://www.expressionsofchange.org</uri>
<email>klaas@vanschelven.com</email>
</author>
<content type="html"><p>The audience was very much involved, which led to some interesting discussions.</p>
<p>Unfortunately, no screen recording was made during the demo; the resolution in the video is probably not good enough to follow along. I’ve kept that section in anyway, to preserve the discussion with the audience.</p>
<p>A video of this event is posted below.</p>
<p class="videoWrapper">
<iframe width="680" height="382" src="https://www.youtube.com/embed/pFJ4-fTmCgU" frameborder="0" allowfullscreen=""></iframe>
</p>
<p><a href="http://www.expressionsofchange.org/clojure-meetup/">Talk at Clojure Meetup</a> was originally published by Klaas van Schelven at <a href="http://www.expressionsofchange.org">Expressions of Change</a> on March 14, 2018.</p></content>
</entry>
<entry>
<title type="html"><![CDATA[Don't say “Homoiconic”]]></title>
<link rel="alternate" type="text/html" href="http://www.expressionsofchange.org/dont-say-homoiconic/" />
<id>http://www.expressionsofchange.org/dont-say-homoiconic</id>
<published>2018-03-01T00:00:00+01:00</published>
<updated>2018-03-01T00:00:00+01:00</updated>
<author>
<name>Klaas van Schelven</name>
<uri>http://www.expressionsofchange.org</uri>
<email>klaas@vanschelven.com</email>
</author>
<content type="html"><p>What is homoiconicity then? Typical definitions state that it is simply “code as data”, point to a relationship between a program’s structure and its syntax, or note that the program source is expressed in a primitive data type of the language. Below, we will show that none of these definitions makes much sense.</p>
<p>Before that, however, we’ll return to the original definition, to ensure we have at least some sensible frame of reference for the rest of the article. This is also the definition we’ll return to in the final section, when we put it to the test by examining the canonical example of homoiconicity.</p>
<h2 id="historic-definition">Historic definition</h2>
<p>The term homoiconic was first coined by Calvin Mooers in the article <a href="https://dl.acm.org/citation.cfm?doid=800197.806048">TRAC, A Text-Handling Language</a> (L. Peter Deutsch was the other author of that article, but he has made it clear to me that Mooers was the originator of the term and Deutsch’s contribution was only on the technical side).<sup id="fnref:trac-footnote"><a href="#fn:trac-footnote" class="footnote" rel="footnote" role="doc-noteref">1</a></sup> In the original article, the meaning of homoiconicity seems to be spelled out quite clearly. The article opens by stating that the TRAC language is homoiconic, although without yet using that term explicitly:</p>
<blockquote>
<p>The external and internal forms of the TRAC language are the same.</p>
</blockquote>
<p>Before introducing the term itself, the importance of a single representation for viewing and manipulation by the user and interpretation by the computer is repeated no less than 5 times; numbered here in square brackets for clarity:</p>
<blockquote>
<p>One of the main design goals was [1] that the input script of TRAC (what is typed in by the user) should be identical to the text which guides the internal action of the TRAC processor. In other words, [2] TRAC procedures should be stored in memory as a string of characters exactly as the user typed them at the keyboard. [3] If the TRAC procedures themselves evolve new procedures, these new procedures should also be stated in the same script. [..] [4] At any time, it should be possible to display program or procedural information in the same form as the TRAC processor will act upon it during its execution. [5] It is desirable that the internal character code representation be identical to, or very similar to, the external code representation.</p>
</blockquote>
<p>This paragraph finally concludes with the definition itself – seemingly leaving no room for doubt:</p>
<blockquote>
<p>Because TRAC procedures and text have the same representation inside and outside the processor, the term <em>homo-iconic</em> is applicable, from <em>homo</em> meaning the same, and <em>icon</em> meaning representation.</p>
</blockquote>
<h2 id="alternate-definitions">Alternate definitions</h2>
<p>Unfortunately, this rather straightforward definition is not the only one in active use. There is in fact a proliferation of alternative, and entirely misguided, definitions, some of which are quite persistent. It is worth pointing out explicitly why each of them is misguided: if homoiconicity is to have any meaning at all, it’s certainly a good idea to point out what it doesn’t mean.</p>
<h3 id="code-as-data">“Code as data”</h3>
<p>First, any attempt to explain homoiconicity by using the phrase “Code as data” is quite meaningless. The reason is simply that “Code as data” can be used to indicate any of a large number of quite distinct ideas, some of which are listed below:</p>
<ul>
<li>Functions as a first-class citizen.</li>
<li>The fact that a computer in a Von Neumann architecture stores programs and data in the same memory device.</li>
<li>Reflection and metaprogramming.</li>
<li>Homoiconicity.</li>
</ul>
<p>The punchline is in the last bullet in the list: if homoiconicity is “code as data”, and “code as data” is homoiconicity, we’ve simply created a circular definition and explained nothing at all – but with extra potential for confusion because “code as data” may also refer to other ideas.</p>
<h3 id="program-structure--syntax">Program structure &amp; syntax</h3>
<p>The second definition attempts to draw a connection between the program’s structure and its syntax. For example, there’s this formulation which is currently featuring on <a href="https://en.wikipedia.org/wiki/Homoiconicity">Wikipedia</a>:</p>
<blockquote>
<p>Homoiconicity [..] is a property [..] in which the program structure is similar to its syntax</p>
</blockquote>
<p>This is a category error; that is, a similarity is drawn between ideas that exist at two separate levels of abstraction.</p>
<p>Syntax makes a statement <em>about</em> program structure; that is, the syntax of a language is the set of rules defining which combinations of symbols are considered a correctly structured program.</p>
<p>To say that the rules defining the program’s structure are similar to the program’s structure makes no sense. First, since a single syntax defines a potentially infinite set of correctly structured programs, which program are we talking about as a reference for comparison? Second, what would the measure of similarity across these two levels of abstraction be?</p>
<p>Similar confusion surrounding the terminology of syntax and structure can be found further down in the same article:</p>
<blockquote>
<p>If a language is homoiconic, it means that the language text has the same structure as its abstract syntax tree (AST)</p>
</blockquote>
<p>This is also wrong, but for the opposite reason: the abstract syntax tree is by definition a representation of the structure of the language text. This is the case for any AST, in any language. Thus, to say that this has anything to do with homoiconicity is quite meaningless.</p>
<h3 id="meta-circular-evaluator">Meta-circular evaluator</h3>
<p>Third, the property of homoiconicity is often conflated with the existence of a meta-circular evaluator, but the two concepts are entirely separate. An example of such confusion is the following quote from the Wikipedia article:</p>
<blockquote>
<p>A typical demonstration of homoiconicity is the meta-circular evaluator.</p>
</blockquote>
<p>To understand this claim, let’s first examine what a meta-circular evaluator is. The term was coined originally in 1972 by John C. Reynolds in his paper <a href="http://people.cs.uchicago.edu/~blume/classes/aut2008/proglang/papers/definterp.pdf">Definitional Interpreters for Higher-Order Programming Languages</a>.<sup id="fnref:sicp-meta-circular"><a href="#fn:sicp-meta-circular" class="footnote" rel="footnote" role="doc-noteref">2</a></sup></p>
<blockquote>
<p>We have coined the word “meta-circular” to indicate the basic character of this interpreter: It defines each feature of the defined language by using the corresponding feature of the defining language. For example, when eval is applied to an application expression [..] of the defined language, it evaluates an application expression [..] in the defining language.</p>
</blockquote>
<p>The relationship with homoiconicity, if it exists, is accidental at most.</p>
<p>It is quite possible to construct a meta-circular interpreter in languages which are clearly not homoiconic. Consider for example a Python interpreter, written in Python, which implements each language construct from Python in Python: this satisfies the definition of a meta-circular evaluator. However, Python is not homoiconic, because the internal and external forms of the language are quite different: the external representation of Python programs is the text file, and the internal form, on which the interpreter operates, is bytecode.</p>
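<p>To make Reynolds’ idea concrete, here is a hypothetical miniature of it: an evaluator for a tiny expression language, written in Python, in which each feature of the defined language is implemented with the corresponding Python feature. The tuple encoding and the name meval are assumptions for illustration only; the point is that nothing about it requires homoiconicity.</p>

```python
# A hypothetical miniature of Reynolds' idea: each feature of the defined
# language is implemented with the corresponding feature of the defining
# language (here a tiny subset, in Python). The tuple encoding and the
# name `meval` are assumptions for illustration only. Note that nothing
# here requires homoiconicity.

def meval(expr, env):
    kind = expr[0]
    if kind == "const":
        return expr[1]
    if kind == "var":                  # variable lookup via Python's dict lookup
        return env[expr[1]]
    if kind == "if":                   # conditionals via Python's own conditional
        _, test, then, alt = expr
        return meval(then, env) if meval(test, env) else meval(alt, env)
    if kind == "lambda":               # functions via Python functions
        _, param, body = expr
        return lambda arg: meval(body, {**env, param: arg})
    if kind == "call":                 # application via Python application
        _, fn, arg = expr
        return meval(fn, env)(meval(arg, env))
    raise ValueError("unknown form: " + kind)

# ((lambda (x) x) 42) evaluates to 42:
print(meval(("call", ("lambda", "x", ("var", "x")), ("const", 42)), {}))  # 42
```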
<h3 id="primitive-data-types">Primitive data types</h3>
<p>Another often seen definition of homoiconicity focuses on the relationship between the <em>primitive data types</em> in the language and the representation of the language. Again, we quote from Wikipedia:</p>
<blockquote>
<p>In a homoiconic language, the primary representation of programs is also a data structure in a primitive type of the language itself.</p>
</blockquote>
<p>Again, this is a definition which doesn’t hold up to any scrutiny.</p>
<p>First, consider that most languages have a primitive datatype for strings of text, and the programs in most languages are presented to the user as a string of text. Should we conclude from this that most languages are homoiconic?</p>
<p>Second, for most languages, it is possible to write a parser in that language itself, that stores the resulting abstract syntax tree in a primitive type in the language. For example: we can write a parser in <em>Java</em> that parses Java source code into Java objects, which are primitive data types in the language. Should we conclude from this that Java is homoiconic?</p>
<p>In both cases, the answer is clearly “no” – if all languages are homoiconic, the term loses all meaning.</p>
<h2 id="an-example">An example</h2>
<p>Now that we know what homoiconicity is not, the way is cleared for a discussion of what it is. Let’s return to the original definition, and try to fit it to an example. Remember, the original definition of homoiconicity centers on <em>a similarity between the internal and external representations of a language</em>. In the literature, Lisps are the favorite example of homoiconicity; we will stick with that tradition here, and examine the case of an arbitrary Lisp program in an arbitrary language in the Lisp family.<sup id="fnref:lisp-not-homoiconic"><a href="#fn:lisp-not-homoiconic" class="footnote" rel="footnote" role="doc-noteref">3</a></sup></p>
<p>First, the external representation: the basic syntactical element of Lisps is the s-expression. Thus, a typical program in Lisp is represented to the programmer as nothing more than an s-expression.</p>
<p>Second, the internal representation. The evaluation of a Lisp program is typically defined in terms of those s-expressions directly. One example is the definition of Scheme given by <a href="https://mitpress.mit.edu/sites/default/files/6515.pdf">Abelson &amp; Sussman</a>, another is the definition <a href="/l-a-toy-language/#case-analysis">given on this site</a>. In both cases, the semantics of the language are given in terms of a case-analysis on an s-expression. Thus, the (virtual) machine operates on s-expressions directly, and we can say the internal representation of the program is an s-expression.</p>
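<p>As a toy illustration of what such a case-analysis looks like (this is a minimal arithmetic evaluator of my own, in Python, not the definition from Abelson and Sussman or from this site), note that the dispatch happens on the s-expression itself:</p>

```python
# An illustrative toy (not the definition from the book or from this
# site): an evaluator whose case-analysis is performed on s-expressions
# directly, so the structure the machine operates on *is* the s-expression.

def seval(sexpr):
    if not isinstance(sexpr, list):    # atom: here, always a number
        return int(sexpr)
    op, *args = sexpr                  # list-expression: dispatch on the head
    values = [seval(arg) for arg in args]
    if op == "+":
        return sum(values)
    if op == "*":
        product = 1
        for value in values:
            product *= value
        return product
    raise ValueError("unknown operator: " + op)

print(seval(["+", ["*", "6", "9"], "12"]))  # 66
```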
<p>Because the internal and external representations are the same, we might say that this combination of language and interpreter is homoiconic.</p>
<h3 id="objections">Objections</h3>
<p>The observant reader may raise at least two objections against the above.</p>
<p>The first objection concerns the external representation. In the above we simply stated that the external representation is an s-expression. In most practical programming environments, however, the actual representation of program sources is as text files which contain strings of characters. It is only after parsing this text that the representation is really an s-expression. In other words: in practical environments the external representation is not an s-expression, but text.</p>
<p>The second objection concerns the internal representation. For performance reasons, practical implementations of Lisp interpreters generally do not actually operate directly on s-expressions internally. Even though a Lisp might be defined in terms of a case-analysis on s-expressions, it is not usually implemented as such. Thus, in practice the internal representation is not actually an s-expression.</p>
<h3 id="para-iconic">Para-iconic</h3>
<p>In an attempt to counter these arguments, we might slightly alter our definition, to state that homoiconicity is not a boolean property, but a scalar one: a language in a given environment can be homoiconic to some degree.</p>
<p>In fact, the original definition by Mooers and Deutsch leaves some space for such an interpretation when it states that “the internal [..] representation be identical to, <em>or very similar to</em>, the external [..] representation.” (emphasis mine).</p>
<p>A more precise term for such “partial homoiconicity” could perhaps be para-iconicity or simula-iconicity. However, that would introduce yet another term into an already confused vocabulary. In any case, given this scalar interpretation, we can see that the example above is indeed homoiconic to a very large degree:</p>
<p>Regarding the external representation: parsing of s-expressions is trivial, as is the reverse operation of converting s-expressions to some printable form. The use of explicit brackets for all list-expressions ensures that the structure of s-expressions is always explicitly visible. Thus, to say that the external representation of a Lisp program is an s-expression holds at least some truth.</p>
<p>Regarding the internal representation: even though an actual Lisp interpreter might be optimized, its operational semantics are defined in terms of a virtual machine that operates on s-expressions. Any properly implemented optimization will have the same behavior (barring performance) as its non-optimized counterpart. Thus, from the perspective of the programmer the internal representation is still the s-expression.<sup id="fnref:what-is-internal"><a href="#fn:what-is-internal" class="footnote" rel="footnote" role="doc-noteref">4</a></sup> Thus, to say that the internal representation is an s-expression holds some truth as well.</p>
<p>For comparison with a “less homoiconic” language, we could consider the example of compiled <em>C</em>: the external representation of a <em>C</em> program is as a piece of program text; the internal representation is in the form of machine code. Strings of text and machine code are quite dissimilar, and to map between them takes considerable effort.</p>
<h2 id="conclusions">Conclusions</h2>
<p>Homoiconicity is a term surrounded by much confusion.</p>
<p>Some of this confusion can be cleared up, by putting aside misguided definitions and returning to the original definition: languages which have the same external and internal representations are homoiconic.</p>
<p>Unfortunately, that doesn’t resolve the issue entirely, because even when sticking to a seemingly sensible definition, we can see a lot of problems with the term: it is almost never the case that the external representation and the internal representation are exactly the same.</p>
<p>We are forced to withdraw to a position of “para-iconicity”: the idea that homoiconicity is a scalar property rather than a boolean one. In that view, a language which has at least some similarity between <em>some</em> external representation and <em>some model</em> for the internal representation could be said to be homoiconic.</p>
<p>However, that’s not a very good position to be in. I’d say it’s much easier to just talk about certain properties of your favorite language directly. For example, “the syntax is easy to parse”, or “the semantics of the language are completely defined on a single page of a book”. That is, there is much to be said in favor of Lisp’s s-expressions without resorting to such a poorly defined term.</p>
<p>Maybe it’s time to stop saying “homoiconic”.</p>
<h3 id="afterword-l-peter-deutsch-respondsh3">Afterword: L. Peter Deutsch responds</h3>
<p>In June 2020, I reached out to L. Peter Deutsch with the request to read and comment on the above. He wrote the following:</p>
<blockquote>
<p>I did not coin this word [homoiconic] – it was 100% the creation of my friend the late
Calvin Mooers, who was the original conceiver of TRAC. My contribution in
working with Calvin was to take his concept, turn it into a rigorous
language definition, and implement it.</p>
<p>Having read through your blog article, I would say that I essentially agree
with your argument. In my view, the root “icon” refers specifically to the
<em>concrete appearance</em> of something. For example, “WYSIWYG” in the context
of text editing is arguably the same idea as “homoiconic” in the context of
programming languages, contrasted to languages like TeX where the document
has two very different appearances (the TeX input file and the output
document). WYSIWYG editors are homo-iconic not simply in the <em>characters</em>
of the document, but also in their <em>formatting</em> (font, spacing, …); but I
think the same concept applies.</p>
<p>There are other homoiconic macro languages: while TRAC may be the earliest
on record (I don’t remember the date or specifications of Strachey’s GPM),
m4, which I use actively as a preprocessor for other languages, also fits
the concept.</p>
</blockquote>
<div class="footnotes" role="doc-endnotes">
<ol>
<li id="fn:trac-footnote">
<p>A footnote in that article points out that this was “following suggestion of McCullough W.S., based upon terminology due to Peirce, C.S.” It appears that the “Peirce” in question is Charles Sanders Peirce, who wrote extensively on <em>semiotics</em>, which is the study of signs, although it is not clear whether and where he used the phrase homo-iconic explicitly. W.S. McCulloch is likely to be Warren Sturgis McCulloch, who was in Cambridge at the time. <a href="#fnref:trac-footnote" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
</li>
<li id="fn:sicp-meta-circular">
<p>The definition in the influential <a href="https://mitpress.mit.edu/sites/default/files/6515.pdf">Structure and Interpretation of Computer Programs</a> differs somewhat: “An evaluator that is written in the same language that it evaluates is said to be metacircular.” Although the constraint that each feature of the defined language is implemented using the corresponding feature of the defining language is not repeated in that definition, the metacircular evaluator as defined in the book satisfies it in practice. <a href="#fnref:sicp-meta-circular" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
</li>
<li id="fn:lisp-not-homoiconic">
<p>In fact, the original article <em>excludes</em> Lisp as an example of homoiconicity, but only because Lisp had not settled on a single representation in terms of s-expressions at the time of writing: “Finally, LISP is troubled with a dual language problem: an M-language, which is easy to read, and is used externally, and an S-language, with which the LISP processor operates, and which is usable externally only by the hardened initiates. It should be noted here that were the S-language the only LISP language, LISP would be close to being homo-iconic (excluding the machine-language functions).” <a href="#fnref:lisp-not-homoiconic" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
</li>
<li id="fn:what-is-internal">
<p>Taking this view to its ultimate consequence, however, raises even further questions about the concept of homoiconicity: for a well-encapsulated machine, we cannot observe its inner workings <em>by definition</em>; in that view, making any statement about the machine’s internal representation is meaningless. More generally, the original definition rests on the idea that there is a single external and a single internal representation of the program, which does not match reality. In fact, there is a whole chain of representations, including electrons in the brain of the programmer, photons emitted from the screen, program text, machine code, and electrons moving in the CPU. <a href="#fnref:what-is-internal" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
</li>
</ol>
</div>
<p><a href="http://www.expressionsofchange.org/dont-say-homoiconic/">Don't say “Homoiconic”</a> was originally published by Klaas van Schelven at <a href="http://www.expressionsofchange.org">Expressions of Change</a> on March 01, 2018.</p></content>
</entry>
<entry>
<title type="html"><![CDATA[Lambda Days 2018]]></title>
<link rel="alternate" type="text/html" href="http://www.expressionsofchange.org/lambda-days-2018-lightning-talk/" />
<id>http://www.expressionsofchange.org/lambda-days-2018-lightning-talk</id>
<published>2018-02-26T00:00:00+01:00</published>
<updated>2018-02-26T00:00:00+01:00</updated>
<author>
<name>Klaas van Schelven</name>
<uri>http://www.expressionsofchange.org</uri>
<email>klaas@vanschelven.com</email>
</author>
<content type="html"><p><a href="http://www.lambdadays.org/lambdadays2018">Lambda Days</a> is a “one of a kind experience in the functional world.”</p>
<p>It was held on 22 &amp; 23 February 2018 in Kraków, Poland.</p>
<p class="videoWrapper">
<iframe width="680" height="382" src="https://www.youtube.com/embed/KA2TjX595tA?cc_load_policy=1&amp;cc_lang_pref=en" frameborder="0" allowfullscreen=""></iframe>
</p>
<p><a href="http://www.expressionsofchange.org/lambda-days-2018-lightning-talk/">Lambda Days 2018</a> was originally published by Klaas van Schelven at <a href="http://www.expressionsofchange.org">Expressions of Change</a> on February 26, 2018.</p></content>
</entry>
<entry>
<title type="html"><![CDATA[Expressions of Change in under 15 minutes]]></title>
<link rel="alternate" type="text/html" href="http://www.expressionsofchange.org/expressions-of-change-in-under-15-minutes/" />
<id>http://www.expressionsofchange.org/expressions-of-change-in-under-15-minutes</id>
<published>2018-02-20T00:00:00+01:00</published>
<updated>2018-02-20T00:00:00+01:00</updated>
<author>
<name>Klaas van Schelven</name>
<uri>http://www.expressionsofchange.org</uri>
<email>klaas@vanschelven.com</email>
</author>
<content type="html"><p>This video started out as a practice run for a 3 minute lightning talk.</p>
<p>I didn’t make it in 3 minutes.</p>
<p>On the upside, there’s about 5 times more information in this talk than in the lightning talk it was a practice run for.</p>
<p class="videoWrapper">
<iframe width="680" height="382" src="https://www.youtube.com/embed/oJAZ7gBFf0U?cc_load_policy=1&amp;cc_lang_pref=en" frameborder="0" allowfullscreen=""></iframe>
</p>
<p><a href="http://www.expressionsofchange.org/expressions-of-change-in-under-15-minutes/">Expressions of Change in under 15 minutes</a> was originally published by Klaas van Schelven at <a href="http://www.expressionsofchange.org">Expressions of Change</a> on February 20, 2018.</p></content>
</entry>
<entry>
<title type="html"><![CDATA[Constructing S-Expressions]]></title>
<link rel="alternate" type="text/html" href="http://www.expressionsofchange.org/constructing-s-expressions/" />
<id>http://www.expressionsofchange.org/constructing-s-expressions</id>
<published>2017-12-11T00:00:00+01:00</published>
<updated>2017-12-11T00:00:00+01:00</updated>
<author>
<name>Klaas van Schelven</name>
<uri>http://www.expressionsofchange.org</uri>
<email>klaas@vanschelven.com</email>
</author>
<content type="html"><p>Putting the methods of construction more central does not mean we can ignore the programs that are constructed by them. For example: when a program is edited, the programmer is still presented with an actual program on the screen; when a program is evaluated, it is a single particular program that is evaluated, rather than a history of programs.</p>
<p>In short: an essential task for any set of tools that takes changes to programs as its point of departure is to actually construct programs from those changes. Compared to toolchains that simply store already-constructed programs, this is quite obviously an additional task. If only for that reason, it is worthwhile to consider its implementation and performance characteristics.</p>
<p>Below, we present an algorithm that efficiently constructs an s-expression from a previous s-expression and a single modification. Before we turn our attention to that algorithm, we shall be a bit more precise about the properties of the data structure to be created.</p>
<h3 id="immutability-of-input">Immutability of input</h3>
<p>The presented algorithm is a pure function from an s-expression and a mutation to a new s-expression. In other words: the s-expression that serves as our input is not modified in place. Any of its parts that are required in the output are copied as necessary. The usual advantages of functional programming thus apply, i.e. the function is easier to test and reason about, thread-safe by default, and trivial to memoize.</p>
<p>In the present project there is one more advantage to this approach: it provides a trivial mechanism for constructing a history of structures, one for each modification. Given the core assumptions of the present project, such a structure is extremely useful. To create it, we simply keep a reference to each produced structure in a list. An algorithm that is formulated in terms of in-place modification of a given input does not allow for this, for the obvious reason that the referenced “historic” objects are not guaranteed to remain unchanged.</p>
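<p>As a minimal illustration of this history mechanism (a sketch only, not the project’s actual code, and with hypothetical names), s-expressions can be represented as nested Python tuples, which are immutable:</p>

```python
# Sketch: s-expressions as nested tuples, which are immutable in Python.
v0 = ("define", ("square", "x"), ("*", "x", "x"))

def replace_child(node, index, new_child):
    # Pure function: builds and returns a new node; `node` is never modified.
    return node[:index] + (new_child,) + node[index + 1:]

v1 = replace_child(v0, 2, ("*", "x", "x", "x"))

history = [v0, v1]  # old versions remain valid, one entry per modification
assert history[0] == ("define", ("square", "x"), ("*", "x", "x"))
assert v1[1] is v0[1]  # unmodified subtrees are shared, not copied
```

<p>Because each version is immutable, keeping references to all of them in a list is safe: no later modification can invalidate an earlier entry.</p>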
<h3 id="data-structure">Data structure</h3>
<p>The list-expressions to be constructed shall be represented in-memory as an array of references to child nodes. This allows for lookup of a child by index in constant time.</p>
<p>A number of alternatives to this design are easily rejected, as detailed below.</p>
<p>One alternative to an array is a linked list. However, representing the list of children as a linked list has no advantages here, only disadvantages. First, because of the choice for an immutable data structure, the potential performance advantages of linked lists for updates do not apply. Second, lookup by index, the main method of access, is not constant-time for linked lists.</p>
<p>Another alternative is to store the child nodes inline, in some serialized format, rather than storing references to them. Such an approach may be useful for serialization to disk or over the wire, but it is not sufficiently flexible in the face of modifications: each change to a child tree requires a full reserialization of the whole tree, at a cost on the order of the size of the full tree.</p>
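<p>The chosen design can be sketched in Python as follows (class and method names are my own, hypothetical choices; Python tuples stand in for the immutable array of references):</p>

```python
class Atom:
    """A leaf node holding a single value."""
    def __init__(self, value):
        self.value = value

class ListExpr:
    """A list-expression: an immutable array of references to child nodes."""
    def __init__(self, children):
        self.children = tuple(children)

    def child(self, i):
        # Lookup by index, the main access pattern, is O(1).
        return self.children[i]

# (+ 1 (* 2 3))
expr = ListExpr([Atom("+"), Atom("1"),
                 ListExpr([Atom("*"), Atom("2"), Atom("3")])])
assert expr.child(0).value == "+"
assert expr.child(2).child(0).value == "*"
```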
<h2 id="the-algorithm">The algorithm</h2>
<p>Before turning our attention to constructing s-expressions in terms of modifications to previous s-expressions, let us consider the properties of constructing s-expressions <em>per se</em>.</p>
<p>Given an s-expression, construction of an s-expression in the data format outlined above can be formulated recursively: first, construct all child expressions with recursive calls; then, combine the results by constructing an array of references to them.</p>
<p>This recursive definition can be expressed as a <a href="/catamorphisms-and-change/">catamorphism</a>: the construction of children is independent of their parents. This catamorphism is in fact the most trivial catamorphism that exists: as a whole it is simply the identity function; its algebra is formed by what is basically a trivial data constructor.</p>
<p>In <a href="/catamorphisms-and-change/">an earlier article</a> we observed that, in the context of controlled modification (i.e. a well-chosen <em>Clef</em>), efficient mechanisms for recalculating modified versions are automatically available for any recursive function that can be described as a catamorphism.</p>
<p>Given that construction of s-expressions can be expressed as a catamorphism, construction of s-expressions in terms of modifications to previous s-expressions can be implemented using such automatically available efficient mechanisms with well-understood performance characteristics.</p>
<p><a href="/catamorphisms-and-change/#catamorphisms--controlled-modification">The general mechanism</a>, briefly summarized, is the following: for each modification the calculation proceeds in two steps. First, we calculate the outcome of the catamorphism for the small number of modified children; second, we apply the algebra, i.e. we combine the previously calculated outcomes with the newly calculated results.</p>
<p>In this case, as noted, the algebra is simply the construction of an array of references to the children.</p>
<p>The performance characteristics can be deduced from the performance characteristics of the algebra. In this case it is easy to see that the algebra is linear in the branching factor of the nodes: constructing an array of references to the children requires one action per child.</p>
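<p>Concretely, applying a single modification amounts to “path copying”: only the nodes on the path from the root to the modified child are rebuilt, and at each such node the algebra costs one action per child. A hedged sketch of this mechanism, again using nested tuples rather than the project’s actual representation:</p>

```python
def replace_at(node, path, new_subtree):
    # Rebuild only the nodes on `path`; all other subtrees are shared
    # with the previous version.
    if not path:
        return new_subtree
    i, rest = path[0], path[1:]
    children = list(node)
    children[i] = replace_at(node[i], rest, new_subtree)
    return tuple(children)  # applying the algebra: one action per child

old = (("a", "b"), ("c", ("d", "e")))
new = replace_at(old, (1, 1, 0), "D")
assert new == (("a", "b"), ("c", ("D", "e")))
assert new[0] is old[0]  # subtrees off the path are shared with the old tree
```

<p>The total cost of one modification is thus on the order of the depth of the modified node times the branching factor of the nodes along the path, independent of the size of the rest of the tree.</p>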