MS Word comments from org-mode

| categories: docx, orgmode | tags:

TL;DR:

Today I learned you can make a Word document from org-mode with Word comments in it. This could be useful when working with collaborators. The gist is that you write the comment as html, export the org file to markdown or html, and then let pandoc convert that to docx. A comment in HTML looks like this:

<span class="comment-start" author="jkitchin">Comment text</span>The text being commented on <span class="comment-end"></span> 

Let's wrap that in a link for convenience. I use a full display so it is easy to see the comment. The comment is only exported for the markdown and html backends; for every other backend we just use the path. We somewhat abuse the link syntax here by using the path for the text being commented on, and the description for the comment.

(org-link-set-parameters
 "comment"
 :export (lambda (path desc backend)
           (if (member backend '(md html))
               (format "<span class=\"comment-start\" author=\"%s\">%s</span>%s<span class=\"comment-end\"></span>"
                       (user-full-name)
                       desc
                       path)
             ;; ignore for other backends and just use path
             path))
 :display 'full
 :face '(:foreground "orange"))                  

Now, we use it like this: the exported html puts the comment, This is the comment, right before the commented text, This is the text commented on.

In org-mode the link itself looks like:

[[comment:This is the text commented on][This is the comment]]

To get the Word doc, we need some code that first exports to Markdown, and then calls pandoc to convert that to docx. Here is my solution to that. Usually you would put this in a subsection tagged with :noexport:, but I show it here so you can see it. Running this block generates the docx file and opens it. Here I also leverage org-ref to get some citations and cross-references.

(require 'org-ref-refproc)
(let* ((org-export-before-parsing-hook '(org-ref-cite-natmove ;; do this first
                                        org-ref-csl-preprocess-buffer
                                        org-ref-refproc))
       (md (org-md-export-to-markdown))
       (docx (concat (file-name-sans-extension md) ".docx")))
  (shell-command (format "pandoc -s %s -o %s" md docx))
  (org-open-file docx '(16)))

The result looks like this in MS Word:

How a comment looks in Word.

That is pretty remarkable. There are some limitations in the Markdown route, e.g. I find the tables don't look good, not all equations are converted, and some cross-references are off. Next we add some more org features and try the export with HTML.

1. Export features for test

Test cross-references, references, equations, etc…

Aliquam erat volutpat (Fig. fig-2). Nunc eleifend leo vitae magna. In id erat non orci commodo lobortis. Proin neque massa, cursus ut, gravida ut, lobortis eget, lacus. Sed diam. Praesent fermentum tempor tellus. Nullam tempus &yang-2022-evaluat-degree. Mauris ac felis vel velit tristique imperdiet. Donec at pede. Etiam vel neque nec dui dignissim bibendum. Vivamus id enim. Phasellus neque orci, porta a, aliquet quis in Table tab-1, semper a, massa. Phasellus purus (eq-1). Pellentesque tristique imperdiet tortor. Nam euismod tellus id erat &kolluru-2022-open-chall.

Table 1: A table.

| x | y |
|---+---|
| 1 | 3 |
| 3 | 6 |

We have equations:

\begin{equation} \label{org9973acf} y = mx + b \end{equation}
  • bullet1
    • nested bullet
  • bullet2

Some definitions:

emacs
    greatest editor
  1. item 1
  2. item 2

One equation: \(e^{i\pi} - 1 = 0\)

A second equation:

\begin{equation} e^{i\pi} - 1 = 0 \end{equation}

3. Alternate build with HTML.

Here we consider alternate build approaches, for example, html.

Run this to get the docx file. I find this superior; it has references, cross-references, equations, tables, figures, etc. Even a title.

(let* ((org-export-before-parsing-hook '(org-ref-csl-preprocess-buffer
                                         org-ref-refproc))
       (org-html-with-latex 'dvipng)
       (f (org-html-export-to-html))
       (docx (concat (file-name-sans-extension f) ".docx")))
  (shell-command (format "pandoc -s %s -o %s" f docx))
  (org-open-file docx '(16)))

Copyright (C) 2023 by John Kitchin. See the License for information about copying.

org-mode source

Org-mode version = 9.5.5


Using results from one code block in another org-mode

| categories: orgmode, emacs, elisp | tags:

One really great feature of org-mode is that you have many options for passing data between code blocks. In this post we look at some of these options using emacs-lisp as the language. This runs in a session, where you can keep variables in memory between blocks and use them in subsequent blocks.

Here we set a variable to a value.

(setq some-variable 42)
42

Then later in another block we can use that variable:

(+ some-variable 1)
43

While you are in the session, some-variable can be used. If you want some mind-bending trouble, the emacs-lisp session is global, and you can access some-variable even in another buffer! Don't do that. When you close emacs this variable will disappear, and all that is left are the results from above.

There is another way to pass information from one block to another using named src blocks and variables in the block header. This allows you to pass data between blocks by name, and you will see later you can even access the results by name from other files.

1 :var

First, we give our src block a name like this:

#+name: block-1
#+BEGIN_SRC emacs-lisp
(current-time-string)
#+END_SRC

When we run this, the results will have a name too.

(current-time-string)
Tue Feb 12 08:19:23 2019

Now, we can use the named result as input to a new block using the :var header.

#+BEGIN_SRC emacs-lisp :var input=block-1
(format "We got %S in block-1" input)
#+END_SRC

When we run this block, emacs will run block-1 and put the output into the variable input, which we use inside the code block.

(format "We got %S in block-1" input)
We got "Tue Feb 12 08:20:44 2019" in block-1

Some things to note:

  1. Every time you run this, block-1 gets rerun.
  2. The results in this block are not the same as in block-1.
  3. The results in block-1 are not changed when you run the second block.

You may not want to rerun block-1 each time; maybe it is an expensive calculation, or maybe it should not be changed. You can prevent this behavior by using the :cache header.

2 :cache

If you specify :cache yes then org-mode should store a hash of the code block with the results, and if the code block hasn't changed then it should not run again.

#+name: block-2
#+BEGIN_SRC emacs-lisp :cache yes
(current-time-string)
#+END_SRC
(current-time-string)
Tue Feb 12 08:06:22 2019

Now, when we use block-2 as input to a block, we see the output is the same as the output from block-2.

(format "We got %S in block-2" input)
We got "Tue Feb 12 08:06:22 2019" in block-2

Ok, but what if my results are too large to put in the buffer, or too complex for text? You still have some options.

3 :wrap

Suppose we generate some json in one block, and we want to use it in another block. We still want to see the json in the buffer as an intermediate result. We can wrap the output in a json block like this.

(require 'json)
(json-encode `(("date" . ,(current-time-string))))

{"date":"Tue Feb 12 08:30:20 2019"}

Then, we can simply input that output into a new block.

(format "We got %S in json" input)
We got "{\"date\":\"Tue Feb 12 08:30:20 2019\"}
" in json

This is admittedly still pretty simple, text-based data. It is probably not a good idea to do this with binary data.

Note you can refer to this result even in another org-file:

#+BEGIN_SRC emacs-lisp :var input=./2019-02-12.org:json
input
#+END_SRC

#+RESULTS:
: {"date":"Tue Feb 12 08:30:20 2019"}

4 :file

It may be that your data is too large to conveniently put into your org-file, or maybe it is binary data. No problem, just put it into an external file using the :file header. It looks like this:

#+name: block-3
#+BEGIN_SRC emacs-lisp :cache yes :file block-3
(require 'json)
(json-encode `(("date" . ,(current-time-string))))
#+END_SRC

#+RESULTS[a14d376653bd8c40a0961ca95f21d8837dddec66]: block-3
[[file:block-3]]

Note that you have to provide a file name for this. Sometimes that is nice if you want a human-recognizable file to send to someone, but it would also be nice if there was an automatic naming scheme, e.g. based on a sha-1 hash of the src block.
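
Such a naming scheme is easy to sketch with built-in functions. This is only an illustration of the idea, not something org-babel does for you, and the function name is hypothetical:

(defun block-hash-file-name (body &optional ext)
  "Return a file name derived from the sha1 hash of BODY.
BODY would be the text of the src block; EXT is an optional extension."
  (concat (sha1 body) (or ext ".dat")))

;; e.g. (block-hash-file-name "(current-time-string)") => "<40-character sha1>.dat"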

(require 'json)
(json-encode `(("date" . ,(current-time-string))))

block-3

Now you can use other tools to check out the file. Here we can still use simple shell tools.

cat block-3

The output of block-3 is a file name:

input
/Users/jkitchin/Box Sync/kitchingroup/jkitchin/journal/2019/02/12/block-3

So you can use it in a new block to read the data in, and then do something new with it.

(with-temp-buffer
  (insert-file-contents input)
  (format "We got %S in block-3" (json-read-from-string (buffer-string))))
We got ((date . "Tue Feb 12 08:46:55 2019")) in block-3

5 "remote" data

The blocks do not have to be in order. If you want, you can put your blocks in an appendix, and then just have analysis blocks here that use them. That way, you can have short blocks here that are more readable, but longer, more complex blocks elsewhere that do not clutter your document.

(with-temp-buffer
  (insert-file-contents input)
  (format "We got %S in the appendix data" (json-read-from-string (buffer-string))))
We got "{\"date\":\"Tue Feb 12 09:11:12 2019\"}" in the appendix data

6 Manually saving data in files

Note you can also manually save data in a file, for example:

(require 'json)
(let ((f "block-4.json"))
  (with-temp-file f
    (prin1
     (json-encode `(("date" . ,(current-time-string))))
     (current-buffer)))
  f)
block-4.json

We return the filename as the value of the block, so that we don't have to manually type it later in the next block. You know, try not to repeat yourself…

This just shows we did write out to our file:

cat block-4.json

And we read the file in here, using the filename from block-4 as an input variable.

(with-temp-buffer
  (insert-file-contents input)
  (format "We got %S in block-4" (json-read-from-string (buffer-string))))
We got "{\"date\":\"Tue Feb 12 08:51:25 2019\"}" in block-4

7 An appendix for data

(require 'json)
(let ((f "appendix.json"))
  (with-temp-file f
    (prin1
     (json-encode `(("date" . ,(current-time-string))))
     (current-buffer)))
  f)
appendix.json

8 Caveats

Using org-mode like this is almost always about finding the right tradeoffs in what is persistent, and where it is stored. Not all of the intermediate data/calculations are stored; if they are really cheap, you can just run the code blocks again. If they are really small, i.e. easy for you to read in a few lines, you can store them in the document. If they are really large, you can store them in a file.

The beauty of having everything in an org-file is you have a single file that is easy to transport. When the files get too large though, it can become impractical, e.g. emacs may slow down if you try to put thousands of lines of xml data into the buffer. Then, you have to make some decisions about what to keep, where to keep it, and in what form to keep it.

For short projects where you only need a single compute session, having everything in memory may be fine. For longer projects, say one that is long enough you will close all the buffers, and possibly restart emacs in between working on it, then you have to make some decisions about what to save from each block so you can continue the work in the next session. Again, you have to decide what to save, where to save, and in what form.

Once you start saving data outside the org-file, it becomes less portable, or at least more tricky to move, because you also need to move all the data files to keep it intact. I have explored a concept of making an org-archive in the past, where you get a list of all files linked in the org-file, but so far this has only been worked out for some small proof-of-concept ideas.
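
Collecting the linked files is the easy part of that idea. A minimal sketch using org-element (not the actual org-archive code) might look like this:

(defun org-linked-file-paths ()
  "Return the paths of all file links in the current org buffer.
A sketch of the first step of an org-archive."
  (org-element-map (org-element-parse-buffer) 'link
    (lambda (link)
      (when (string= (org-element-property :type link) "file")
        (org-element-property :path link)))))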

Not all languages are the same in org-mode. They do not all support sessions for example, and they may not all work like the examples here. The scimax iPython modifications do not behave like the examples above. That is probably due to bugs I have inadvertently introduced, and in the future I will try to make it work like emacs-lisp does above.

Overall, org-mode has one of the most flexible and powerful systems for passing and reusing data in documents I have ever seen. It is not perfect, and in such a powerful system there are many unexplored or lightly traveled corners that may have hazards in them. It still seems pretty promising though.

Copyright (C) 2019 by John Kitchin. See the License for information about copying.

org-mode source

Org-mode version = 9.2.1


Getting geo-tagged information from photos for blogging

| categories: orgmode, geotag, emacs | tags:

I am kind of late to this game, but recently I turned on location services for the camera on my phone. That means the location of the photo is stored in the photo, and we can use that to create urls to the photo location in a map for example. While traveling, I thought this would be a good application for org-mode to add functionality to documents with photos in them, e.g. to be able to click on them to see where they are from, or to automate creation of html pages with links to maps, etc. In this post I explore some ways to achieve those ideas. What I would like is a custom org link that shows me a thumbnail of the image, and which exports to show the image in an html file with a link to a pin on Google maps.

So, let's dig in. Imagemagick provides an identify command that can extract the information stored in the images. Here we consider just the GPS information. I took some pictures on a recent vacation, and one is unimaginatively named IMG_1759.JPG. Let's see where it was taken.

identify -verbose IMG_1759.JPG | grep GPS
exif:GPSAltitude: 14426/387    
exif:GPSAltitudeRef: 0    
exif:GPSDateStamp: 2018:06:30    
exif:GPSDestBearing: 11767/80    
exif:GPSDestBearingRef: T    
exif:GPSImgDirection: 11767/80    
exif:GPSImgDirectionRef: T    
exif:GPSInfo: 1632    
exif:GPSLatitude: 22/1, 11/1, 614/100
exif:GPSLatitudeRef: N    
exif:GPSLongitude: 159/1, 40/1, 4512/100
exif:GPSLongitudeRef: W    
exif:GPSSpeed: 401/100    
exif:GPSSpeedRef: K    
exif:GPSTimeStamp: 3/1, 44/1, 3900/100

The interpretation here is that I took that photo at latitude 22° 11' 6.14" N, and longitude 159° 40' 45.12" W. Evidently I was moving at 4.01 in some unit (the GPSSpeedRef of K suggests km/h); I can confirm that I was at least moving: I was on a ship when I took that picture, and it was moving.
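
As a sanity check, those degree/minute/second values convert to decimal degrees (which Google maps also accepts) with simple arithmetic:

;; latitude 22 deg 11' 6.14" N, longitude 159 deg 40' 45.12" W
(+ 22 (/ 11.0 60) (/ 6.14 3600))        ; ~ 22.1850
(- (+ 159 (/ 40.0 60) (/ 45.12 3600)))  ; ~ -159.6792, west is negative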

According to this you can make a url to a Google maps pin in satellite picture mode that looks like this: http://maps.google.com/maps?q=22 11 6.14N,159 40 45.12W&t=k. It doesn't seem possible to set the zoom in this url (at least setting the zoom doesn't do anything, and I didn't feel like trying all the other variations that are reported to sometimes work). I guess that is ok for now, it adds some suspense that you have to zoom out to see where the image is in some cases.

We need a little function to take an image file and generate that link. We have to do some algebra on the latitude and longitude, which are stored as integers with a division operator. I am going to pipe this through an old unix utility called bc, mostly because it is simple and I won't have to parse it much. bc is a little archaic: you have to set the scale first, which tells it how many decimal places to output. The degrees and minutes are integers, so we will have to deal with that later.

echo "scale=2; 614/100" | bc
6.14

Here is our function. I filter out the lines with GPS in them into an a-list. Then, I grab out the specific quantities I want and construct the url. There is a little hackery since it appears the degrees and minutes should be integers in the url formulation used here, so I convert them to numbers and then take the floor. The function is a little longer than I thought, but it isn't too bad I guess. It is a little repetitious, but not enough to justify refactoring.

(defun iphoto-map-url (fname)
  (let* ((gps-lines (-keep (lambda (line)
                             (when (s-contains? "GPS" line) (s-trim line)))
                           (process-lines "identify" "-verbose" fname)))
         (gps-alist (mapcar (lambda (s) (s-split ": " s t))  gps-lines))
         (latitude (mapcar
                    (lambda (s)
                      (s-trim (shell-command-to-string
                               (format "echo \"scale=2;%s\" | bc" s))))
                    (s-split "," (cadr (assoc "exif:GPSLatitude" gps-alist)))))
         (latitude-ref (cadr (assoc "exif:GPSLatitudeRef" gps-alist)))
         (longitude (mapcar
                     (lambda (s)
                       (s-trim
                        (shell-command-to-string
                         (format "echo \"scale=2;%s\" | bc" s))))
                     (s-split "," (cadr (assoc "exif:GPSLongitude" gps-alist)))))
         (longitude-ref (cadr (assoc "exif:GPSLongitudeRef" gps-alist))))
    (s-format "http://maps.google.com/maps?q=$0 $1 $2$3,$4 $5 $6$7&t=k"
              'elt
              (list
               (floor (string-to-number (nth 0 latitude)))
               (floor (string-to-number (nth 1 latitude)))
               (nth 2 latitude)
               latitude-ref
               (floor (string-to-number (nth 0 longitude)))
               (floor (string-to-number (nth 1 longitude)))
               (nth 2 longitude)
               longitude-ref))))
iphoto-map-url

Here is the function in action, making the url.

(iphoto-map-url "IMG_1759.JPG")
http://maps.google.com/maps?q=22 11 6.14N,159 40 45.12W&t=k

It is kind of slow, but that is because the identify shell command is kind of slow when you run it with the -verbose flag. Now, I would like the following things to happen when I publish it to html:

  1. I want the image wrapped in an img tag inside a figure environment.
  2. I want the image to be hyperlinked to its location in Google maps.

In the org file, I want a thumbnail overlay on it, so I can see the image while writing, and I want it to toggle like other images. I use an iPhone to take the photos, so we will call it an iphoto link.

Here is the html export function I will use. It is a little hacky that I hard-code the width at 300 pixels, but I didn't feel like figuring out how to get it from an #+attr_html line right now. It probably requires a filter function where you have access to the actual org-elements. I put the url to the image location in a figure caption here.

(defun iphoto-export (path desc backend)
  (cond
   ((eq 'html backend)
    (format "<figure>
<img src=\"%s\" width=\"300\">
%s
</figure>"
            path
            (format "<figcaption>%s <a href=\"%s\">map</a></figcaption>"
                    (or desc "")
                    (iphoto-map-url path))))))
iphoto-export

Ok, the last detail I want is to put an image overlay on my new link so I can see it. I want this to work with org-toggle-inline-images so I can turn the images on and off like regular image links with C-c C-x C-v. This function creates overlays as needed, and ties into the org-inline-image-overlays so they get deleted on toggling. We have to advise the display function to redraw these, which we clumsily do by restarting the org font-lock machinery which will redraw the thumbnails from the activate-func property of the links. I also hard code the thumbnail width in this function, when it could be moved out to a variable or attribute.

(defun iphoto-thumbnails (start end imgfile bracketp)
  (unless bracketp
    (when (and
           ;; it is an image
           (org-string-match-p (image-file-name-regexp) imgfile)
           ;; and it exists
           (f-exists? imgfile)
           ;; and there is no overlay here.
           (not (ov-at start)))
      (setq img (create-image (expand-file-name imgfile)
                              'imagemagick nil :width 300
                              :background "lightgray"))
      (setq ov (make-overlay start end))
      (overlay-put ov 'display img)
      (overlay-put ov 'face 'default)
      (overlay-put ov 'org-image-overlay t)
      (overlay-put ov 'modification-hooks
                   (list
                    `(lambda (&rest args)
                       (org-display-inline-remove-overlay ,ov t ,start ,end))))
      (push ov org-inline-image-overlays))))

(defun iphoto-redraw-thumbnails (&rest args)
  (org-restart-font-lock))

;; this redisplays these thumbnails on image toggling
(advice-add 'org-display-inline-images :after 'iphoto-redraw-thumbnails)

Next, we define the link with a follow, export, tooltip and activate-func (which puts the overlay on).

(org-link-set-parameters
 "iphoto"
 :follow (lambda (path) (browse-url (iphoto-map-url path)))
 :export 'iphoto-export
 :help-echo "Click me to see where this photo is on a map."
 :activate-func 'iphoto-thumbnails)

So finally, here is the mysterious image.

(Image with a "map" link in the caption.)

Now, in org-mode, I see the image in an overlay, and I can toggle it on and off. If I click on the image, it opens a browser to Google maps with a pin at the spot where I took it. When I export it, it wraps the image in a <figure> tag, and puts a url to the map in the caption. If you click on it and zoom out, you will see this is a picture of the Nāpali Coast on Kauai in Hawaii, and I was in fact out at sea when I took the picture. It was spectacular. Here is another one. This one is a little more obvious with the zoom. Here, I was on land. Since this link is bracketed, however, it does not show the overlay in the org-file.

Another vacation picture, this time with a caption. (The caption also carries a "map" link.)

Overall, this was easier than I expected. It might be faster to outsource reading the exif data to some dedicated library, perhaps in Python, that would return everything you want in an easy-to-parse json data structure. The speed of computing the url is only annoying when you export or click on the links though.

I didn't build in any error handling, e.g. if you do this on a photo with no GPS data it will probably not handle it gracefully. I also haven't tested this on any other images, e.g. south of the equator, from other cameras, etc. I assume this exif data is pretty standard, but it is a wild world out there… It would still be nice to find a way to get a string representing the nearest known location somehow; that would help make the caption more useful.

There is one little footnote to speak of, and that is that I had to do a little hackery to get this to work with my blog machinery. You can see what it is in the org-source; I buried it in a noexport subheading because it isn't that interesting in the grand scheme of things. It was just necessary because I export these org-files to blogofile, which then builds the html pages, instead of just exporting them. The images have to be copied to a source directory, and the paths changed in the html to point to them. See, boring stuff. Otherwise, the code above should be fine for regular org and html files! Now, my vacation is over, so it is time to get back to work.

Copyright (C) 2018 by John Kitchin. See the License for information about copying.

org-mode source

Org-mode version = 9.1.13


Literate programming with python doctests

| categories: orgmode, noweb, python | tags:

On the org-mode mailing list we had a nice discussion about using noweb and org-mode in literate programming. The results of that discussion were blogged about here. I thought of a different application of this for making doctests in Python functions. I have to confess I have never liked these because I have always thought they were a pain to write since you basically have to put code and results into a docstring. The ideas developed in the discussion above led me to think of a new way to write these that seems totally reasonable.

The idea is just to put noweb placeholders in the function docstring for the doctests. The placeholders will be expanded when you tangle the file, and they will get their contents from other src-blocks where you have written and run examples to test them.

This video might make the rest of this post easier to follow:

I will illustrate the idea using org-mode and the ob-ipython setup I have in scimax. The defaults of my ob-ipython setup are not useful for this example because they put the execution count and mime types in the output. These are not seen in a REPL, so we turn them off by setting these variables.

(setq ob-ipython-suppress-execution-count t
      ob-ipython-show-mime-types nil)

Now, we make an example function that takes a single argument and returns one divided by that argument. This block is runnable, and the function is then defined in the jupyter kernel. The docstring contains several noweb references to doctest blocks we define later. For now, they don't do anything. See The noweb doctest block section for the block that is used to expand these. This block also has a tangle header which indicates the file to tangle the results to. When I run this block, it is sent to a Jupyter kernel and saved in memory for use in subsequent blocks.

Here is the block with no noweb expansion. Note that this is easier to read in the original org source than it is to read in the published blog format.

def func(a):
    """A function to divide one by a.

    <<doctest("doctest-1")>>

    <<doctest("doctest-2")>>

    <<doctest("doctest-3")>>

    Returns: 1 / a.
    """
    return 1 / a
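
The src block headers for that function are not visible in the rendered post; based on the description above they are presumably something like this, where the ... stands for the function body shown above (the language name ipython and the exact headers are assumptions; test.py is the tangle target used later):

#+BEGIN_SRC ipython :noweb yes :tangle test.py
...
#+END_SRC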

Now, we can write a series of named blocks that define various tests we might want to use as doctests. You can run these blocks here, and verify they are correct. Later, when we tangle the document, these will be incorporated into the tangled file in the docstring we defined above.

func(5) == 0.2
True

This next test will raise an Exception, and we just run it to make sure it does.

func(0)

ZeroDivisionErrorTraceback (most recent call last)
<ipython-input-6-ba0cd5a88f0a> in <module>()
----> 1 func(0)

<ipython-input-1-eafd354a3163> in func(a)
     18     Returns: 1 / a.
     19     """
---> 20     return 1 / a

ZeroDivisionError: division by zero

This is just a doctest with indentation to show how it is used.

for i in range(1, 4):
    print(func(i))
1.0
0.5
0.3333333333333333


That concludes the examples I want incorporated into the doctests. Each one of these blocks has a name, which is used as an argument to the noweb references in the function docstring.

1 Add a way to run the tests

This is a common idiom to enable easy running of the doctests. This will get tangled out to the file.

if __name__ == "__main__":
    import doctest
    doctest.testmod()

2 Tangle the file

So far, the Python code we have written only exists in the org-file, and in memory. Tangling is the extraction of the code into a code file.

We run this command, which extracts the code blocks marked for tangling, and expands the noweb references in them.

(org-babel-tangle)
test.py

Here is what we get:

def func(a):
    """A function to divide one by a.

    >>> func(5) == 0.2
    True

    >>> func(0)
    Traceback (most recent call last):
    ZeroDivisionError: division by zero

    >>> for i in range(1, 4):
    ...     print(func(i))
    1.0
    0.5
    0.3333333333333333


    Returns: 1 / a.
    """
    return 1 / a

if __name__ == "__main__":
    import doctest
    doctest.testmod()

That looks like a reasonable python file. You can see the doctest blocks have been inserted into the docstring, as desired. The proof of course is that we can run these doctests, and use the python module. We show that next.

3 Run the tests

Now, we can check if the tests pass in a fresh run (i.e. not using the version stored in the jupyter kernel.) The standard way to run the doctests is like this:

python test.py -v

Well, that's it! It worked fine. Now we have a python file we can import and reuse, with some doctests that show how it works. For example, here it is in a small Python script.

from test import func
print(func(3))
0.3333333333333333

There are surely some caveats to keep in mind here. This was just a simple proof of concept idea that isn't tested beyond this example. I don't know how many complexities would arise from more complex doctests. But, it seems like a good idea to continue pursuing if you like using doctests, and like using org-mode and interactive/literate programming techniques.

It is definitely an interesting way to use noweb to build up better code files in my opinion.

4 The noweb doctest block

These blocks are used in the noweb expansions. Each block takes a variable which is the name of a block. This block grabs the body of the named src block and formats it as if it was in a REPL.

We also grab the results of the named block and format it for the doctest. We use a heuristic to detect Tracebacks and modify the output to be consistent with it. In that case we assume the relevant Traceback is on the last line.

Admittedly, this does some fragile-feeling things, like trimming whitespace here and there to remove blank lines, quoting quotes (which was not actually used in this example), and removing the ": " prefixes from ob-ipython results. Other ways of running the src-blocks would probably not be that suitable for this.

(org-babel-goto-named-src-block name)
(let* ((src (s-trim-right (org-element-property :value (org-element-context))))
       (src-lines (split-string src "\n"))
       body result)
  (setq body
        (s-trim-right
         (s-concat ">>> " (car src-lines) "\n"
                   (s-join "\n" (mapcar (lambda (s)
                                          (concat "... " s))
                                        (cdr src-lines))))))
  ;; now the results
  (org-babel-goto-named-result name)
  (let ((result (org-element-context)))
    (setq result
          (thread-last
              (buffer-substring (org-element-property :contents-begin result)
                                (org-element-property :contents-end result))
            (s-trim)
            ;; remove ": " from beginning of lines
            (replace-regexp-in-string "^: *" "")
            ;; quote quotes
            (replace-regexp-in-string "\\\"" "\\\\\"")))
    (when (string-match "Traceback" result)
      (setq result (format
                    "Traceback (most recent call last):\n%s"
                    (car (last (split-string result "\n"))))))
    (concat body "\n" result)))

Copyright (C) 2018 by John Kitchin. See the License for information about copying.

org-mode source

Org-mode version = 9.1.13


Making it easier to extend the export of org-mode links with generic functions

| categories: orgmode, emacs | tags:

I am a big fan of org-mode links. Lately, I have had a need to modify how some links are exported, e.g. defining new exports for different backends, or fine-tuning a particular backend. This can be difficult, depending on how the link was set up. Here is a typical setup I am used to using, where the different options for the backends are handled in a conditional statement in a single function. I will use a link that just serves to illustrate the issues here. These links are just syntactic sugar for markup; they don't do anything else. We start with an example that just converts text to italic text for different backends like html or latex.

(defun italic-link-export (path desc backend)
  (cond
   ((eq 'html backend)
    (format "<em>%s</em>" path))
   ((eq 'latex backend)
    (format "\\textit{%s}" path))
   ;; fall-through case for everything else
   (t
    path)))

(org-link-set-parameters "italic" :export 'italic-link-export)
:export italic-link-export
(org-export-string-as "italic:text" 'html t)
<p>
<em>text</em></p>

(org-export-string-as "italic:text" 'latex t)
\textit{text}

This falls through to the default case, though.

(require 'ox-md)
(org-export-string-as "italic:text" 'md t)

# Table of Contents



text


The point I want to make here is that this is not easy to extend as a user. You have to either modify the italic-link-export function, advise it, or monkey-patch it. None of these are especially nice.

I could define italic-link-export so that it retrieves the function to use from an alist or hash-table keyed on the backend, but then you have to do two things to modify the behavior: define a backend-specific function and register it in the lookup variable. It is also possible to just look up a function by a derived symbol, e.g. using fboundp, and then use funcall to execute it. That looks something like this:

;; a user definable function for exporting to latex
(defun italic-link-export-latex (path desc backend)
  (format "\\textit{%s}" path))

;; generic export function that looks up functions or defaults to
(defun italic-link-exporter (path desc backend)
  "Run `italic-link-export-BACKEND' if it exists, or return path."
  (let ((func (intern-soft (format "italic-link-export-%s" backend))))
    (if (fboundp func)
        (funcall func path desc backend)
      path)))

This has some indirection, but allows you to just define new functions to add new export backends, or replace single backend exports. It isn't bad, but there is room for improvement.
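
For completeness, adding html support in this scheme is just one more function following the naming pattern (a sketch, not in the original post):

;; found and called automatically by italic-link-exporter via its name
(defun italic-link-export-html (path desc backend)
  (format "<em>%s</em>" path))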

In this comment in org-ref, I saw a new opportunity to address this issue using generic functions in elisp! The idea is to define a generic function that handles the general export case, and then define additional functions for each specific backend based on the signature of the export function. I will switch to bold markup for this.

(cl-defgeneric bold-link-export (path desc backend)
 "Generic function to export a bold link."
 path)

;; this one runs when the backend is equal to html
(cl-defmethod bold-link-export ((path t) (desc t) (backend (eql html)))
 (format "<b>%s</b>" path))

;; this one runs when the backend is equal to latex
(cl-defmethod bold-link-export ((path t) (desc t) (backend (eql latex)))
 (format "\\textit{%s}" path))

(org-link-set-parameters "bold" :export 'bold-link-export)
:export bold-link-export

Here it is in action:

(org-export-string-as "some bold:text" 'html t)
<p>
some <b>text</b></p>

(org-export-string-as "some bold:text" 'latex t)

This uses the generic function.

(require 'ox-md)
(org-export-string-as "some bold:text" 'md t)

# Table of Contents



some text


The syntax for defining the generic function is pretty similar to that of a regular function. The specific methods are a little different, since they have to provide the specific "signature" that triggers each method. Here we only differentiate on the backend argument. It is nice that these are all separate functions though. It makes it trivial to add new ones, and less intrusive to replace one, in my opinion.
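
To see how little is needed, here is a sketch (not in the original post) of adding a markdown backend as just one more method:

;; this one runs when the backend is equal to md
(cl-defmethod bold-link-export ((path t) (desc t) (backend (eql md)))
  (format "**%s**" path))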

Generic functions have many other potential applications to replace functions that use lots of conditions to control flow, with a fall-through option at the end. You can learn more about them here: https://www.gnu.org/software/emacs/manual/html_node/elisp/Generic-Functions.html. There is a lot more to them than I have illustrated here.

Copyright (C) 2018 by John Kitchin. See the License for information about copying.

org-mode source

Org-mode version = 9.1.13
