A simple emacs-lisp interface to CRUD operations in mongodb

| categories: emacslisp, database, mongodb, emacs | tags:

In a previous post I showed that MongoDB is pretty easy to use for simple database applications, and I showed a way to get data out of the database in a form native to Emacs. To use Mongo comfortably in Emacs applications, though, it would be really helpful to be able to create, read, update and delete (CRUD) entries. There is a minimal interface to mongodb for emacs-lisp here: https://github.com/m2ym/mongo-el. From what I can see, it is limited to simple, single queries, and it is written with advanced features of emacs-lisp that I do not understand well enough to extend. In the last post, I showed an easy way to use mongoexport to get the results of a query out of a mongo database. Here I explore a similar approach to round out the CRUD operations, which will let Emacs use MongoDB in applications easily.

We use the mongo cli with the --eval option, which allows you to run commands on the database. The basic idea is to generate the json we need from a lisp data structure, and use that json in mongo commands as needed. This sounds simple, but below you will see there are plenty of corners to take care of.

The goal here is to get something that is pretty functional. It will not be able to support all the capabilities of MongoDB and the options available in the cli.

1 Inserting entries

Here we insert a document into the contacts collection of the contacts database. As in the Python example we considered earlier, this database is automatically created when we run this command.

(require 'json)
json

(let* ((json (json-encode '((first-name . "John")
                            (last-name . "Kitchin")
                            (email . "jkitchin@cmu.edu"))))
       (cmd (format "mongo 127.0.0.1/contacts --quiet --eval 'db.contacts.insert(%s)'"
                    json)))
  (shell-command-to-string cmd))

Here is a function we can use for inserting, and as you can see it works for multiple inserts too. There is a limit on how long the json string can be, so you cannot add too many entries at once; I do not know what the limit is, but I suspect it is related to using a shell command. When the insert succeeds, data is returned about what happened, which we try to get into lisp form. I also noticed I had to do a little bit of escaping, especially for entries containing a single quote, which messes up the quoting of the shell command, and for non-ascii characters, which the shell did not handle well. Maybe this could be avoided with a file-based approach, or by using a pipe to a process.

(defun mongo-insert (db collection document)
  "Insert into DB.COLLECTION the DOCUMENT.
DOCUMENT will be some lisp structure that is converted to json."
  ;; we have to escape any single quotes. This came from
  ;; http://stackoverflow.com/questions/1250079/how-to-escape-single-quotes-within-single-quoted-strings
  (let* ((json (replace-regexp-in-string "'" "'\"'\"'" (json-encode document)))
         ;; it seems utf-8 characters may cause issues. Let's just remove them.
         (json (replace-regexp-in-string "[^[:ascii:]]" "" json))
         (cmd (format "mongo %s --quiet --eval 'db.%s.insert(%s)'"
                      db collection
                      json))
         (output (shell-command-to-string cmd)))
    (cond
     ;; strip the "BulkWriteResult(" ... ")" wrapper to get the json
     ((string-match "BulkWriteResult(" output)
      (json-read-from-string (substring output 16 -2)))
     ;; strip the "WriteResult(" ... ")" wrapper
     ((string-match "WriteResult(" output)
      (json-read-from-string (substring output 12 -2)))
     (t
      output))))
mongo-insert

Here it is in action.

(mongo-insert "contacts" "contacts"
              '(((first-name . "John")
                 (last-name . "Kitchin")
                 (email . "jkitchin@cmu.edu"))
                ((first-name . "Someone")
                 (last-name . "Else")
                 ("email" . "someone@out.there"))))
((writeErrors . []) (writeConcernErrors . []) (nInserted . 2) (nUpserted . 0) (nMatched . 0) (nModified . 0) (nRemoved . 0) (upserted . []))

Seems like an ok way to get data from Emacs into a Mongo DB, and we get lisp data returned telling us what happened.

2 Finding a document

To update documents we need to find them. We would like to find a document by its _id, but we have a small dilemma. The json we need for that has to look like {"_id": ObjectId("587babfaef131d0d4603b3ad")}, where the ObjectId is not quoted. The json library cannot produce that, so we have to manipulate the json string after it is generated, with a regular expression replacement. It feels hacky, and hopefully there are not many more examples of that; if there are, we will need another approach to generating the json data. Below is the modified find function, with a projection option as well. Here is another place we have to tread somewhat lightly with the _id: in the output, we have to requote it so that it can be read by emacs. It might make sense to just replace it with the quoted _id string rather than the ObjectId call. Time will tell.

Here we create two helper functions to unquote input, and requote output. We also need some code to make an array of all the results, and put commas between all the results so that we end up with valid json in the output.

(defun mongo-unquote-query (query)
  "Json-encode QUERY and unquote any ObjectId calls.
We have no lisp syntax for the ObjectId call that mongo wants, so
a query has to look like this:

'((_id . \"ObjectId(\\\"587babfaef131d0d4603b3ad\\\")\"))

Mongo cannot have quotes around the ObjectId call itself, so this
function removes them after encoding."
  (replace-regexp-in-string "\"\\(ObjectId(\\\\\"\\(.*?\\)\\\\\")\\)\""
                            "ObjectId(\"\\2\")"
                            (json-encode query)))

(defun mongo-requote-output (output)
  "Add quotes around ObjectId calls in OUTPUT.
When mongo prints json, it contains unquoted ObjectIds that emacs
cannot read as json."
  (replace-regexp-in-string
   "ObjectId(\"\\(.*?\\)\")"
   "\"ObjectId(\\\\\"\\1\\\\\")\""
   output))
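
Before using these, here is a quick sanity check of the round trip (assuming the two helpers above are loaded; the ids are just examples from this post):

(mongo-unquote-query '((_id . "ObjectId(\"587babfaef131d0d4603b3ad\")")))
;; => {"_id":ObjectId("587babfaef131d0d4603b3ad")}

(mongo-requote-output "{ \"_id\" : ObjectId(\"587babfaef131d0d4603b3ad\") }")
;; => { "_id" : "ObjectId(\"587babfaef131d0d4603b3ad\")" }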

(defun mongo-find (db collection query &optional projection)
  "Find documents in DB.COLLECTION matching QUERY.
QUERY and the optional PROJECTION are lisp structures that get
converted to json. Returns a vector of alists, one per document."
  (let* ((query-json (mongo-unquote-query query))
         (projection-json
          (and projection (json-encode projection)))
         (output (mongo-requote-output
                  ;; add [] to make an array of output in json,
                  ;; and separate results by a comma
                  (concat "["
                          (replace-regexp-in-string
                           "\n" ""
                           (shell-command-to-string
                            (format "mongo %s --quiet --eval 'db.%s.find(%s).forEach(function(myDoc) { printjsononeline(myDoc); print( \",\"); })'"
                                    db collection
                                    (if projection
                                        (format "%s, %s" query-json projection-json)
                                      query-json))))
                          "]")))) 
    (json-read-from-string output)))
mongo-find

So, finally we can run something like this:

(mongo-find "contacts" "contacts" '((email . "someone@out.there")))
[((_id . "ObjectId(\"587c166cdfcd649d3acf99fd\")") (first-name . "Someone") (last-name . "Else") (email . "someone@out.there")) ((_id . "ObjectId(\"587c16ad410565dd4c16c748\")") (first-name . "Someone") (last-name . "Else") (email . "someone@out.there")) ((_id . "ObjectId(\"587c17550e586b4f8df21de0\")") (first-name . "Someone") (last-name . "Else") (email . "someone@out.there")) ((_id . "ObjectId(\"587c1764d75279a55ffec483\")") (first-name . "Someone") (last-name . "Else") (email . "someone@out.there")) ((_id . "ObjectId(\"587c17743281f1e9d5054396\")") (first-name . "Someone") (last-name . "Else") (email . "someone@out.there")) ((_id . "ObjectId(\"587c178ad92706d2bd5a6e3c\")") (first-name . "Someone") (last-name . "Else") (email . "someone@out.there")) ((_id . "ObjectId(\"587c1794756bb2bd0f0ac499\")") (first-name . "Someone") (last-name . "Else") (email . "someone@out.there"))]

Here is an example usage with a projection that returns only the information you want, in this case, just the id.

(mongo-find "contacts" "contacts" '((email . "someone@out.there"))
            '((_id . 1)))
[((_id . "ObjectId(\"587c166cdfcd649d3acf99fd\")")) ((_id . "ObjectId(\"587c16ad410565dd4c16c748\")")) ((_id . "ObjectId(\"587c17550e586b4f8df21de0\")")) ((_id . "ObjectId(\"587c1764d75279a55ffec483\")")) ((_id . "ObjectId(\"587c17743281f1e9d5054396\")")) ((_id . "ObjectId(\"587c178ad92706d2bd5a6e3c\")")) ((_id . "ObjectId(\"587c1794756bb2bd0f0ac499\")"))]

3 Updating an entry

Ok, back to the update. To make sure that we update exactly the document we want, we will use the document _id. First, we define an update command.

(defun mongo-update (db collection query $set)
  "In DB.COLLECTION update records matching QUERY with the contents of $SET."
  (let* ((query-json (mongo-unquote-query query))
         ($set-json (mongo-unquote-query $set))
         (cmd (format "mongo %s --quiet --eval 'db.%s.update(%s, %s)'"
                      db collection
                      query-json $set-json))
         (output (shell-command-to-string cmd)))
    (if (string-match "WriteResult(" output)
        (json-read-from-string
         (substring output 12 -2))
      output)))
mongo-update

First a reminder of what is in this record.

(mongo-find "contacts" "contacts" '((_id . "ObjectId(\"587c16ad410565dd4c16c748\")")))
[((_id . "ObjectId(\"587c16ad410565dd4c16c748\")") (first-name . "Someone") (last-name . "Else") (email . "someone@out.there"))]

Here we set the email field to a new address. Without $set, the whole document gets replaced.

(mongo-update "contacts" "contacts"
              '((_id . "ObjectId(\"587c16ad410565dd4c16c748\")"))
              '(($set . ((email . "someone@out.there.com")))))
((nMatched . 1) (nUpserted . 0) (nModified . 1))

Finally, let's see the document again to verify it is modified.

(mongo-find "contacts" "contacts" '((_id . "ObjectId(\"587c16ad410565dd4c16c748\")")))
[((_id . "ObjectId(\"587c16ad410565dd4c16c748\")") (first-name . "Someone") (last-name . "Else") (email . "someone@out.there.com"))]

Looks good; you can see it got changed. There is a potential gotcha, though. This next command looks like it should do the same thing, but it does not: the whole document gets replaced!

(mongo-update "contacts" "contacts"
              '((_id . "ObjectId(\"587c16ad410565dd4c16c748\")"))
              '((email . "someone@out.there.com")))
((nMatched . 1) (nUpserted . 0) (nModified . 1))
(mongo-find "contacts" "contacts" '((_id . "ObjectId(\"587c16ad410565dd4c16c748\")")))
[((_id . "ObjectId(\"587c16ad410565dd4c16c748\")") (email . "someone@out.there.com"))]

Do not forget the $set operator if you just want to update some fields!
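
If that is easy to forget, a thin wrapper over mongo-update avoids the gotcha entirely. This is just a sketch built on the function above (mongo-set-fields is a made-up name):

(defun mongo-set-fields (db collection query fields)
  "In DB.COLLECTION, update FIELDS (an alist) on documents matching QUERY.
Always wraps FIELDS in $set so the rest of the document is preserved."
  (mongo-update db collection query `(($set . ,fields))))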

4 Deleting a document

Next, let's write a delete function. I will only implement deleteMany here, since you can give it a document id to delete a single document, and usually I would want to delete all documents that match some criteria anyway.

(defun mongo-deleteMany (db collection filter)
  "Delete records in DB.COLLECTION matched by FILTER.
TODO: add write concern."
  (let* ((filter-json (mongo-unquote-query filter))
         (cmd (format "mongo %s --quiet --eval 'db.%s.deleteMany(%s)'"
                      db collection
                      filter-json))
         (output (shell-command-to-string cmd)))
    (json-read-from-string output)))
mongo-deleteMany

Since we borked that last document, let's just delete it.

(mongo-deleteMany "contacts" "contacts" '((_id . "ObjectId(\"587be3fa6009a569a277b680\")")))
((acknowledged . t) (deletedCount . 0))

5 Generic commands

We may want some flexibility to run collection commands. The most generic approach is simply to write the whole shell command yourself, but we can keep a little syntax by encapsulating most of the boilerplate. Here is a function for that.

(defun mongo-cmd (db collection cmd &rest args)
  "In DB.COLLECTION run CMD. 
ARGS if present will be used to format CMD."
  (shell-command-to-string
   (format "mongo %s --quiet --eval 'db.%s.%s'"
           db collection
           (apply #'format cmd args))))
mongo-cmd

We can get the number of documents with this:

(mongo-cmd "contacts" "contacts" "count()")
4341

Or run a more sophisticated command with arguments like this.

(mongo-cmd "contacts" "contacts" "explain().remove(%s)" (json-encode '(("category" . "enemy"))))
{
	"queryPlanner" : {
		"plannerVersion" : 1,
		"namespace" : "contacts.contacts",
		"indexFilterSet" : false,
		"parsedQuery" : {
			"category" : {
				"$eq" : "enemy"
			}
		},
		"winningPlan" : {
			"stage" : "DELETE",
			"inputStage" : {
				"stage" : "COLLSCAN",
				"filter" : {
					"category" : {
						"$eq" : "enemy"
					}
				},
				"direction" : "forward"
			}
		},
		"rejectedPlans" : [ ]
	},
	"serverInfo" : {
		"host" : "Johns-MacBook-Air.local",
		"port" : 27017,
		"version" : "3.4.1",
		"gitVersion" : "5e103c4f5583e2566a45d740225dc250baacfbd7"
	},
	"ok" : 1
}

Or, drop the collection with:

(mongo-cmd "contacts" "contacts" "drop()")
true

All gone! Note that we do not try to handle the output of any of those commands; the results are returned as strings.

6 A MongoDB contacts database

Now, let's re-populate it for real. I store my contacts in a variable called "contacts", a list where each element is a descriptive string followed by cons cells of data. These are harvested from a set of org-files. It is way too slow to parse those files every time, so I keep the contacts cached in memory and only update them when a file changes.
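
For reference, an element of that list looks roughly like this (a made-up example; the field names mirror the RANK and EMAIL fields used below):

("John Kitchin <jkitchin@andrew.cmu.edu>"  ; the descriptive string
 ("NAME" . "John Kitchin")
 ("EMAIL" . "jkitchin@andrew.cmu.edu")
 ("RANK" . "Professor"))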

(length contacts)
6047

There are over 6000 contacts. Let's put them in a MongoDB.

Here is a limitation of our approach. This will not work because the generated shell command ends up being too long for the shell.

(mongo-insert "contacts" "contacts"
              (loop for contact in contacts
                    collect
                    (append `((desc . ,(car contact))) (cdr contact))))
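
One way around that limit might be mongoimport, which reads documents from a file instead of the command line. Here is an untested sketch along those lines (mongo-import is a name I am making up; mongoimport ships with MongoDB):

(defun mongo-import (db collection documents)
  "Insert DOCUMENTS (a list of alists) into DB.COLLECTION with mongoimport.
Writes one json document per line to a temporary file and imports that,
avoiding the shell command length limit."
  (let ((tmpfile (make-temp-file "mongo-import" nil ".json")))
    (with-temp-file tmpfile
      (dolist (doc documents)
        (insert (json-encode doc) "\n")))
    (shell-command-to-string
     (format "mongoimport -d %s -c %s --file %s" db collection tmpfile))))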

For this post, though, we just do them one at a time:

(let ((ct (current-time)))
  (loop for contact in contacts
        do
        (let ((output (mongo-insert "contacts" "contacts"
                                    (append `((desc . ,(car contact))) (cdr contact)))))
          (unless (= 1 (cdr (assoc 'nInserted output)))
            (warn "error: %S for %S" (cdr (assoc 'nInserted output)) contact))))
  (message "Elapsed time %.02f seconds" (float-time (time-since ct))))
Elapsed time 762.95 seconds

That took almost 13 minutes to add them all, which seems long to me. This next step confirms that they were added.

(mongo-cmd "contacts" "contacts" "count()")
6047

Next we will compare some timing of finding data in the database vs looping through the cached contacts. Here is a timing macro to measure how long it takes to run a bit of code.

;; http://stackoverflow.com/questions/23622296/emacs-timing-execution-of-function-calls-in-emacs-lisp
(defmacro measure-time (&rest body)
  "Measure the time it takes to evaluate BODY."
  `(let ((time (current-time)))
     ,@body
     (message "%.06f seconds elapsed" (float-time (time-since time)))))
measure-time

Here is the old way I would extract data. Many contacts I have are academics, and I have stored their academic ranks in each contact.

(loop for contact in contacts
      if (string= "Professor" (cdr (assoc "RANK" (cdr contact))))
      collect contact into professors
      if (string= "Associate Professor" (cdr (assoc "RANK" (cdr contact))))
      collect contact into associate-professors
      if (string= "Assistant Professor" (cdr (assoc "RANK" (cdr contact))))
      collect contact into assistant-professors
      finally return `(("Assistant Professor" ,(length assistant-professors))
                       ("Associate Professor" ,(length associate-professors))
                       ("Professor" ,(length professors))))
Assistant Professor 313
Associate Professor 283
Professor 879

How long did it take to do that?

(measure-time
 (loop for contact in contacts
       if (string= "Professor" (cdr (assoc "RANK" (cdr contact))))
       collect contact into professors
       if (string= "Associate Professor" (cdr (assoc "RANK" (cdr contact))))
       collect contact into associate-professors
       if (string= "Assistant Professor" (cdr (assoc "RANK" (cdr contact))))
       collect contact into assistant-professors
       finally return (list (length assistant-professors)
                            (length associate-professors)
                            (length professors))))
0.008772 seconds elapsed

Not long at all! Comparatively, it is very slow to get this information out of the mongodb, although considerably less code is required. That might not be surprising, considering the json parsing that has to get done here.

Here is the equivalent code to extract that data from the database.

(loop for rank in '("Assistant Professor" "Associate Professor" "Professor")
       collect (list rank (length (mongo-find "contacts" "contacts"
                                              `((RANK . ,rank))))))
Assistant Professor 313
Associate Professor 283
Professor 879

It is comparatively slow to do this. It requires three json parses, and profiling indicates that a lot of the work is done in parsing the json.

(measure-time
 (loop for rank in '("Assistant Professor" "Associate Professor" "Professor")
       collect (list rank (length (mongo-find "contacts" "contacts"
                                              `((RANK . ,rank)))))))
1.914817 seconds elapsed

Here is a smarter way to do it that avoids the json parsing.

(loop for rank in '("Assistant Professor" "Associate Professor" "Professor")
      collect (list rank (mongo-cmd "contacts" "contacts" "count(%s)"
                                    (json-encode `((RANK . ,rank))))))
Assistant Professor 313
Associate Professor 283
Professor 879

And you can see here it is about five times faster, but still not nearly as fast as running the lisp code on the cache.

(measure-time
 (loop for rank in '("Assistant Professor" "Associate Professor" "Professor")
       collect (list rank (mongo-cmd "contacts" "contacts" "count(%s)"
                                     (json-encode `((RANK . ,rank)))))))
0.349413 seconds elapsed

This is how you might integrate this into a completion command:

(ivy-read "choose: "
          (loop for c across (mongo-find "contacts" "contacts" "")
                collect
                (list (cdr (assoc 'desc c)) c)))

This is basically unusable though, because it takes so long to generate the candidates (over six seconds).

(measure-time
 (loop for c across (mongo-find "contacts" "contacts" "")
       collect
       (list (cdr (assoc 'desc c)) c)))
6.228225 seconds elapsed

We can get back to usable by making the database do more work for us. Here, we simply make the database print a list of cons cells that we can read into lisp, using a one-line javascript function with some escaping and quoting. The escaping is necessary because there is some bad data in the email field that breaks the cons cells, e.g. entries like "name" <email> with nested single and double quotes. The quoting gets us cons cells of the form ("desc" . "email"), and finally we wrap the whole output in parentheses and read it back as a list of cons cells. At about a quarter of a second, this is very usable for generating a list of over 6000 candidates, though it is still many times slower than working on the contacts list in memory. I am not a super fan of the one-line javascript, and if it were much more complicated than this, another strategy would probably be desirable.

(measure-time
 (read
  (concat
   "("
   (shell-command-to-string "mongo contacts --quiet --eval 'db.contacts.find().forEach(function (doc) {print(\"(\\\"\" + doc.desc + \"\\\" . \\\"\" + escape(doc.EMAIL) +\"\\\")\");})'")
   ")")))
0.284730 seconds elapsed
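
The escape() call percent-encodes the troublesome characters, so on the lisp side we can decode them with url-unhex-string. A small sketch, where candidates stands for the list we just read (a name made up for illustration):

(require 'url-util)
;; decode the %-escapes produced by the javascript escape() call
(mapcar (lambda (cell)
          (cons (car cell) (url-unhex-string (cdr cell))))
        candidates)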

7 Text searching

Finally, let's make a text index to make searching easy. This gives us a very flexible search where we do not have to specify a field or use regular expressions. We set up the index on all the fields, so we can find entries that match even on fields that do not exist in every document.

(mongo-cmd "contacts" "contacts" "createIndex(%s)" (json-encode '(("$**" . "text"))))
{
	"createdCollectionAutomatically" : false,
	"numIndexesBefore" : 1,
	"numIndexesAfter" : 2,
	"ok" : 1
}

Now, let's use that to find the GOOGLE-SCHOLAR url of contacts matching the following query.

(mongo-find "contacts" "contacts" '(($text . (($search . "\"Carnegie Mellon\"")))
                                    ($text . (($search . "\"John Kitchin\""))))
            '((GOOGLE-SCHOLAR . 1) (_id . 0)))
[((GOOGLE-SCHOLAR . "https://scholar.google.com/citations?hl=en&user=jD_4h7sAAAAJ"))
 nil nil]

So, you can see there were three contacts, and one of them lists my google-scholar url.

8 Summary

This looks like the foundation of a mongo/emacs-lisp interface. It is not that fast though, and it suffers from some limitations related to the use of the shell. Depending on the actual use, it is clear you can gain performance by pushing some work onto the database, which requires some javascript coding. Even that revealed some subtlety, e.g. making sure the database output text that is compatible with the lisp reader. That mostly means taking care of quotes and other special characters, which I managed with a simple escape mechanism. It is probably worth investing a few more days in building an interface that uses a process to communicate with the mongo cli directly (a rough sketch of that follows the list below) before moving forward with any significant application that uses Mongo in Emacs. There are many good ideas for such an application:

  1. Index all your org files (e.g. http://kitchingroup.cheme.cmu.edu/blog/2017/01/03/Find-stuff-in-org-mode-anywhere/)
  2. Index all your bibtex files (e.g. http://kitchingroup.cheme.cmu.edu/blog/2017/01/15/Querying-a-MongoDB-bibtex-database-with-Python-and-emacs-lisp/)
  3. Download RSS feeds into a searchable database
  4. Manage your contacts
  5. Index your email? mu and notmuch use xapian for this, but I have found they cannot search for things like hashtags. Maybe MongoDB would be better?
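
As a starting point for that process-based interface, here is a rough, untested sketch that keeps one mongo shell alive as an Emacs process; all the names here (mongo--process, mongo-start, mongo-eval) are made up:

(defvar mongo--process nil
  "Handle to a persistent mongo shell process.")

(defun mongo-start ()
  "Start a persistent mongo shell, logging output to the *mongo* buffer."
  (setq mongo--process
        (start-process "mongo" "*mongo*" "mongo" "--quiet")))

(defun mongo-eval (js)
  "Send JS to the running mongo shell.
Output accumulates in the *mongo* buffer; a real implementation would
install a process filter to collect and parse the response."
  (process-send-string mongo--process (concat js "\n")))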

The tradeoffs between this and sqlite are more clear now. With Mongo we do not have to create the normalized tables (although it is still a good idea to think about how to structure the documents, and if they should be a little normalized). It is much easier to map lisp data structures to Mongo queries than it is to do that with SQL queries. On the other hand, it is necessary to do some javascript programming with Mongo to get some desired output. It still seems worth exploring further.

Copyright (C) 2017 by John Kitchin. See the License for information about copying.


Querying a MongoDB bibtex database with Python and emacs-lisp

| categories: database, python, mongodb, emacs | tags:

I have been exploring using databases to help with searching my data. In this post we explore using MongoDB for bibtex entries. I am choosing bibtex entries because it is easy to parse bibtex files, I already have a lot of them, and I have several kinds of queries I regularly use. So, they are a good candidate to test out a new database on!

MongoDB is a noSQL database that is pretty easy to use. I installed it from homebrew, and then followed the directions to run the server.

With pymongo you can make a database as easily as this:

import bibtexparser

# Read the bibtex file to get entries
with open('../../../Dropbox/bibliography/references.bib', 'r') as bibfile:
    bp = bibtexparser.load(bibfile)
    entries = bp.entries

print("N = ", len(entries))

print(entries[0])

import pymongo
from pymongo import MongoClient
client = MongoClient('localhost', 27017)

# This creates the "entries" collection
db = client['bibtex'].entries

# add each entry
for entry in entries:
    db.insert_one(entry)

N = 1671 {'keyword': 'test, word', 'year': '2006', 'publisher': 'American Chemical Society (ACS)', 'title': 'The ACS Style Guide', 'ENTRYTYPE': 'book', 'editor': 'Janet S. Dodd', 'address': 'Washington, D.C.', 'ID': '2006-acs-style-guide', 'doi': '10.1021/bk-2006-styg', 'link': 'https://doi.org/10.1021/bk-2006-STYG', 'date_added': 'Wed Apr 1 10:17:54 2015', 'pages': 'nil'}

That was easy. We have a database with 1671 documents in it, and each document is essentially a dictionary of key-value pairs. You might even argue it was too easy. I didn't specify any structure to the entries at all. No required fields, no validation that the keys are spelled correctly, no validation on the values, e.g. you can see the year looks like a string. The benefit of that is that every entry went in, with no issues. On the other hand, the authors went in as a single string, as did the keywords, which affects our ability to search a little bit later. Note if you run that twice, it will add each entry again, since we do not check if the entry already exists.

A database is only useful though if it is easy to get stuff out of it. So, let's consider some test queries. First we find entries with years less than 1950. The query is basically a little json bundle that describes a field and the condition we want to match. Here we use the less-than operator, "$lt". The results come back as a list of dictionaries. This is in stark contrast to a SQL query, which is an expression in its own declarative language; a query here is a chunk of data that must get converted to code by the server. I am not 100% clear if the less-than here is in the string sense or the numeric sense, but for years it probably does not matter for a long time.

import pymongo
from pymongo import MongoClient
client = MongoClient('localhost', 27017)

db = client['bibtex'].entries

for i, result in enumerate(db.find({"year" : {"$lt": "1950"}})):
    print('{i: 2d}. {author}, {title}, {journal}, {year}.'.format(i=i+1, **result))
  1. Birch, Francis, Finite Elastic Strain of Cubic Crystals, Phys. Rev., 1947.
  2. Ditchburn, R. W. and Gilmour, J. C., The Vapor Pressures of Monatomic Vapors, Rev. Mod. Phys., 1941.
  3. J. Korringa, On the Calculation of the Energy of a Bloch Wave in a Metal, Physica, 1947.
  4. Nix, F. C. and MacNair, D., The Thermal Expansion of Pure Metals. {II}: Molybdenum, Palladium, Silver, Tantalum, Tungsten, Platinum, and Lead, Phys. Rev., 1942.

That seems easy enough, and those strings could easily be used as candidates for a selection tool like helm.

How about articles published by myself and my student Jacob Boes? This requires "and" logic. Conditions in a single query document are ANDed by default, but a Python dict cannot hold two "author" keys (the second silently overwrites the first), so we combine the two case-insensitive regular expression matches with the $and operator, along with an exact match on articles. I guess the regex matching has to be done on every document, since there is probably no way to index a regex match! This search was very fast, but it is not clear how fast it would be for a million entries. The matching is necessary because we stored all authors in a single field rather than splitting them into an array. We might still have to match strings even with an array, since an author might then be "John R. Kitchin", rather than further decomposed into first and last names.

import pymongo
from pymongo import MongoClient
client = MongoClient('localhost', 27017)

db = client['bibtex']
entries = db['entries']

for i, result in enumerate(entries.find({"ENTRYTYPE": "article",
                                         "author" : {"$regex": "kitchin", '$options' : 'i'},
                                         "author" : {"$regex": "boes", '$options' : 'i'}})):
    if result.get('doi', None):
        result['doi'] = 'https://doi.org/{doi}'.format(doi=result['doi'])
    else:
        result['doi'] = ''
    print('{i: 2d}. {author}, {title}, {journal}, {year}. {doi}'.format(i=i+1, **result).replace("\n", ""))
  1. Jacob R. Boes and Peter Kondratyuk and Chunrong Yin and JamesB. Miller and Andrew J. Gellman and John R. Kitchin, Core Level Shifts in {Cu-Pd} Alloys As a Function of BulkComposition and Structure, Surface Science, 2015. https://doi.org/10.1016/j.susc.2015.02.011
  2. Jacob R. Boes and Gamze Gumuslu and James B. Miller and AndrewJ. Gellman and John R. Kitchin, Estimating Bulk-Composition-Dependent \ce{H2} AdsorptionEnergies on \ce{CuxPd1-x} Alloy (111) Surfaces, ACS Catalysis, 2015. https://doi.org/10.1021/cs501585k
  3. Jacob R. Boes and Gamze Gumuslu and James B. Miller and AndrewJ. Gellman and John R. Kitchin, Supporting Information: Estimating Bulk-Composition-Dependent\ce{H2} Adsorption Energies on \ce{CuxPd1-x} Alloy (111)Surfaces, ACS Catalysis, 2015. https://doi.org/10.1021/cs501585k
  4. G. Gumuslu and P. Kondratyuk and J. R. Boes and B. Morrealeand J. B. Miller and J. R. Kitchin and A. J. Gellman, Correlation of Electronic Structure With Catalytic Activity:\ce{H2}-\ce{D2} Exchange Across \ce{CuxPd1-x}Composition Space, ACS Catalysis, 2015. https://doi.org/10.1021/cs501586t
  5. John D. Michael and Ethan L. Demeter and Steven M. Illes andQingqi Fan and Jacob R. Boes and John R. Kitchin, Alkaline Electrolyte and {Fe} Impurity Effects on thePerformance and Active-Phase Structure of {NiOOH} Thin Filmsfor {OER} Catalysis Applications, J. Phys. Chem. C, 2015. https://doi.org/10.1021/acs.jpcc.5b02458
  6. Jacob R. Boes and Mitchell C. Groenenboom and John A. Keithand John R. Kitchin, Neural Network and {Reaxff} Comparison for {Au} Properties, Int. J. Quantum Chem., 2016. https://doi.org/10.1002/qua.25115
  7. Jacob R. Boes and John R. Kitchin, Neural Network Predictions of Oxygen Interactions on a Dynamic Pd Surface, Molecular Simulation, Accepted 12/2016. https://doi.org/10.1080/08927022.2016.1274984
  8. Jacob R. Boes and John R. Kitchin, Modeling Segregation on {AuPd}(111) Surfaces With DensityFunctional Theory and Monte Carlo Simulations, Submitted to J. Phys. Chem. C, 2016.

We can find out how many different entry types we have, as well as how many distinct keyword entries there are. The documents do not separate the keywords though, so this is just the number of unique comma-separated keyword strings. We would have had to split those in advance to get a list of keywords we could search beyond string matching. Curiously, in my bibtex entries these are in a field called "keywords"; it appears the bibtex parser renamed the field to "keyword".

import pymongo
from pymongo import MongoClient
client = MongoClient('localhost', 27017)

db = client['bibtex']
entries = db['entries']

print(entries.distinct("ENTRYTYPE"))
print(len(entries.distinct("keyword")))
print(entries.find({"keyword": {"$exists": "true"}})[22]['keyword'])

['book', 'article', 'techreport', 'phdthesis', 'inproceedings', 'inbook', 'mastersthesis', 'misc', 'incollection'] 176 Bildungsw{\"a}rmen, Dichtefunktionalrechnungen, Perowskite, Thermochemie

1 Text searching

You can do text search as well. You first have to create an index on one or more fields, and then use the $text and $search operators. Here I made an index on a few fields and searched on it. Note that you can only have one text index, so think about it in advance! This simplifies the query a bit; we do not have to use the regex syntax for matching on a field.

import pymongo
from pymongo import MongoClient
client = MongoClient('localhost', 27017)

db = client['bibtex']
entries = db['entries']

entries.create_index([('author', pymongo.TEXT),
                      ('title', pymongo.TEXT),
                      ('keyword', pymongo.TEXT)], sparse=True)

for i, result in enumerate(entries.find({"$text" : {"$search": "kitchin", "$search": "boes"}})):
    print('{i: 2d}. {author}, {title}, {journal}, {year}.'.format(i=i, **result).replace("\n", ""))
  1. G. Gumuslu and P. Kondratyuk and J. R. Boes and B. Morrealeand J. B. Miller and J. R. Kitchin and A. J. Gellman, Correlation of Electronic Structure With Catalytic Activity:\ce{H2}-\ce{D2} Exchange Across \ce{CuxPd1-x}Composition Space, ACS Catalysis, 2015.
  2. Jacob R. Boes and Peter Kondratyuk and Chunrong Yin and JamesB. Miller and Andrew J. Gellman and John R. Kitchin, Core Level Shifts in {Cu-Pd} Alloys As a Function of BulkComposition and Structure, Surface Science, 2015.
  3. Jacob R. Boes and Gamze Gumuslu and James B. Miller and AndrewJ. Gellman and John R. Kitchin, Estimating Bulk-Composition-Dependent \ce{H2} AdsorptionEnergies on \ce{CuxPd1-x} Alloy (111) Surfaces, ACS Catalysis, 2015.
  4. Jacob R. Boes and John R. Kitchin, Neural Network Predictions of Oxygen Interactions on a Dynamic Pd Surface, Molecular Simulation, Accepted 12/2016.
  5. Jacob R. Boes and John R. Kitchin, Modeling Segregation on {AuPd}(111) Surfaces With DensityFunctional Theory and Monte Carlo Simulations, Submitted to J. Phys. Chem. C, 2016.
  6. Jacob R. Boes and Gamze Gumuslu and James B. Miller and AndrewJ. Gellman and John R. Kitchin, Supporting Information: Estimating Bulk-Composition-Dependent\ce{H2} Adsorption Energies on \ce{CuxPd1-x} Alloy (111)Surfaces, ACS Catalysis, 2015.
  7. John D. Michael and Ethan L. Demeter and Steven M. Illes andQingqi Fan and Jacob R. Boes and John R. Kitchin, Alkaline Electrolyte and {Fe} Impurity Effects on thePerformance and Active-Phase Structure of {NiOOH} Thin Filmsfor {OER} Catalysis Applications, J. Phys. Chem. C, 2015.
  8. Jacob R. Boes and Mitchell C. Groenenboom and John A. Keithand John R. Kitchin, Neural Network and {Reaxff} Comparison for {Au} Properties, Int. J. Quantum Chem., 2016.

We can use this to search for documents with orgmode in a keyword or title too.

import pymongo
from pymongo import MongoClient
client = MongoClient('localhost', 27017)

db = client['bibtex']
entries = db['entries']

entries.create_index([('author', pymongo.TEXT),
                      ('title', pymongo.TEXT),
                      ('keyword', pymongo.TEXT)], sparse=True)

for i, result in enumerate(entries.find({"$text" : {"$search": "orgmode"}})):
    print('{i: 2d}. {author}, {title}, {journal}, {year}.'.format(i=i+1, **result).replace("\n", ""))
  1. John R. Kitchin, Data Sharing in Surface Science, Surface Science, 2016.
  2. Zhongnan Xu and John R. Kitchin, Probing the Coverage Dependence of Site and AdsorbateConfigurational Correlations on (111) Surfaces of LateTransition Metals, J. Phys. Chem. C, 2014.
  3. Xu, Zhongnan and Rossmeisl, Jan and Kitchin, John R., A Linear Response {DFT}+{U} Study of Trends in the OxygenEvolution Activity of Transition Metal Rutile Dioxides, The Journal of Physical Chemistry C, 2015.
  4. Prateek Mehta and Paul A. Salvador and John R. Kitchin, Identifying Potential \ce{BO2} Oxide Polymorphs for EpitaxialGrowth Candidates, ACS Appl. Mater. Interfaces, 2015.
  5. Xu, Zhongnan and Joshi, Yogesh V. and Raman, Sumathy andKitchin, John R., Accurate Electronic and Chemical Properties of 3d TransitionMetal Oxides Using a Calculated Linear Response {U} and a {DFT+ U(V)} Method, The Journal of Chemical Physics, 2015.
  6. Zhongnan Xu and John R. Kitchin, Relationships Between the Surface Electronic and ChemicalProperties of Doped 4d and 5d Late Transition Metal Dioxides, The Journal of Chemical Physics, 2015.
  7. Zhongnan Xu and John R Kitchin, Tuning Oxide Activity Through Modification of the Crystal andElectronic Structure: From Strain To Potential Polymorphs, Phys. Chem. Chem. Phys., 2015.
  8. Jacob R. Boes and Gamze Gumuslu and James B. Miller and AndrewJ. Gellman and John R. Kitchin, Supporting Information: Estimating Bulk-Composition-Dependent\ce{H2} Adsorption Energies on \ce{CuxPd1-x} Alloy (111)Surfaces, ACS Catalysis, 2015.
  9. Kitchin, John R., Examples of Effective Data Sharing in Scientific Publishing, ACS Catalysis, 2015.
  10. Curnan, Matthew T. and Kitchin, John R., Effects of Concentration, Crystal Structure, Magnetism, andElectronic Structure Method on First-Principles Oxygen VacancyFormation Energy Trends in Perovskites, The Journal of Physical Chemistry C, 2014.
  11. Kitchin, John R. and Van Gulick, Ana E. and Zilinski, Lisa D., Automating Data Sharing Through Authoring Tools, International Journal on Digital Libraries, 2016.
  12. Jacob R. Boes and Gamze Gumuslu and James B. Miller and AndrewJ. Gellman and John R. Kitchin, Estimating Bulk-Composition-Dependent \ce{H2} AdsorptionEnergies on \ce{CuxPd1-x} Alloy (111) Surfaces, ACS Catalysis, 2015.
  13. Zhongnan Xu and John R. Kitchin, Relating the Electronic Structure and Reactivity of the 3dTransition Metal Monoxide Surfaces, Catalysis Communications, 2014.
  14. Spencer D. Miller and Vladimir V. Pushkarev and AndrewJ. Gellman and John R. Kitchin, Simulating Temperature Programmed Desorption of Oxygen on{P}t(111) Using {DFT} Derived Coverage Dependent DesorptionBarriers, Topics in Catalysis, 2014.
  15. Hallenbeck, Alexander P. and Kitchin, John R., Effects of \ce{O_2} and \ce{SO_2} on the Capture Capacity of aPrimary-Amine Based Polymeric \ce{CO_2} Sorbent, Industrial \& Engineering Chemistry Research, 2013.

2 Querying from emacs-lisp

It is hard to get too excited about this if it is not easy to query from emacs and get data in a form we can use in emacs ;) The json library allows us to convert lisp data structures to json pretty easily. For example:

(require 'json)

(json-encode '((ENTRYTYPE . article)
               (author . (($regex . kitchin)
                          ($options . i)))
               (author . (($regex . boes)
                          ($options . i)))))
{"ENTRYTYPE":"article","author":{"$regex":"kitchin","$options":"i"},"author":{"$regex":"boes","$options":"i"}}

So, we can use alist syntax to build up the query. Then we send it to mongo using mongoexport, which returns a json string that we can read back into emacs to get lisp data. Here is an example that runs a query; we print the first element of the results.

(pp
 (aref (json-read-from-string
        (shell-command-to-string
         (format "mongoexport --quiet --jsonArray -d bibtex -c entries -q '%s'"
                 (json-encode '((ENTRYTYPE . article)
                                (author . (($regex . kitchin)
                                           ($options . i)))
                                (author . (($regex . boes)
                                           ($options . i))))))))
       0))
((_id
  ($oid . "5878d9644c114f59fe86cb36"))
 (author . "Jacob R. Boes and Peter Kondratyuk and Chunrong Yin and James\nB. Miller and Andrew J. Gellman and John R. Kitchin")
 (year . "2015")
 (title . "Core Level Shifts in {Cu-Pd} Alloys As a Function of Bulk\nComposition and Structure")
 (ENTRYTYPE . "article")
 (ID . "boes-2015-core-cu")
 (keyword . "DESC0004031, early-career")
 (volume . "640")
 (doi . "10.1016/j.susc.2015.02.011")
 (link . "https://doi.org/10.1016/j.susc.2015.02.011")
 (issn . "0039-6028")
 (journal . "Surface Science")
 (pages . "127-132"))

That is pretty sweet, we get a lisp data structure we can use. We can wrap that into a reasonable looking function here:

(defun mongo-find (db collection query)
  "Run QUERY against DB.COLLECTION via mongoexport.
QUERY is a lisp structure that is converted to json. Returns the
results as lisp data."
  (json-read-from-string
   (shell-command-to-string
    (format "mongoexport --quiet --jsonArray -d %s -c %s -q '%s'"
            db collection (json-encode query)))))
mongo-find

Now we can use the function to query the database, and then format the results. Here we look at the example of articles with authors that match "kitchin" and "boes".

(loop for counter from 1 for entry across
      (mongo-find "bibtex" "entries" '((ENTRYTYPE . article)
                                       (author . (($regex . kitchin)
                                                  ($options . i)))
                                       (author . (($regex . boes)
                                                  ($options . i)))))
      do
      (setq entry (append `(,(cons "counter" counter)) entry))
      ;; make sure we have a doi field.
      (if (assoc 'doi entry)
          (push (cons "doi" (format "https://doi.org/%s" (cdr (assoc 'doi entry)))) entry)
        (push (cons "doi" "") entry))
      concat
      (concat (replace-regexp-in-string
               "\n" " "
               (s-format "${counter}. ${author}, ${title} (${year}). ${doi}"
                         'aget entry)) "\n"))
1. Jacob R. Boes and Peter Kondratyuk and Chunrong Yin and James B. Miller and Andrew J. Gellman and John R. Kitchin, Core Level Shifts in {Cu-Pd} Alloys As a Function of Bulk Composition and Structure (2015). https://doi.org/10.1016/j.susc.2015.02.011
2. Jacob R. Boes and Gamze Gumuslu and James B. Miller and Andrew J. Gellman and John R. Kitchin, Estimating Bulk-Composition-Dependent \ce{H2} Adsorption Energies on \ce{Cu_{x}Pd_{1-x}} Alloy (111) Surfaces (2015). https://doi.org/10.1021/cs501585k
3. Jacob R. Boes and Gamze Gumuslu and James B. Miller and Andrew J. Gellman and John R. Kitchin, Supporting Information: Estimating Bulk-Composition-Dependent \ce{H2} Adsorption Energies on \ce{Cu_{x}Pd_{1-x}} Alloy (111) Surfaces (2015). https://doi.org/10.1021/cs501585k
4. G. Gumuslu and P. Kondratyuk and J. R. Boes and B. Morreale and J. B. Miller and J. R. Kitchin and A. J. Gellman, Correlation of Electronic Structure With Catalytic Activity: \ce{H2}-\ce{D2} Exchange Across \ce{Cu_{x}Pd_{1-x}} Composition Space (2015). https://doi.org/10.1021/cs501586t
5. John D. Michael and Ethan L. Demeter and Steven M. Illes and Qingqi Fan and Jacob R. Boes and John R. Kitchin, Alkaline Electrolyte and {Fe} Impurity Effects on the Performance and Active-Phase Structure of {NiOOH} Thin Films for {OER} Catalysis Applications (2015). https://doi.org/10.1021/acs.jpcc.5b02458
6. Jacob R. Boes and Mitchell C. Groenenboom and John A. Keith and John R. Kitchin, Neural Network and {Reaxff} Comparison for {Au} Properties (2016). https://doi.org/10.1002/qua.25115
7. Jacob R. Boes and John R. Kitchin, Neural Network Predictions of Oxygen Interactions on a Dynamic Pd Surface (Accepted 12/2016). https://doi.org/10.1080/08927022.2016.1274984
8. Jacob R. Boes and John R. Kitchin, Modeling Segregation on {AuPd}(111) Surfaces With Density Functional Theory and Monte Carlo Simulations (2016).

Wow, that looks like a pretty lispy way to query the database and use the results. It is probably pretty easy to do similar things for inserting and updating documents. I will save that for another day.

3 Summary thoughts

This is not an exhaustive study of Mongo for a bibtex database. It does illustrate that it is potentially useful. Imagine a group of users can enter bibtex entries, and then share them through a central server. Or you query the server for entries and then select them using helm/ivy. That is probably faster than parsing large bibtex files (note, in org-ref I already cache the files in parsed form for performance reasons!).
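
For example, a completion interface over query results might look like this sketch (it assumes the mongo-find defined above and that ivy is installed):

(ivy-read "Reference: "
          (loop for entry across (mongo-find "bibtex" "entries"
                                             '((ENTRYTYPE . article)))
                collect (cons (format "%s, %s (%s)"
                                      (cdr (assoc 'author entry))
                                      (cdr (assoc 'title entry))
                                      (cdr (assoc 'year entry)))
                              entry)))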

It would make sense to split the authors, and keywords in another version of this database. It also could make sense to have a field that is the bibtex string, and to do text search on that string. That way you get everything in the entry for searching, and an easy way to generate bibtex files without having to reconstruct them.

It is especially interesting to run the queries through emacs-lisp since we get the benefit of editing lisp code while writing the query, e.g. parenthesis navigation, less quoting, etc… and we get back lisp data that can be used to construct helm/ivy queries, or other emacs things. That makes this look competitive with emacsql at least for the syntax. I predict that there will be more posts on this in the future.

Copyright (C) 2017 by John Kitchin. See the License for information about copying.


Scoring elfeed articles

| categories: elfeed, emacs | tags:

I use elfeed to read RSS feeds of scientific journals, python, emacs, and lisp blogs, and the emacs stackexchange feed. Here are the current feeds I follow.

(mapcar 'list elfeed-feeds)
(http://syndic8.scopus.com/getMessage?registrationId=ADEJAEEKHFESCDEOCDFOAHGSCDHSAKHREFMSADHNJA cmu)
http://feeds.feedburner.com/acs/accacs
http://feeds.feedburner.com/acs/enfuem
http://feeds.feedburner.com/acs/esthag
http://feeds.feedburner.com/acs/jacsat
http://feeds.feedburner.com/acs/jpcbfk
http://feeds.feedburner.com/acs/jpccck
http://feeds.feedburner.com/acs/jpclcd
http://feeds.feedburner.com/acs/cmatex
http://feeds.feedburner.com/acs/jctcce
http://feeds.feedburner.com/acs/jcisd8
http://feeds.feedburner.com/acs/iecred
http://feeds.aps.org/rss/recent/prl.xml
http://feeds.aps.org/rss/recent/prb.xml
http://www.sciencemag.org/rss/current.xml
http://feeds.nature.com/nature/rss/current
http://feeds.nature.com/nmat/rss/current
http://feeds.nature.com/nchem/rss/current
http://rss.sciencedirect.com/publication/science/09270256
http://onlinelibrary.wiley.com/rss/journal/10.1002/(ISSN)1521-3773
http://scitation.aip.org/rss/content/aip/journal/jcp/latestarticles;jsessionid=6k76xb11z253.x-aip-live-06?fmt=rss
(http://planetpython.org/rss20.xml python)
(http://planet.scipy.org/rss20.xml python)
(http://planet.emacsen.org/atom.xml emacs)
http://planet.lisp.org/rss20.xml
http://catalysis-preprint-archive.github.io/updates.rss
https://www.cmu.edu/policies/news/rss-feed.rss
(http://emacs.stackexchange.com/feeds emacs)

I get a lot of articles this way. The current size of the database is:

(elfeed-db-size)
79721

Elfeed tells me I have over 300 unread entries to review at the moment.

(elfeed-search--count-unread)
341/363:24

To deal with this deluge, I have done a couple of things. I set up some new key-bindings so I can mark entries as read when the titles do not look interesting. Having two keys lets me alternate fingers, so they do not get too tired (that really happens some days!).

;; help me alternate fingers in marking entries as read
(define-key elfeed-search-mode-map (kbd "f") 'elfeed-search-untag-all-unread)
(define-key elfeed-search-mode-map (kbd "j") 'elfeed-search-untag-all-unread)

I also set up auto-tagging of the emacs and python feeds, and defined some custom faces so these tags are highlighted and easy to see. Anything highlighted in blue is related to emacs, green to python, and pink to my department, and I can type s, then the tag, to see only those entries. Here is what my feed looks like:

Today I want to explore adding tags to entries to further prioritize them. There is a way to tag entries described here: https://github.com/skeeto/elfeed#tag-hooks where you can create patterns to match an entry's feed title, feed url, title or link. Basically, you create a function that takes an entry, and have it add or remove a tag conditionally.
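
For reference, the tag-hook pattern from that README looks like this (elfeed-make-tagger is part of elfeed; the url pattern here is just a placeholder):

;; tag everything from feeds whose url matches a pattern
(add-hook 'elfeed-new-entry-hook
          (elfeed-make-tagger :feed-url "youtube\\.com"
                              :add '(video youtube)))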

I want to tag entries that meet certain criteria, for example keywords, and set a tag based on the number of matches. Ideally, one day this would be integrated with machine learning so it could rank entries against entries I have liked, but today we set up code that scores an entry by the number of pattern matches, and then tags it so it gets highlighted for me. First, we define two custom faces and set up elfeed to use them. I will use two tags: important and relevant. relevant is for entries that score exactly 1, and important is for entries that score greater than 1.

(defface relevant-elfeed-entry
  `((t :background ,(color-lighten-name "orange1" 40)))
  "Marks a relevant Elfeed entry.")

(defface important-elfeed-entry
  `((t :background ,(color-lighten-name "OrangeRed2" 40)))
  "Marks an important Elfeed entry.")

(push '(relevant relevant-elfeed-entry)
      elfeed-search-face-alist)

(push '(important important-elfeed-entry)
      elfeed-search-face-alist)

In elfeed, each entry is a structure, and we can access the title and content for matching. Here is an example of a simple scoring function. The idea is just to match patterns, and then add to the score if it matches. This is not as advanced as gnus scoring, but it is a good starting point.

(defun score-elfeed-entry (entry)
  (let ((title (elfeed-entry-title entry))
        (content (elfeed-deref (elfeed-entry-content entry)))
        (score 0))
    (loop for (pattern n) in '(("alloy" 1)
                               ("machine learning\\|neural" 1)
                               ("database" 1)
                               ("reproducible" 1)
                               ("carbon dioxide\\|CO2" 1)
                               ("oxygen evolution\\|OER\\|electrolysis" 1)
                               ("perovskite\\|polymorph\\|epitax" 1)
                               ("kitchin" 2))
          if (string-match pattern title)
          do (incf score n)
          if (string-match pattern content)
          do (incf score n))
    (message "%s - %s" title score)

    ;; store score for later in case I ever integrate machine learning
    (setf (elfeed-meta entry :my/score) score)

    (cond
     ((= score 1)
      (elfeed-tag entry 'relevant))
     ((> score 1)
      (elfeed-tag entry 'important)))
    entry))

(add-hook 'elfeed-new-entry-hook 'score-elfeed-entry)
score-elfeed-entry

New entries now automatically get tagged relevant or important, depending on the score that function gives them, and they get color-coded. The feed now looks like this:

I saved some bookmarks (http://nullprogram.com/blog/2015/12/03/) for just the important or relevant entries, so I can jump to new relevant entries with C-x r b and selecting the matching bookmark. These work from anywhere in Emacs.

  elfeed @6-months-ago +unread +relevant
  elfeed @6-months-ago +unread +important

I usually access elfeed from a command that shows me everything. Here, I define key-bindings to show just the important or relevant entries. I could not see a way to get an "or" in the filter to show both at once. These keys make it one key press to show only those entries, and one to get back to the full list.

(define-key elfeed-search-mode-map (kbd "i")
  (lambda () (interactive)
    (elfeed-search-set-filter "@6-months-ago +unread +important")))

(define-key elfeed-search-mode-map (kbd "v")
  (lambda () (interactive)
    (elfeed-search-set-filter "@6-months-ago +unread +relevant")))

(define-key elfeed-search-mode-map (kbd "c")
  (lambda () (interactive)
    (elfeed-search-set-filter "@6-months-ago +unread")))

That summarizes the experiment of the day. There is clearly some room for improvement in the scoring function, e.g. moving the patterns out of the function and into a customizable variable (see the sketch below), or making the patterns specific to either the title or the content. I am going to try this for a few days first and see if it is actually helpful.
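
Moving the patterns out might look like this sketch (my/elfeed-score-patterns is a made-up name):

(defcustom my/elfeed-score-patterns
  '(("alloy" 1)
    ("machine learning\\|neural" 1)
    ("kitchin" 2))
  "List of (REGEXP SCORE) pairs used to score new elfeed entries."
  :type '(repeat (list regexp integer))
  :group 'elfeed)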

Copyright (C) 2017 by John Kitchin. See the License for information about copying.


Find stuff in org-mode anywhere

| categories: database, orgmode, emacs | tags:

I use org-mode extensively. I write scientific papers, keep notes on meetings, write letters of recommendation, notes on scientific articles, keep TODO lists in projects, help files for software, write lecture notes, students send me homework solutions in it, it is a contact database, … Some files are on Dropbox, Google Drive, Box, some in git repos, etc. The problem is that leads to org-files everywhere on my hard drive. At this point I have several thousand org-files that span about five years of work.

It is not that easy to find them after a while. Yes, there are things like recentf, bookmarks, counsel-find-file, helm-for-files, counsel/helm-locate, helm/counsel-grep/ag/pt, projectile for searching within a project, a slew of tools to search open buffers, there is recoll, etc… There are desktop search tools, and of course, good organization habits. Over a five year time span, though, these change, and I have yet to find a solution for finding what I want. What about a file I made a year ago that is not in the current directory or this project, and not in my org-agenda-files list? How do I get a dynamic todo list across all these files? Or find all the files that cite a particular bibtex entry, or that were authored by a particular student?

Previously, I indexed org files with Swish-e to make it easy to search them, with an ability to search just headlines, or paragraphs, etc. The problem with that is the nightly indexing was slow since I basically had to regenerate the database each time due to limitations in Swish-e. Finally I have gotten around to the next iteration of this idea, which is a better database. In this post, I explore using sqlite to store headlines and links in org-files.

The idea is that anytime I open or save any org file, it will be added/updated in the database. The database will store the headlines and its properties and content, as well as the location and properties of all links and file keywords. That means I should be able to efficiently query all org files I have ever visited to find TODO headlines, tagged headlines, different types of links, etc. Here we try it out and see if it is useful.

1 The database design

I used emacsql to create and interact with a sqlite3 database. It is a lispy way to generate SQL queries. I will not talk much about the code here; you can see this version of org-db.el. The database design consists of several tables that contain the filenames, headlines, tags, properties, (optionally) headline-content, headline-tags, headline-properties, and links. The lisp code is a work in progress, and not something I use on a daily basis yet. This post is a proof of concept to see how well this approach works.

I use hooks to update the database when an org-file is opened (only if it is different than what is in the database based on an md5 hash) and when it is saved. Basically, these functions delete the current entries in the database for a file, then use regular expressions to go to each headline or link in the file, and add data back to the database. I found this to be faster than parsing the org-file with org-element especially for large files. Since this is all done by a hook, anytime I open an org-file anywhere it gets added/updated to the database. The performance of this is ok. This approach will not guarantee the database is 100% accurate all the time (e.g. if something modifies the file outside of emacs, like a git pull), but it doesn't need to be. Most of the files do not change often, the database gets updated each time you open a file, and you can always reindex the database from files it knows about. Time will tell how often that seems necessary.
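
The wiring is roughly like this (a sketch; the real functions live in org-db.el and the names here are assumptions):

;; update the database when an org file is opened or saved. The md5
;; check described above would live inside the update function.
(add-hook 'org-mode-hook
          (lambda ()
            (org-db-update-buffer)  ; hypothetical update function
            (add-hook 'after-save-hook 'org-db-update-buffer nil t)))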

emacsql lets you use lisp code to generate SQL that is sent to the database. Here is an example:

(emacsql-flatten-sql [:select [name] :from main:sqlite_master :where (= type table)])
SELECT name FROM main.sqlite_master WHERE type = "table";

There are some nuances; for example, main:sqlite_master gets converted to main.sqlite_master. You use vectors, keywords, and sexps to set up the command. emacsql turns a name like filename-id into filename_id. It was not too difficult to figure out, and the author of emacsql was really helpful on a few points. I will be referring to this post in the future to remember some of these nuances!

Here is a list of tables in the database. There are a few primary tables, and then some that store tags, properties, and keywords on the headlines. This is typical of emacsql code; it is a lisp expression that generates SQL. In this next expression org-db is a variable that stores the database connection created in org-db.el.

(emacsql org-db [:select [name] :from main:sqlite_master :where (= type table)])
files
tags
properties
keywords
headlines
headline_content
headline_content_content
headline_content_segments
headline_content_segdir
headline_content_docsize
headline_content_stat
headline_tags
headline_properties
file_keywords
links

Here is a description of the columns in the files table:

(emacsql org-db [:pragma (funcall table_info files)])
0 rowid INTEGER 0 nil 1
1 filename 0 nil 0  
2 md5 0 nil 0  

and the headlines table.

(emacsql org-db [:pragma (funcall table_info headlines)])
0 rowid INTEGER 0 nil 1
1 filename_id 0 nil 0  
2 title 0 nil 0  
3 level 0 nil 0  
4 todo_keyword 0 nil 0  
5 todo_type 0 nil 0  
6 archivedp 0 nil 0  
7 commentedp 0 nil 0  
8 footnote_section_p 0 nil 0  
9 begin 0 nil 0  
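
For flavor, creating a table like the files table with emacsql looks roughly like this (a simplified sketch, not the actual org-db.el schema):

(emacsql org-db [:create-table files
                 ([(rowid integer :primary-key) filename md5])])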

Tags and properties on a headline are stored in headline-tags and headline-properties.

The database is not large if all it has is headlines and links (no content). It got up to half a GB with content, and seemed a little slow, so for this post I leave the content out.

du -hs ~/org-db/org-db.sqlite
56M /Users/jkitchin/org-db/org-db.sqlite

Here we count how many files are in the database. These are just the org-files in my Dropbox folder. There are a lot of them! If I include all the org-files from my research and teaching projects this number grows to about 10,000! You do not want to run org-map-entries on that. Note this also includes all of the org_archive files.

(emacsql org-db [:select (funcall count) :from files])
1569

Here is the headlines count. You can see there is no chance of remembering where these are because there are so many!

(emacsql org-db [:select (funcall count) :from headlines])
38587

And the links. So many links!

(emacsql org-db [:select (funcall count) :from links])
303739

That is a surprising number of links.

2 Querying the link table

Let's see how many cite links from org-ref there are.

(emacsql org-db [:select (funcall count) :from links :where (= type "cite")])
14766

Wow, I find that to also be surprisingly large! I make a living writing proposals and scientific papers, and I wrote org-ref to make that easier, so maybe it should not be so surprising. We can search the link database for files containing citations of "kitchin-2015-examp" like this. The links table only stores the filename-id, so we join it with the files table to get useful information. Here we show the list of files that contain a citation of that reference. It is a mix of manuscripts, proposals, presentations, documentation files and notes.

(emacsql org-db [:select :distinct [files:filename]
                 :from links :inner :join files :on (= links:filename-id files:rowid) 
                 :where (and (= type "cite") (like path "%kitchin-2015-examp%"))])
/Users/jkitchin/Dropbox/CMU/manuscripts/2015/Research_Data_Publishing_Paper/manuscript.org
/Users/jkitchin/Dropbox/CMU/manuscripts/2015/Research_Data_Publishing_Paper/manuscript-2015-06-29/manuscript.org
/Users/jkitchin/Dropbox/CMU/manuscripts/2015/Research_Data_Publishing_Paper/manuscript-2015-10-10/manuscript.org
/Users/jkitchin/Dropbox/CMU/manuscripts/2015/Research_Data_Publishing_Paper/manuscript-2016-03-09/manuscript.org
/Users/jkitchin/Dropbox/CMU/manuscripts/2015/Research_Data_Publishing_Paper/manuscript-2016-04-18/manuscript.org
/Users/jkitchin/Dropbox/CMU/manuscripts/2015/human-readable-data/manuscript.org
/Users/jkitchin/Dropbox/CMU/manuscripts/@archive/2015/Research_Data_Publishing_Paper/manuscript.org
/Users/jkitchin/Dropbox/CMU/manuscripts/@archive/2015/Research_Data_Publishing_Paper/manuscript-2015-06-29/manuscript.org
/Users/jkitchin/Dropbox/CMU/manuscripts/@archive/2015/Research_Data_Publishing_Paper/manuscript-2015-10-10/manuscript.org
/Users/jkitchin/Dropbox/CMU/manuscripts/@archive/2015/Research_Data_Publishing_Paper/manuscript-2016-03-09/manuscript.org
/Users/jkitchin/Dropbox/CMU/manuscripts/@archive/2015/Research_Data_Publishing_Paper/manuscript-2016-04-18/manuscript.org
/Users/jkitchin/Dropbox/CMU/manuscripts/@archive/2015/human-readable-data/manuscript.org
/Users/jkitchin/Dropbox/CMU/meetings/@archive/2015/BES-2015/doe-bes-wed-data-briefing/doe-bes-wed-data-sharing.org
/Users/jkitchin/Dropbox/CMU/meetings/@archive/2015/NIST-july-2015/data-sharing.org
/Users/jkitchin/Dropbox/CMU/meetings/@archive/2015/UD-webinar/ud-webinar.org
/Users/jkitchin/Dropbox/CMU/meetings/@archive/2016/AICHE/data-sharing/data-sharing.org
/Users/jkitchin/Dropbox/CMU/meetings/@archive/2016/Spring-ACS/data-sharing/data-sharing.org
/Users/jkitchin/Dropbox/CMU/projects/DOE-Early-Career/annual-reports/final-report/kitchin-DESC0004031-final-report.org
/Users/jkitchin/Dropbox/CMU/proposals/@archive/2015/DOE-renewal/proposal-v2.org
/Users/jkitchin/Dropbox/CMU/proposals/@archive/2015/DOE-renewal/archive/proposal.org
/Users/jkitchin/Dropbox/CMU/proposals/@archive/2016/DOE-single-atom-alloy/proposal.org
/Users/jkitchin/Dropbox/CMU/proposals/@archive/2016/MRSEC/MRSEC-IRG-metastable-materials-preproposal/IRG-concept.org
/Users/jkitchin/Dropbox/CMU/proposals/@archive/2016/ljaf-open-science/kitchin-proposal.org
/Users/jkitchin/Dropbox/CMU/proposals/@archive/2016/nsf-germination/project-description.org
/Users/jkitchin/Dropbox/CMU/proposals/@archive/2016/nsf-reu-supplement/project-description.org
/Users/jkitchin/Dropbox/CMU/proposals/@archive/2016/proctor-and-gamble-education/proposal.org
/Users/jkitchin/Dropbox/bibliography/notes.org
/Users/jkitchin/Dropbox/kitchingroup/jmax/org-ref/citeproc/readme.org
/Users/jkitchin/Dropbox/kitchingroup/jmax/org-ref/citeproc/readme-unsrt.org
/Users/jkitchin/Dropbox/kitchingroup/jmax/org-ref/citeproc/readme-author-year.org
/Users/jkitchin/Dropbox/kitchingroup/jmax/org-ref/tests/test-1.org
/Users/jkitchin/Dropbox/kitchingroup/jmax/org-ref/tests/sandbox/elpa/org-ref-20160122.1725/citeproc/readme.org

Obviously we could use these results to generate candidates for something like helm or ivy, like this.

(ivy-read "Open: " (emacsql org-db [:select [files:filename links:begin]
                                    :from links :inner :join files :on (= links:filename-id files:rowid)
                                    :where (and (= type "cite") (like path "%kitchin-2015-examp%"))])
          :action '(1 ("o"
                       (lambda (c)
                         ;; each candidate is a (filename begin) list
                         (find-file (car c))
                         (goto-char (nth 1 c))
                         (org-show-entry))
                       "open")))
/Users/jkitchin/Dropbox/CMU/manuscripts/2015/human-readable-data/manuscript.org

Now, you can find every org-file containing any bibtex key as a citation. Since SQL is the query language, you should be able to build really sophisticated queries that combine filters for multiple citations, different kinds of citations, etc.
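For example, here is a sketch of a combined filter. kitchin-2016-autom is just a made-up second key for illustration, and citenum is another of the cite link types org-ref defines:

(emacsql org-db [:select :distinct [files:filename]
                 :from links :inner :join files :on (= links:filename-id files:rowid)
                 :where (and (or (= type "cite") (= type "citenum"))
                             (or (like path "%kitchin-2015-examp%")
                                 (like path "%kitchin-2016-autom%")))])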

3 Headline queries

Every headline is stored, along with its location, tags and properties. We can use the database to find headlines that are tagged or that have certain properties. You can see here that I have 293 tags in the database.

(emacsql org-db [:select (funcall count) :from tags])
293

Here we find headlines tagged with electrolyte. I tagged some papers I read with this at some point.

(emacsql org-db [:select :distinct [files:filename headlines:title]
                 :from headlines :inner :join headline-tags :on (=  headlines:rowid headline-tags:headline-id)
                 :inner :join tags :on (= tags:rowid headline-tags:tag-id)
                 :inner :join files :on (= headlines:filename-id files:rowid)
                 :where (= tags:tag "electrolyte") :limit 5])
/Users/jkitchin/Dropbox/org-mode/prj-doe-early-career.org 2010 - Nickel-borate oxygen-evolving catalyst that functions under benign conditions
/Users/jkitchin/Dropbox/bibliography/notes.org 1971 - A Correlation of the Solution Properties and the Electrochemical Behavior of the Nickel Hydroxide Electrode in Binary Aqueous Alkali Hydroxides
/Users/jkitchin/Dropbox/bibliography/notes.org 1981 - Studies concerning charged nickel hydroxide electrodes IV. Reversible potentials in LiOH, NaOH, RbOH and CsOH
/Users/jkitchin/Dropbox/bibliography/notes.org 1986 - The effect of lithium in preventing iron poisoning in the nickel hydroxide electrode
/Users/jkitchin/Dropbox/bibliography/notes.org 1996 - The role of lithium in preventing the detrimental effect of iron on alkaline battery nickel hydroxide electrode: A mechanistic aspect

Here we see how many entries have an EMAIL property. These could serve as contacts to send email to.

(emacsql org-db [:select [(funcall count)] :from
                 headlines :inner :join headline-properties :on (=  headlines:rowid headline-properties:headline-id)
                 :inner :join properties :on (= properties:rowid headline-properties:property-id)
                 :where (and (= properties:property "EMAIL") (not (null headline-properties:value)))])
7452

If you want to see the ones that match "jkitchin", here they are.

(emacsql org-db [:select :distinct [headlines:title headline-properties:value] :from
             headlines :inner :join headline-properties :on (=  headlines:rowid headline-properties:headline-id)
             :inner :join properties :on (= properties:rowid headline-properties:property-id)
             :where (and (= properties:property "EMAIL") (like headline-properties:value "%jkitchin%"))])
John Kitchin jkitchin@andrew.cmu.edu
John Kitchin jkitchin@cmu.edu
Kitchin, John jkitchin@andrew.cmu.edu
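These results could plug straight into a mail workflow. Here is a sketch; my-org-db-mail is a hypothetical command that reuses the EMAIL query above with completing-read:

(defun my-org-db-mail ()
  "Pick a contact from the database and start an email to them.
A sketch; this command is not part of org-db.el."
  (interactive)
  (let* ((rows (emacsql org-db [:select :distinct [headlines:title headline-properties:value]
                                :from headlines
                                :inner :join headline-properties :on (= headlines:rowid headline-properties:headline-id)
                                :inner :join properties :on (= properties:rowid headline-properties:property-id)
                                :where (and (= properties:property "EMAIL")
                                            (not (null headline-properties:value)))]))
         (candidates (mapcar (lambda (row)
                               (format "%s <%s>" (nth 0 row) (nth 1 row)))
                             rows)))
    (compose-mail (completing-read "Contact: " candidates))))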

Here is a query to find the number of headlines where the deadline matches 2017. Looks like I am already busy!

(emacsql org-db [:select (funcall count) :from
             headlines :inner :join headline-properties :on (=  headlines:rowid headline-properties:headline-id)
             :inner :join properties :on (= properties:rowid headline-properties:property-id)
             :where (and (= properties:property "DEADLINE") (glob headline-properties:value "*2017*"))])
50

4 Keyword queries

We also store file keywords, so we can search on document titles, authors, etc. Here are five documents with titles longer than 35 characters sorted in descending order.

(emacsql org-db [:select :distinct [value] :from
                 file-keywords :inner :join keywords :on (= file-keywords:keyword-id keywords:rowid)
                 :where (and (> (funcall length value) 35) (= keywords:keyword "TITLE"))
                 :order :by value :desc
                 :limit 5])
pycse - Python3 Computations in Science and Engineering
org-show - simple presentations in org-mode
org-mode - A Human Readable, Machine Addressable Approach to Data Archiving and Sharing in Science and Engineering
modifying emacs to make typing easier.
jmax - John's customizations to maximize Emacs

It is possible to search on AUTHOR and other keywords too. My memos have a #+SUBJECT keyword, so I can find memos on a subject. They also use the LATEX_CLASS of cmu-memo, so I can find all of them easily too:

(emacsql org-db [:select [(funcall count)] :from
                 file-keywords :inner :join keywords :on (= file-keywords:keyword-id keywords:rowid)
                 :where (and (= value "cmu-memo") (= keywords:keyword "LATEX_CLASS"))
                 :limit 5])
119

How about that, 119 memos… Still it sure is nice to be able to find them.
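Finding memos on a subject is a similar query. A sketch, assuming SUBJECT keywords are indexed in the keywords table the same way TITLE is, with an arbitrary search pattern:

(emacsql org-db [:select :distinct [value] :from
                 file-keywords :inner :join keywords :on (= file-keywords:keyword-id keywords:rowid)
                 :where (and (= keywords:keyword "SUBJECT")
                             (like value "%travel%"))])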

5 Full text search

In theory, the database has a table for the headline content, and it should be fully searchable. I found the database got a little sluggish and grew to nearly half a GB in size when using it, so I am leaving it out for now.

6 Summary

The foundation for something really good is here. It is still a little tedious to write the queries with all the table joins, but some of that could be wrapped into a function for a query. I like the lispy style of the queries, although it can be tricky to map all the concepts onto SQL. A function that might wrap this could look like this:

(org-db-query (and (= properties:property "DEADLINE") (glob headline-properties:value "*2017*")))
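Here is a sketch of how that wrapper might be implemented, as a macro that splices the :where clause onto the boilerplate joins. The join set is fixed to the headline-property tables for illustration:

(defmacro org-db-query (where)
  "Run a headline/property query with WHERE as the emacsql :where expression.
A sketch; a real version would need to choose joins based on WHERE."
  `(emacsql org-db
            (vconcat [:select :distinct [files:filename headlines:title]
                      :from headlines
                      :inner :join headline-properties :on (= headlines:rowid headline-properties:headline-id)
                      :inner :join properties :on (= properties:rowid headline-properties:property-id)
                      :inner :join files :on (= headlines:filename-id files:rowid)
                      :where]
                     (vector ',where))))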

This is what it would ideally look like using the org tag/property match syntax. Somehow that string would have to get expanded to generate the code above. I do not have a sense for how difficult that would be. It might not be hard with a recursive descent parser, written by the same author as emacsql.

(org-db-query "DEADLINE={2017}")

The performance is only ok. For large org files there is a noticeable lag in updating the database, which matters because Emacs is blocked while updating. I could try using an idle timer for updates with a queue (sketched below), or get more clever about when to update. It is not essential that the updates be real-time, only that they are reasonably accurate or done by the time I next search. For now, it is not too annoying. For a better database, I have had my eye on xapian, since that is what mu4e (and notmuch) use. It might also be good to have an external library for parsing org-files, i.e. not through Emacs; it would certainly be faster. It seems like a big project though, maybe next summer ;)
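The idle-timer idea might look something like this; org-db-index-file is a hypothetical name for a function that reindexes one file:

(require 'cl-lib)

(defvar org-db-queue nil
  "Files waiting to be indexed.")

(defun org-db-queue-this-file ()
  "Queue the current file for indexing instead of indexing immediately."
  (when (buffer-file-name)
    (cl-pushnew (buffer-file-name) org-db-queue :test #'equal)))

;; every 5 idle seconds, drain the queue
(run-with-idle-timer 5 t
                     (lambda ()
                       (while org-db-queue
                         (org-db-index-file (pop org-db-queue)))))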

Another feature this might benefit from is ignore patterns, or some file feature that prevents a file from being indexed. For example, I keep an encrypted password file in org-mode, but as soon as I opened it, it got indexed right into the database, in plain text. If you walk your file system, it might also make sense to avoid some directories, like .dropbox.cache. Otherwise, this still looks like a promising approach.
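As a postscript, an ignore mechanism could be as simple as a list of regexps checked before indexing. Here is a sketch; the variable name and patterns are illustrative:

(require 'cl-lib)

(defvar org-db-ignore-patterns '("\\.gpg\\'" "passwords" "\\.dropbox\\.cache")
  "Regexps of file names that should never be indexed.")

(defun org-db-ignore-p (fname)
  "Non-nil if FNAME matches a pattern in `org-db-ignore-patterns'."
  (cl-some (lambda (pat) (string-match-p pat fname)) org-db-ignore-patterns))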

Copyright (C) 2017 by John Kitchin. See the License for information about copying.


Org-mode version = 9.0.3


Context-specific org-mode speed keys

| categories: orgmode, emacs | tags:

I have been using org-mode to make a contact database. A contact is basically just a headline with an EMAIL property, e.g. https://julien.danjou.info/projects/emacs-packages#org-contacts. I thought it would be nice to have an org-mode speed key so that if I was at the beginning of a contact headline, I could just press "e" to open an email buffer to that contact. This might generally be useful to have different speed keys that serve different purposes or are only defined on specific types of headlines.

Org-mode already has this feature in mind for speed keys. All you have to do is define the list of speed keys and their functions, provide a function that picks the right one, and add it to org-speed-command-hook. Here is the code that makes this possible. It defines "c" to copy the email to the clipboard, "e" to open an email buffer to the contact, and "m" to copy a "name <email>" string to the clipboard, but only when you are on a headline with an EMAIL property. If no contact-specific speed key is defined, then a user-defined speed key or a default key will be used, if one is defined. In case I do not remember the keys, "?" will show them to me. It is a small hack, but if you end up using contact headlines much, it might be really helpful as an alternative to M-x some-contacts-command.

(setq org-speed-commands-contacts
      '(("c" . (lambda ()
                 "Copy the email address to the clipboard."
                 (message (kill-new (org-entry-get (point) "EMAIL")))))
        ("e" . (lambda ()
                 "Send an email to the contact."
                 (let ((email (org-entry-get (point) "EMAIL")))
                   (compose-mail)
                   (message-goto-to)
                   (insert email)
                   (message-goto-subject)))) 
        ("m" . (lambda ()
                 "Copy \"name <email>\""
                 (message (kill-new
                           (format "%s <%s>"
                                   (nth 4 (org-heading-components))
                                   (org-entry-get (point) "EMAIL"))))))
        ("?" . (lambda ()
                 "Print contacts speed key help."
                 (with-output-to-temp-buffer "*Help*"
                   (princ "Contacts Speed commands\n===========================\n")
                   (mapc #'org-print-speed-command org-speed-commands-contacts)
                   (princ "\n")
                   (princ "User-defined Speed commands\n===========================\n")
                   (mapc #'org-print-speed-command org-speed-commands-user)
                   (princ "Built-in Speed commands\n=======================\n")
                   (mapc #'org-print-speed-command org-speed-commands-default))
                 (with-current-buffer "*Help*"
                   (setq truncate-lines t))))))

(defun org-speed-contacts (keys)
  "Return the contact speed command bound to KEYS.
Only applies at the beginning of a headline that has an EMAIL property."
  (when (and (bolp) (looking-at org-outline-regexp)
             (not (null (org-entry-get (point) "EMAIL"))))
    (cdr (assoc keys org-speed-commands-contacts))))

(add-hook 'org-speed-command-hook 'org-speed-contacts)

Copyright (C) 2016 by John Kitchin. See the License for information about copying.


Org-mode version = 9.0
