## A simple emacs-lisp interface to CRUD operations in mongodb


In this post I showed that MongoDB is pretty easy to use for simple database applications. I showed a way to get data out of the database that is native to Emacs, but to use Mongo comfortably in emacs applications, it would be really helpful to be able to create, read, update and delete (CRUD) entries. There is a minimal interface to mongodb for emacs-lisp here: https://github.com/m2ym/mongo-el. From what I can see, it seems limited to simple, single queries, and it is written with advanced features of emacs-lisp I do not understand well enough to extend it. In the last post, I showed an easy way to use mongoexport to get data from a query out of a mongo database. Here I explore a similar approach to round out the CRUD operations for using emacs-lisp to work with mongodb. This will enable Emacs to easily use MongoDB in applications.

We use the mongo cli with the --eval option, which allows you to run commands on the database. The basic idea is to generate the json we need from a lisp data structure, and use that json in mongo commands as needed. This sounds simple, but below you will see there are plenty of corners to take care of.
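The shape of these shell commands can be sketched outside of Emacs too. Here is a small Python illustration (Python appears later in this post); the helper name and the echoed command are my own, just to show the string we end up handing to the shell:

```python
import json

def mongo_eval_cmd(db, collection, operation, document):
    """Compose a mongo --eval shell command for one collection operation.

    This only builds the string; it does not run the mongo cli, so it
    can be inspected without a running server.
    """
    doc_json = json.dumps(document)
    return "mongo {db} --quiet --eval 'db.{coll}.{op}({doc})'".format(
        db=db, coll=collection, op=operation, doc=doc_json)

cmd = mongo_eval_cmd("contacts", "contacts", "insert",
                     {"first-name": "John", "last-name": "Kitchin"})
print(cmd)
# mongo contacts --quiet --eval 'db.contacts.insert({"first-name": "John", "last-name": "Kitchin"})'
```

The elisp below does exactly this: json-encode the lisp data, splice it into a format string, and run it with shell-command-to-string.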

The goal here is to get something that is pretty functional. It will not be able to support all the capabilities of MongoDB and the options available in the cli.

## 1 Inserting entries

Here we insert a document into the contacts collection of the contacts database. As in the Python example we considered earlier, this database is automatically created when we run this command.

(require 'json)
(let* ((json (json-encode '((first-name . "John")
                            (last-name . "Kitchin")
                            (email . "jkitchin@cmu.edu"))))
       (cmd (format "mongo 127.0.0.1/contacts --quiet --eval 'db.contacts.insert(%s)'"
                    json)))
  (shell-command-to-string cmd))

json


Here is a function we can use for inserting, and as you can see it works for multiple inserts too. There is a limit on how long the json string can be, so you cannot add too many entries at once this way. I do not know what the limit is, and suspect it is related to using a shell command. When this succeeds, data is returned about what happened, which we try to get in lisp form. I also noticed I had to do a little bit of escaping: entries containing a single quote mess up the quoting of the shell command, and the shell did not handle non-ascii characters well. Maybe this could be avoided with a file-based approach, or if we used a pipe to a process.

(defun mongo-insert (db collection document)
  "Insert into DB.COLLECTION the DOCUMENT.
DOCUMENT will be some lisp structure that is converted to json."
  ;; we have to escape any single quotes. This came from
  ;; http://stackoverflow.com/questions/1250079/how-to-escape-single-quotes-within-single-quoted-strings
  (let* ((json (replace-regexp-in-string "'" "'\"'\"'" (json-encode document)))
         ;; it seems utf-8 characters may cause issues. Let's just remove them.
         (json (replace-regexp-in-string "[^[:ascii:]]" "" json))
         (cmd (format "mongo %s --quiet --eval 'db.%s.insert(%s)'"
                      db collection
                      json))
         (output (shell-command-to-string cmd)))
    (cond
     ;; multiple inserts return BulkWriteResult({...})
     ((string-match "BulkWriteResult(" output)
      (json-read-from-string (substring output 16 -2)))
     ;; single inserts return WriteResult({...})
     ((string-match "WriteResult(" output)
      (json-read-from-string (substring output 12 -2)))
     (t
      output))))

mongo-insert


Here it is in action.

(mongo-insert "contacts" "contacts"
              '(((first-name . "John")
                 (last-name . "Kitchin")
                 (email . "jkitchin@cmu.edu"))
                ((first-name . "Someone")
                 (last-name . "Else")
                 ("email" . "someone@out.there"))))

((writeErrors . []) (writeConcernErrors . []) (nInserted . 2) (nUpserted . 0) (nMatched . 0) (nModified . 0) (nRemoved . 0) (upserted . []))


Seems like an ok way to get data from Emacs into a Mongo DB, and we get lisp data returned telling us what happened.
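As an aside, the single-quote escaping that mongo-insert relies on can be checked independently of Emacs and mongo. Here is a Python sketch (the function name is mine) using shlex, which tokenizes like a POSIX shell, to confirm the close-quote/double-quote/reopen trick survives shell parsing:

```python
import shlex

def escape_single_quotes(s):
    """Escape embedded single quotes for a single-quoted shell argument:
    close the quote, emit a double-quoted single quote, then reopen."""
    return s.replace("'", "'\"'\"'")

payload = '{"last-name": "O\'Brien"}'
cmd = "echo '" + escape_single_quotes(payload) + "'"
# The json payload should come back as a single argument with the
# single quote intact.
tokens = shlex.split(cmd)
print(tokens[1] == payload)  # True
```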

## 2 Finding a document

To update documents we need to find them. We would like to find a document by its _id, but we have a small dilemma. The json we need for that has to look like {"_id": ObjectId("587babfaef131d0d4603b3ad")}, where the ObjectId call is not quoted. The json library does not seem to be able to do that, so we have to modify our find code by manipulating the json string after it is generated, with regular expression replacement. It feels hacky, and hopefully there are not many more examples of that; if there are, we will need another approach to generating the json data. Below is the modified find function, also with the projection option. The output is another place we have to tread somewhat lightly with the _id: we have to requote it so that it can be read by emacs. It might make sense to just replace it with the quoted _id string, rather than the ObjectId call. Time will tell.
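The round trip can be seen in miniature with this Python sketch; the regexes here are my own illustration of the idea, not the post's code:

```python
import re

def unquote_objectid(json_str):
    # turn the json string "ObjectId(\"...\")" into a bare ObjectId("...") call
    return re.sub(r'"ObjectId\(\\"(.*?)\\"\)"', r'ObjectId("\1")', json_str)

def requote_objectid(output):
    # quote ObjectId("...") in mongo output so a json reader can parse it
    return re.sub(r'ObjectId\("(.*?)"\)', r'"ObjectId(\\"\1\\")"', output)

query = '{"_id": "ObjectId(\\"587babfaef131d0d4603b3ad\\")"}'
sent = unquote_objectid(query)
print(sent)  # {"_id": ObjectId("587babfaef131d0d4603b3ad")}
print(requote_objectid(sent) == query)  # True
```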

Here we create two helper functions to unquote input, and requote output. We also need some code to make an array of all the results, and put commas between all the results so that we end up with valid json in the output.

(defun mongo-unquote-query (query)
  "Json encode QUERY, and unquote any ObjectId calls.

We don't have syntax for the ObjectId call that mongo wants in
lisp, so a query has to look like this:

'((_id . \"ObjectId(\\\"587babfaef131d0d4603b3ad\\\")\"))

Mongo can't have the quotes around the call, so this function
removes them."
  (replace-regexp-in-string "\"\\(ObjectId(\\\\\"\\(.*?\\)\\\\\")\\)\""
                            "ObjectId(\"\\2\")"
                            (json-encode query)))

(defun mongo-requote-output (output)
  "Add quotes around ObjectId in OUTPUT.
When mongo outputs json, it has unquoted ObjectIds in it that
emacs cannot interpret as json."
  (replace-regexp-in-string
   "ObjectId(\"\\(.*?\\)\")"
   "\"ObjectId(\\\\\"\\1\\\\\")\""
   output))

(defun mongo-find (db collection query &optional projection)
  "Find documents in DB.COLLECTION matching QUERY.
QUERY and the optional PROJECTION are lisp structures that are
converted to json."
  (let* ((query-json (mongo-unquote-query query))
         (projection-json
          (and projection (json-encode projection)))
         (output (mongo-requote-output
                  ;; add [] to make an array of output in json,
                  ;; and separate results by a comma
                  (concat "["
                          (replace-regexp-in-string
                           "\n" ""
                           (shell-command-to-string
                            (format "mongo %s --quiet --eval 'db.%s.find(%s).forEach(function(myDoc) { printjsononeline(myDoc); print( \",\"); })'"
                                    db collection
                                    (if projection
                                        (format "%s, %s" query-json projection-json)
                                      query-json))))
                          "]"))))
    (json-read-from-string output)))

mongo-find


So, finally we can run something like this:

(mongo-find "contacts" "contacts" '((email . "someone@out.there")))

[((_id . "ObjectId(\"587c166cdfcd649d3acf99fd\")") (first-name . "Someone") (last-name . "Else") (email . "someone@out.there")) ((_id . "ObjectId(\"587c16ad410565dd4c16c748\")") (first-name . "Someone") (last-name . "Else") (email . "someone@out.there")) ((_id . "ObjectId(\"587c17550e586b4f8df21de0\")") (first-name . "Someone") (last-name . "Else") (email . "someone@out.there")) ((_id . "ObjectId(\"587c1764d75279a55ffec483\")") (first-name . "Someone") (last-name . "Else") (email . "someone@out.there")) ((_id . "ObjectId(\"587c17743281f1e9d5054396\")") (first-name . "Someone") (last-name . "Else") (email . "someone@out.there")) ((_id . "ObjectId(\"587c178ad92706d2bd5a6e3c\")") (first-name . "Someone") (last-name . "Else") (email . "someone@out.there")) ((_id . "ObjectId(\"587c1794756bb2bd0f0ac499\")") (first-name . "Someone") (last-name . "Else") (email . "someone@out.there"))]


Here is an example usage with a projection that returns only the information you want, in this case, just the id.

(mongo-find "contacts" "contacts" '((email . "someone@out.there"))
            '((_id . 1)))

[((_id . "ObjectId(\"587c166cdfcd649d3acf99fd\")")) ((_id . "ObjectId(\"587c16ad410565dd4c16c748\")")) ((_id . "ObjectId(\"587c17550e586b4f8df21de0\")")) ((_id . "ObjectId(\"587c1764d75279a55ffec483\")")) ((_id . "ObjectId(\"587c17743281f1e9d5054396\")")) ((_id . "ObjectId(\"587c178ad92706d2bd5a6e3c\")")) ((_id . "ObjectId(\"587c1794756bb2bd0f0ac499\")"))]


## 3 Updating an entry

Ok, back to the update. To make sure that we update exactly the document we want, we will use the document _id. First, we define an update command.

(defun mongo-update (db collection query $set)
  "In DB.COLLECTION update records matching QUERY with the contents of $SET."
  (let* ((query-json (mongo-unquote-query query))
         ($set-json (mongo-unquote-query $set))
         (cmd (format "mongo %s --quiet --eval 'db.%s.update(%s, %s)'"
                      db collection
                      query-json $set-json))
         (output (shell-command-to-string cmd)))
    (if (string-match "WriteResult(" output)
        (json-read-from-string (substring output 12 -2))
      output)))

mongo-update


First a reminder of what is in this record.

(mongo-find "contacts" "contacts" '((_id . "ObjectId(\"587c16ad410565dd4c16c748\")")))

[((_id . "ObjectId(\"587c16ad410565dd4c16c748\")") (first-name . "Someone") (last-name . "Else") (email . "someone@out.there"))]


Here we set the email field to a new address. Without $set, the whole document gets replaced.

(mongo-update "contacts" "contacts"
              '((_id . "ObjectId(\"587c16ad410565dd4c16c748\")"))
              '(($set . ((email . "someone@out.there.com")))))

((nMatched . 1) (nUpserted . 0) (nModified . 1))


Finally, let's see the document again to verify it is modified.

(mongo-find "contacts" "contacts" '((_id . "ObjectId(\"587c16ad410565dd4c16c748\")")))

[((_id . "ObjectId(\"587c16ad410565dd4c16c748\")") (first-name . "Someone") (last-name . "Else") (email . "someone@out.there.com"))]


Looks good, you can see it got changed. There is a potential gotcha though. This next command looks like it should do the same thing, but it does not. The whole document gets replaced!

(mongo-update "contacts" "contacts"
              '((_id . "ObjectId(\"587c16ad410565dd4c16c748\")"))
              '((email . "someone@out.there.com")))

((nMatched . 1) (nUpserted . 0) (nModified . 1))

(mongo-find "contacts" "contacts" '((_id . "ObjectId(\"587c16ad410565dd4c16c748\")")))

[((_id . "ObjectId(\"587c16ad410565dd4c16c748\")") (email . "someone@out.there.com"))]


Do not forget the $set operator if you just want to update some fields!

## 4 Deleting a document

Next, let's write a delete function. I will only implement deleteMany here, since you can give it a document id to delete only one, and usually I would want to delete all documents that meet some criteria anyway.

(defun mongo-deleteMany (db collection filter)
  "Delete records in DB.COLLECTION matched by FILTER."
  (let* ((filter-json (mongo-unquote-query filter))
         (cmd (format "mongo %s --quiet --eval 'db.%s.deleteMany(%s)'"
                      db collection
                      filter-json))
         (output (shell-command-to-string cmd)))
    (json-read-from-string output)))

mongo-deleteMany


Since we borked that last document, let's just delete it.

(mongo-deleteMany "contacts" "contacts" '((_id . "ObjectId(\"587be3fa6009a569a277b680\")")))

((acknowledged . t) (deletedCount . 0))


## 5 Generic commands

We may want some flexibility to run collection commands. The most generic command will simply be to write the shell-command completely. We can keep a little syntax by encapsulating most of the boilerplate though. Here is a function for that.

(defun mongo-cmd (db collection cmd &rest args)
  "In DB.COLLECTION run CMD.
ARGS if present will be used to format CMD."
  (shell-command-to-string
   (format "mongo %s --quiet --eval 'db.%s.%s'"
           db collection
           (apply #'format cmd args))))

mongo-cmd


We can get the number of documents with this:

(mongo-cmd "contacts" "contacts" "count()")

4341


Or run a more sophisticated command with arguments like this.

(mongo-cmd "contacts" "contacts" "explain().remove(%s)" (json-encode '(("category" . "enemy"))))

{
    "queryPlanner" : {
        "plannerVersion" : 1,
        "namespace" : "contacts.contacts",
        "indexFilterSet" : false,
        "parsedQuery" : {
            "category" : {
                "$eq" : "enemy"
            }
        },
        "winningPlan" : {
            "stage" : "DELETE",
            "inputStage" : {
                "stage" : "COLLSCAN",
                "filter" : {
                    "category" : {
                        "$eq" : "enemy"
                    }
                },
                "direction" : "forward"
            }
        },
        "rejectedPlans" : [ ]
    },
    "serverInfo" : {
        "host" : "Johns-MacBook-Air.local",
        "port" : 27017,
        "version" : "3.4.1",
        "gitVersion" : "5e103c4f5583e2566a45d740225dc250baacfbd7"
    },
    "ok" : 1
}


Or, drop the collection with:

(mongo-cmd "contacts" "contacts" "drop()")

true


All gone! Note, we do not try to handle the output of any of those, and they are returned as strings.

## 6 A MongoDB contacts database

Now, let's re-populate it for real. I store my contacts in a variable called "contacts", where each contact is a list of a descriptive string followed by cons cells of data. These are actually harvested from a set of org-files. It is way too slow to parse these files each time, so I keep the contacts cached in memory and only update them if a file changes.

(length contacts)

6047


There are over 6000 contacts. Let's put them in a MongoDB.

Here is a limitation of our approach. This will not work because the generated shell command ends up being too long for the shell.

(mongo-insert "contacts" "contacts"
              (loop for contact in contacts
                    collect
                    (append `((desc . ,(car contact))) (cdr contact))))
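I suspect the limit is the operating system's maximum argument length for exec, though that is my assumption, not something measured in this post. On a POSIX system you can query it, for example from Python:

```python
import os

# SC_ARG_MAX is the kernel's limit on the combined size of argv and
# the environment passed to exec(); a shell command longer than this
# fails before mongo ever sees it.
print(os.sysconf('SC_ARG_MAX'))
```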


So, we do them one at a time here:

(let ((ct (current-time)))
  (loop for contact in contacts
        do
        (let ((output (mongo-insert "contacts" "contacts"
                                    (append `((desc . ,(car contact))) (cdr contact)))))
          (unless (= 1 (cdr (assoc 'nInserted output)))
            (warn "error: %S for %S" (cdr (assoc 'nInserted output)) contact))))
  (message "Elapsed time %.02f seconds" (float-time (time-since ct))))

Elapsed time 762.95 seconds


That took about 13 minutes to add. That seems long to me. This next step confirms that they were added.

(mongo-cmd "contacts" "contacts" "count()")

6047


Next we will compare some timing of finding data in the database vs looping through the cached contacts. Here is a timing macro to measure how long it takes to run a bit of code.

;; http://stackoverflow.com/questions/23622296/emacs-timing-execution-of-function-calls-in-emacs-lisp
(defmacro measure-time (&rest body)
  "Measure the time it takes to evaluate BODY."
  `(let ((time (current-time)))
     ,@body
     (message "%.06f seconds elapsed" (float-time (time-since time)))))

measure-time


Here is the old way I would extract data. Many contacts I have are academics, and I have stored their academic ranks in each contact.

(loop for contact in contacts
      if (string= "Professor" (cdr (assoc "RANK" (cdr contact))))
      collect contact into professors
      if (string= "Associate Professor" (cdr (assoc "RANK" (cdr contact))))
      collect contact into associate-professors
      if (string= "Assistant Professor" (cdr (assoc "RANK" (cdr contact))))
      collect contact into assistant-professors
      finally return `(("Assistant Professor" ,(length assistant-professors))
                       ("Associate Professor" ,(length associate-professors))
                       ("Professor" ,(length professors))))

| Assistant Professor | 313 |
| Associate Professor | 283 |
| Professor           | 879 |

How long did it take to do that?

(measure-time
 (loop for contact in contacts
       if (string= "Professor" (cdr (assoc "RANK" (cdr contact))))
       collect contact into professors
       if (string= "Associate Professor" (cdr (assoc "RANK" (cdr contact))))
       collect contact into associate-professors
       if (string= "Assistant Professor" (cdr (assoc "RANK" (cdr contact))))
       collect contact into assistant-professors
       finally return (list (length assistant-professors)
                            (length associate-professors)
                            (length professors))))

0.008772 seconds elapsed


Not long at all! Comparatively, it is very slow to get this information out of mongodb, although considerably less code is required. That might not be surprising, considering the json parsing that has to get done here.

Here is the equivalent code to extract that data from the database.

(loop for rank in '("Assistant Professor" "Associate Professor" "Professor")
      collect (list rank (length (mongo-find "contacts" "contacts"
                                             `((RANK . ,rank))))))

| Assistant Professor | 313 |
| Associate Professor | 283 |
| Professor           | 879 |

It is comparatively slow to do this. This requires three json parses, and profiling indicates that a lot of the work is done in parsing the json.

(measure-time
 (loop for rank in '("Assistant Professor" "Associate Professor" "Professor")
       collect (list rank (length (mongo-find "contacts" "contacts"
                                              `((RANK . ,rank)))))))

1.914817 seconds elapsed


Here is a smarter way to do it that avoids the json parsing.

(loop for rank in '("Assistant Professor" "Associate Professor" "Professor")
      collect (list rank (mongo-cmd "contacts" "contacts" "count(%s)"
                                    (json-encode `((RANK . ,rank))))))

| Assistant Professor | 313 |
| Associate Professor | 283 |
| Professor           | 879 |

And you can see here it is about five times faster, but not nearly as fast as running the lisp code on the cache.

(measure-time
 (loop for rank in '("Assistant Professor" "Associate Professor" "Professor")
       collect (list rank (mongo-cmd "contacts" "contacts" "count(%s)"
                                     (json-encode `((RANK . ,rank)))))))

0.349413 seconds elapsed


This is how you might integrate this into a completion command:

(ivy-read "choose: "
          (loop for c across (mongo-find "contacts" "contacts" "")
                collect
                (list (cdr (assoc 'desc c)) c)))


This is basically unusable though, because it takes so long to generate the candidates (over six seconds).

(measure-time
 (loop for c across (mongo-find "contacts" "contacts" "")
       collect
       (list (cdr (assoc 'desc c)) c)))

6.228225 seconds elapsed


We can get back to usable by making the database do more work for us. Here, we simply make the database print a list of cons cells that we can read into lisp. We have to use a javascript function, with some escaping and quoting. The escaping was necessary because there is some bad data in the email field that messed up the cons cells, e.g. things like "name" <email> with nested single and double quoting. The quoting was necessary to get cons cells of the form ("desc" . "email"). Finally, we wrap the whole output in parentheses and read back the list of cons cells. At about a quarter of a second, this is very usable for getting a list of over 6000 candidates. It is still many times slower than working on the contacts list in memory though. I am not a super fan of the one-line javascript, and if it were much more complicated than this, another strategy would probably be desirable.

(measure-time
 (read
  (concat
   "("
   (shell-command-to-string "mongo contacts --quiet --eval 'db.contacts.find().forEach(function (doc) {print(\"(\\\"\" + doc.desc + \"\\\" . \\\"\" + escape(doc.EMAIL) +\"\\\")\");})'")
   ")")))

0.284730 seconds elapsed
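One assumption worth noting: javascript's escape() percent-encodes the troublesome characters, and for plain ASCII text a url-style unquote should reverse it on the Emacs side. Here is a quick Python check of that round trip, with quote() standing in for escape():

```python
from urllib.parse import quote, unquote

# quote() stands in for javascript's escape(): both replace the
# troublesome characters (quotes, angle brackets, ...) with %XX
# sequences that are safe inside a printed cons cell.
s = '"John Kitchin" <jkitchin@andrew.cmu.edu>'
escaped = quote(s, safe='')
print(escaped)
print(unquote(escaped) == s)  # True
```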


## 7 Text searching

Finally, let us make a text index to make searching easy. This allows a very flexible search where we do not have to specify a field or use regular expressions. We set up the index on all the fields, so we can find entries that match even on fields that do not exist in all documents.

(mongo-cmd "contacts" "contacts" "createIndex(%s)" (json-encode '(("$**" . "text"))))

{
    "createdCollectionAutomatically" : false,
    "numIndexesBefore" : 1,
    "numIndexesAfter" : 2,
    "ok" : 1
}


Now, let's use that to find the GOOGLE-SCHOLAR url of contacts matching the following query.

(mongo-find "contacts" "contacts"
            '(($text . (($search . "\"Carnegie Mellon\"")))
              ($text . (($search . "\"John Kitchin\""))))
            '((GOOGLE-SCHOLAR . 1) (_id . 0)))

[((GOOGLE-SCHOLAR . "https://scholar.google.com/citations?hl=en&user=jD_4h7sAAAAJ")) nil nil]


So, you can see there were three contacts, and one of them lists my google-scholar url.

## 8 Summary

This looks like the foundation of a mongo/emacs-lisp interface. This interface is not that fast though, and suffers from some limitations related to the use of the shell. Depending on the actual use, it is clear you can gain performance by passing some work to the database, which requires some javascript coding. Even that revealed some subtlety, e.g. making sure the database output text that was compatible with the lisp reader. That mostly means taking care of quotes and other special characters, which I managed with a simple escape mechanism. It is probably worth investing a few more days in building an interface that uses a process and communicates with the mongo cli directly before moving forward with any significant application that uses Mongo in emacs. There are many good ideas for that:

1. Index all your org files (e.g. http://kitchingroup.cheme.cmu.edu/blog/2017/01/03/Find-stuff-in-org-mode-anywhere/)
2. Index all your bibtex files (e.g. http://kitchingroup.cheme.cmu.edu/blog/2017/01/15/Querying-a-MongoDB-bibtex-database-with-Python-and-emacs-lisp/)
3. Download RSS feeds into a searchable database
4. Manage your contacts
5. Index your email? mu and notmuch use xapian for this, but I have found they cannot search for things like hashtags. Maybe MongoDB would be better?

The tradeoffs between this and sqlite are clearer now.
With Mongo we do not have to create the normalized tables (although it is still a good idea to think about how to structure the documents, and whether they should be a little normalized). It is much easier to map lisp data structures to Mongo queries than it is to do that with SQL queries. On the other hand, it is necessary to do some javascript programming with Mongo to get some desired output. It still seems worth exploring further.

Copyright (C) 2017 by John Kitchin. See the License for information about copying. Org-mode version = 9.0.3

## Querying a MongoDB bibtex database with Python and emacs-lisp

I have been exploring using databases to help with searching my data. In this post we explore using MongoDB for bibtex entries. I am choosing bibtex entries because it is easy to parse bibtex files, I already have a lot of them, and I have several kinds of queries I regularly use. So, they are a good candidate to test out a new database on!

MongoDB is a noSQL database that is pretty easy to use. I installed it from homebrew, and then followed the directions to run the server. With pymongo you can make a database as easily as this:

import bibtexparser

# Read the bibtex file to get entries
with open('../../../Dropbox/bibliography/references.bib', 'r') as bibfile:
    bp = bibtexparser.load(bibfile)
    entries = bp.entries

print("N = ", len(entries))
print(entries[0])

import pymongo
from pymongo import MongoClient
client = MongoClient('localhost', 27017)

# This creates the "entries" collection
db = client['bibtex'].entries

# add each entry
for entry in entries:
    db.insert_one(entry)

N =  1671
{'keyword': 'test, word', 'year': '2006', 'publisher': 'American Chemical Society (ACS)', 'title': 'The ACS Style Guide', 'ENTRYTYPE': 'book', 'editor': 'Janet S. Dodd', 'address': 'Washington, D.C.', 'ID': '2006-acs-style-guide', 'doi': '10.1021/bk-2006-styg', 'link': 'https://doi.org/10.1021/bk-2006-STYG', 'date_added': 'Wed Apr 1 10:17:54 2015', 'pages': 'nil'}


That was easy. We have a database with 1671 documents in it, and each document is essentially a dictionary of key-value pairs. You might even argue it was too easy. I didn't specify any structure for the entries at all: no required fields, no validation that the keys are spelled correctly, no validation on the values, e.g. you can see the year looks like a string. The benefit of that is that every entry went in with no issues. On the other hand, the authors went in as a single string, as did the keywords, which affects our ability to search a little bit later. Note that if you run that twice, it will add each entry again, since we do not check if the entry already exists.

A database is only useful, though, if it is easy to get stuff out of it. So, let's consider some test queries. First we find entries with years less than 1950. The query is basically a little json bundle that describes a field and condition we want to match; here we use the less-than operator, "$lt". The results come back as a list of dictionaries. This is in stark contrast to a SQL query, which is an expression in its own declarative language; a query here is a chunk of data that must get converted to code by the server. I am not 100% clear if the less than here is in the string sense or the numeric sense, but for years it probably does not matter for a long time.

import pymongo
from pymongo import MongoClient
client = MongoClient('localhost', 27017)

db = client['bibtex'].entries

for i, result in enumerate(db.find({"year" : {"$lt": "1950"}})):
    print('{i: 2d}. {author}, {title}, {journal}, {year}.'.format(i=i+1, **result))

 1. Birch, Francis, Finite Elastic Strain of Cubic Crystals, Phys. Rev., 1947.
 2. Ditchburn, R. W. and Gilmour, J. C., The Vapor Pressures of Monatomic Vapors, Rev. Mod. Phys., 1941.
 3. J. Korringa, On the Calculation of the Energy of a Bloch Wave in a Metal, Physica, 1947.
 4. Nix, F. C. and MacNair, D., The Thermal Expansion of Pure Metals. {II}: Molybdenum, Palladium, Silver, Tantalum, Tungsten, Platinum, and Lead, Phys. Rev., 1942.


That seems easy enough, and those strings could easily be used as candidates for a selection tool like helm. How about articles published by myself and my student Jacob Boes? This requires "and" logic. "And" is the default for conditions on different fields, but since both regular expression matches are on the author field, we combine them explicitly with the $and operator (in a plain dict, two "author" keys would silently collapse to one). One condition is an exact match on articles, and the other two are case-insensitive regular expression matches. I guess this has to be done on every document, since there probably is no way to index a regex match! This search was very fast, but it is not clear how fast it would be for a million entries. This matching is necessary because we stored all authors in a single field rather than splitting them into an array. We might still have to match strings even with an array, since an author might then be "John R. Kitchin" rather than further decomposed into first and last names.

import pymongo
from pymongo import MongoClient
client = MongoClient('localhost', 27017)

db = client['bibtex']
entries = db['entries']

for i, result in enumerate(entries.find({"ENTRYTYPE": "article",
                                         "$and": [{"author": {"$regex": "kitchin", "$options": "i"}},
                                                  {"author": {"$regex": "boes", "$options": "i"}}]})):
    if result.get('doi', None):
        result['doi'] = 'https://doi.org/{doi}'.format(doi=result['doi'])
    else:
        result['doi'] = ''
    print('{i: 2d}. {author}, {title}, {journal}, {year}. {doi}'.format(i=i+1, **result).replace("\n", ""))

 1. Jacob R. Boes and Peter Kondratyuk and Chunrong Yin and JamesB. Miller and Andrew J. Gellman and John R. Kitchin, Core Level Shifts in {Cu-Pd} Alloys As a Function of BulkComposition and Structure, Surface Science, 2015. https://doi.org/10.1016/j.susc.2015.02.011
 2. Jacob R. Boes and Gamze Gumuslu and James B. Miller and AndrewJ. Gellman and John R. Kitchin, Estimating Bulk-Composition-Dependent \ce{H2} AdsorptionEnergies on \ce{CuxPd1-x} Alloy (111) Surfaces, ACS Catalysis, 2015. https://doi.org/10.1021/cs501585k
 3. Jacob R. Boes and Gamze Gumuslu and James B. Miller and AndrewJ. Gellman and John R. Kitchin, Supporting Information: Estimating Bulk-Composition-Dependent\ce{H2} Adsorption Energies on \ce{CuxPd1-x} Alloy (111)Surfaces, ACS Catalysis, 2015. https://doi.org/10.1021/cs501585k
 4. G. Gumuslu and P. Kondratyuk and J. R. Boes and B. Morrealeand J. B. Miller and J. R. Kitchin and A. J. Gellman, Correlation of Electronic Structure With Catalytic Activity:\ce{H2}-\ce{D2} Exchange Across \ce{CuxPd1-x}Composition Space, ACS Catalysis, 2015. https://doi.org/10.1021/cs501586t
 5. John D. Michael and Ethan L. Demeter and Steven M. Illes andQingqi Fan and Jacob R. Boes and John R. Kitchin, Alkaline Electrolyte and {Fe} Impurity Effects on thePerformance and Active-Phase Structure of {NiOOH} Thin Filmsfor {OER} Catalysis Applications, J. Phys. Chem. C, 2015. https://doi.org/10.1021/acs.jpcc.5b02458
 6. Jacob R. Boes and Mitchell C. Groenenboom and John A. Keithand John R. Kitchin, Neural Network and {Reaxff} Comparison for {Au} Properties, Int. J. Quantum Chem., 2016. https://doi.org/10.1002/qua.25115
 7. Jacob R. Boes and John R. Kitchin, Neural Network Predictions of Oxygen Interactions on a Dynamic Pd Surface, Molecular Simulation, Accepted 12/2016. https://doi.org/10.1080/08927022.2016.1274984
 8. Jacob R. Boes and John R. Kitchin, Modeling Segregation on {AuPd}(111) Surfaces With DensityFunctional Theory and Monte Carlo Simulations, Submitted to J. Phys. Chem. C, 2016.


We can find out how many different entry types we have, as well as how many distinct keyword entries there are. The documents do not separate the keywords though, so this is just the unique strings of comma-separated keywords values. We would have had to split those in advance to have a list of keywords to search for a specific one beyond string matching. Curiously, in my bibtex entries, these are in a field called "keywords". It appears the bibtex parser may have changed the name to "keyword".

import pymongo
from pymongo import MongoClient
client = MongoClient('localhost', 27017)

db = client['bibtex']
entries = db['entries']

print(entries.distinct("ENTRYTYPE"))
print(len(entries.distinct("keyword")))
print(entries.find({"keyword": {"$exists": "true"}})[22]['keyword'])

['book', 'article', 'techreport', 'phdthesis', 'inproceedings', 'inbook', 'mastersthesis', 'misc', 'incollection']
176
Bildungsw{\"a}rmen, Dichtefunktionalrechnungen, Perowskite, Thermochemie

## 1 Text searching

You can do text search as well. You first have to create an index on one or more fields, and then use the $text and $search operators. Here I made an index on a few fields, and then searched on it. Note that you can only have one text index, so think about it in advance! This simplifies the query a bit: we do not have to use the regex syntax for matching on a field.

import pymongo
from pymongo import MongoClient
client = MongoClient('localhost', 27017)

db = client['bibtex']
entries = db['entries']

entries.create_index([('author', pymongo.TEXT),
                      ('title', pymongo.TEXT),
                      ('keyword', pymongo.TEXT)], sparse=True)

for i, result in enumerate(entries.find({"$text" : {"$search": "\"kitchin\" \"boes\""}})):
    print('{i: 2d}. {author}, {title}, {journal}, {year}.'.format(i=i+1, **result).replace("\n", ""))

 1. G. Gumuslu and P. Kondratyuk and J. R. Boes and B. Morrealeand J. B. Miller and J. R. Kitchin and A. J. Gellman, Correlation of Electronic Structure With Catalytic Activity:\ce{H2}-\ce{D2} Exchange Across \ce{CuxPd1-x}Composition Space, ACS Catalysis, 2015.
 2. Jacob R. Boes and Peter Kondratyuk and Chunrong Yin and JamesB. Miller and Andrew J. Gellman and John R. Kitchin, Core Level Shifts in {Cu-Pd} Alloys As a Function of BulkComposition and Structure, Surface Science, 2015.
 3. Jacob R. Boes and Gamze Gumuslu and James B. Miller and AndrewJ. Gellman and John R. Kitchin, Estimating Bulk-Composition-Dependent \ce{H2} AdsorptionEnergies on \ce{CuxPd1-x} Alloy (111) Surfaces, ACS Catalysis, 2015.
 4. Jacob R. Boes and John R. Kitchin, Neural Network Predictions of Oxygen Interactions on a Dynamic Pd Surface, Molecular Simulation, Accepted 12/2016.
 5. Jacob R. Boes and John R. Kitchin, Modeling Segregation on {AuPd}(111) Surfaces With DensityFunctional Theory and Monte Carlo Simulations, Submitted to J. Phys. Chem. C, 2016.
 6. Jacob R. Boes and Gamze Gumuslu and James B. Miller and AndrewJ. Gellman and John R. Kitchin, Supporting Information: Estimating Bulk-Composition-Dependent\ce{H2} Adsorption Energies on \ce{CuxPd1-x} Alloy (111)Surfaces, ACS Catalysis, 2015.
 7. John D. Michael and Ethan L. Demeter and Steven M. Illes andQingqi Fan and Jacob R. Boes and John R. Kitchin, Alkaline Electrolyte and {Fe} Impurity Effects on thePerformance and Active-Phase Structure of {NiOOH} Thin Filmsfor {OER} Catalysis Applications, J. Phys. Chem. C, 2015.
 8. Jacob R. Boes and Mitchell C. Groenenboom and John A. Keithand John R. Kitchin, Neural Network and {Reaxff} Comparison for {Au} Properties, Int. J. Quantum Chem., 2016.
We can use this to search for documents with orgmode in a keyword or title too.

import pymongo
from pymongo import MongoClient
client = MongoClient('localhost', 27017)

db = client['bibtex']
entries = db['entries']

entries.create_index([('author', pymongo.TEXT),
                      ('title', pymongo.TEXT),
                      ('keyword', pymongo.TEXT)], sparse=True)

for i, result in enumerate(entries.find({"$text" : {"$search": "orgmode"}})):
    print('{i: 2d}. {author}, {title}, {journal}, {year}.'.format(i=i+1, **result).replace("\n", ""))

 1. John R. Kitchin, Data Sharing in Surface Science, Surface Science, 2016.
 2. Zhongnan Xu and John R. Kitchin, Probing the Coverage Dependence of Site and AdsorbateConfigurational Correlations on (111) Surfaces of LateTransition Metals, J. Phys. Chem. C, 2014.
 3. Xu, Zhongnan and Rossmeisl, Jan and Kitchin, John R., A Linear Response {DFT}+{U} Study of Trends in the OxygenEvolution Activity of Transition Metal Rutile Dioxides, The Journal of Physical Chemistry C, 2015.
 4. Prateek Mehta and Paul A. Salvador and John R. Kitchin, Identifying Potential \ce{BO2} Oxide Polymorphs for EpitaxialGrowth Candidates, ACS Appl. Mater. Interfaces, 2015.
 5. Xu, Zhongnan and Joshi, Yogesh V. and Raman, Sumathy andKitchin, John R., Accurate Electronic and Chemical Properties of 3d TransitionMetal Oxides Using a Calculated Linear Response {U} and a {DFT+ U(V)} Method, The Journal of Chemical Physics, 2015.
 6. Zhongnan Xu and John R. Kitchin, Relationships Between the Surface Electronic and ChemicalProperties of Doped 4d and 5d Late Transition Metal Dioxides, The Journal of Chemical Physics, 2015.
 7. Zhongnan Xu and John R Kitchin, Tuning Oxide Activity Through Modification of the Crystal andElectronic Structure: From Strain To Potential Polymorphs, Phys. Chem. Chem. Phys., 2015.
 8. Jacob R. Boes and Gamze Gumuslu and James B. Miller and AndrewJ. Gellman and John R. Kitchin, Supporting Information: Estimating Bulk-Composition-Dependent\ce{H2} Adsorption Energies on \ce{CuxPd1-x} Alloy (111)Surfaces, ACS Catalysis, 2015.
 9. Kitchin, John R., Examples of Effective Data Sharing in Scientific Publishing, ACS Catalysis, 2015.
10. Curnan, Matthew T. and Kitchin, John R., Effects of Concentration, Crystal Structure, Magnetism, andElectronic Structure Method on First-Principles Oxygen VacancyFormation Energy Trends in Perovskites, The Journal of Physical Chemistry C, 2014.
11. Kitchin, John R. and Van Gulick, Ana E. and Zilinski, Lisa D., Automating Data Sharing Through Authoring Tools, International Journal on Digital Libraries, 2016.
12. Jacob R. Boes and Gamze Gumuslu and James B. Miller and AndrewJ. Gellman and John R. Kitchin, Estimating Bulk-Composition-Dependent \ce{H2} AdsorptionEnergies on \ce{CuxPd1-x} Alloy (111) Surfaces, ACS Catalysis, 2015.
13. Zhongnan Xu and John R. Kitchin, Relating the Electronic Structure and Reactivity of the 3dTransition Metal Monoxide Surfaces, Catalysis Communications, 2014.
14. Spencer D. Miller and Vladimir V. Pushkarev and AndrewJ. Gellman and John R. Kitchin, Simulating Temperature Programmed Desorption of Oxygen on{P}t(111) Using {DFT} Derived Coverage Dependent DesorptionBarriers, Topics in Catalysis, 2014.
15. Hallenbeck, Alexander P. and Kitchin, John R., Effects of \ce{O_2} and \ce{SO_2} on the Capture Capacity of aPrimary-Amine Based Polymeric \ce{CO_2} Sorbent, Industrial \& Engineering Chemistry Research, 2013.


## 2 Querying from emacs-lisp

It is hard to get too excited about this if it is not easy to query from emacs and get data in a form we can use in emacs ;) The json library allows us to convert lisp data structures to json pretty easily. For example:

(require 'json)
(json-encode '((ENTRYTYPE . article)
               (author . (($regex . kitchin)
                          ($options . i)))
               (author . (($regex . boes)
                          ($options . i)))))

{"ENTRYTYPE":"article","author":{"$regex":"kitchin","$options":"i"},"author":{"$regex":"boes","$options":"i"}}


So, we can use an a-list syntax to build up the query. Then we can send it to mongo using mongoexport, which will return a json string that we can read back into emacs to get lisp data. Here is an example that runs a query. We print the first element here.

(pp (aref (json-read-from-string
           (shell-command-to-string
            (format "mongoexport --quiet --jsonArray -d bibtex -c entries -q '%s'"
                    (json-encode '((ENTRYTYPE . article)
                                   (author . (($regex . kitchin)
($options . i))) (author . (($regex . boes)
($options . i)))))))) 0))  ((_id ($oid . "5878d9644c114f59fe86cb36"))
(author . "Jacob R. Boes and Peter Kondratyuk and Chunrong Yin and James\nB. Miller and Andrew J. Gellman and John R. Kitchin")
(year . "2015")
(title . "Core Level Shifts in {Cu-Pd} Alloys As a Function of Bulk\nComposition and Structure")
(ENTRYTYPE . "article")
(ID . "boes-2015-core-cu")
(keyword . "DESC0004031, early-career")
(volume . "640")
(doi . "10.1016/j.susc.2015.02.011")
(issn . "0039-6028")
(journal . "Surface Science")
(pages . "127-132"))


That is pretty sweet, we get a lisp data structure we can use. We can wrap that into a reasonable looking function here:

(defun mongo-find (db collection query)
  (json-read-from-string
   (shell-command-to-string
    (format "mongoexport --quiet --jsonArray -d %s -c %s -q '%s'"
            db collection (json-encode query)))))

mongo-find
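Since mongo-find returns a vector of alists, individual fields come out with ordinary alist accessors. A quick sketch, using the entry we printed above:

```emacs-lisp
;; A quick sketch: pull one matching document and extract two fields.
;; Assumes the bibtex/entries collection and the mongo-find function above.
(require 'json)

(let ((entry (aref (mongo-find "bibtex" "entries"
                               '((ID . "boes-2015-core-cu")))
                   0)))
  (cons (cdr (assoc 'title entry))
        (cdr (assoc 'year entry))))
```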


Now we can use the function to query the database, and then format the results. Here we look at the example of articles with authors that match "kitchin" and "boes".

(loop for counter from 1 for entry across
      (mongo-find "bibtex" "entries" '((ENTRYTYPE . article)
                                       (author . (($regex . kitchin) ($options . i)))
                                       (author . (($regex . boes) ($options . i)))))
      do
      (setq entry (append `(,(cons "counter" counter)) entry))
      ;; make sure we have a doi field.
      (if (assoc 'doi entry)
          (push (cons "doi" (format "https://doi.org/%s" (cdr (assoc 'doi entry)))) entry)
        (push (cons "doi" "") entry))
      concat
      (concat (replace-regexp-in-string
               "\n" " "
               (s-format "${counter}. ${author}, ${title} (${year}). ${doi}"
                         'aget entry))
              "\n"))

1. Jacob R. Boes and Peter Kondratyuk and Chunrong Yin and James B. Miller and Andrew J. Gellman and John R. Kitchin, Core Level Shifts in {Cu-Pd} Alloys As a Function of Bulk Composition and Structure (2015). https://doi.org/10.1016/j.susc.2015.02.011
2. Jacob R. Boes and Gamze Gumuslu and James B. Miller and Andrew J. Gellman and John R. Kitchin, Estimating Bulk-Composition-Dependent \ce{H2} Adsorption Energies on \ce{Cu_{x}Pd_{1-x}} Alloy (111) Surfaces (2015). https://doi.org/10.1021/cs501585k
3. Jacob R. Boes and Gamze Gumuslu and James B. Miller and Andrew J. Gellman and John R. Kitchin, Supporting Information: Estimating Bulk-Composition-Dependent \ce{H2} Adsorption Energies on \ce{Cu_{x}Pd_{1-x}} Alloy (111) Surfaces (2015). https://doi.org/10.1021/cs501585k
4. G. Gumuslu and P. Kondratyuk and J. R. Boes and B. Morreale and J. B. Miller and J. R. Kitchin and A. J. Gellman, Correlation of Electronic Structure With Catalytic Activity: \ce{H2}-\ce{D2} Exchange Across \ce{Cu_{x}Pd_{1-x}} Composition Space (2015). https://doi.org/10.1021/cs501586t
5. John D. Michael and Ethan L. Demeter and Steven M. Illes and Qingqi Fan and Jacob R. Boes and John R. Kitchin, Alkaline Electrolyte and {Fe} Impurity Effects on the Performance and Active-Phase Structure of {NiOOH} Thin Films for {OER} Catalysis Applications (2015). https://doi.org/10.1021/acs.jpcc.5b02458
6. Jacob R. Boes and Mitchell C. Groenenboom and John A. Keith and John R. Kitchin, Neural Network and {Reaxff} Comparison for {Au} Properties (2016). https://doi.org/10.1002/qua.25115
7. Jacob R. Boes and John R. Kitchin, Neural Network Predictions of Oxygen Interactions on a Dynamic Pd Surface (Accepted 12/2016). https://doi.org/10.1080/08927022.2016.1274984
8. Jacob R. Boes and John R. Kitchin, Modeling Segregation on {AuPd}(111) Surfaces With Density Functional Theory and Monte Carlo Simulations (2016).


Wow, that looks like a pretty lispy way to query the database and use the results. It is probably pretty easy to do similar things for inserting and updating documents. I will save that for another day.
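As a teaser, an update function could follow the same --eval pattern as the insert example in the previous section. This is an untested sketch; mongo-update is my name for it, not part of any library, and the same shell quoting/escaping caveats apply:

```emacs-lisp
;; An untested sketch: update documents matching QUERY in COLLECTION by
;; setting the fields in CHANGES, via the mongo cli --eval option.
(require 'json)

(defun mongo-update (db collection query changes)
  (shell-command-to-string
   (format "mongo 127.0.0.1/%s --quiet --eval 'db.%s.update(%s, {$set: %s})'"
           db collection (json-encode query) (json-encode changes))))

;; Hypothetical usage, with the contacts database from earlier:
;; (mongo-update "contacts" "contacts"
;;               '((email . "jkitchin@cmu.edu"))
;;               '((last-name . "Kitchin")))
```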

## 3 Summary thoughts

This is not an exhaustive study of Mongo for a bibtex database, but it does illustrate that it is potentially useful. Imagine a group of users who enter bibtex entries and share them through a central server, or who query the server for entries and then select them with helm/ivy. That is probably faster than parsing large bibtex files (note: in org-ref I already cache the files in parsed form for performance reasons!).

It would make sense to split the authors, and keywords in another version of this database. It also could make sense to have a field that is the bibtex string, and to do text search on that string. That way you get everything in the entry for searching, and an easy way to generate bibtex files without having to reconstruct them.

It is especially interesting to run the queries through emacs-lisp since we get the benefit of editing lisp code while writing the query, e.g. parenthesis navigation, less quoting, etc… and we get back lisp data that can be used to construct helm/ivy queries, or other emacs things. That makes this look competitive with emacsql at least for the syntax. I predict that there will be more posts on this in the future.

org-mode source

Org-mode version = 9.0.3

## Find stuff in org-mode anywhere


I use org-mode extensively. I write scientific papers, keep notes on meetings, write letters of recommendation, notes on scientific articles, keep TODO lists in projects, help files for software, write lecture notes, students send me homework solutions in it, it is a contact database, … Some files are on Dropbox, Google Drive, Box, some in git repos, etc. The problem is that leads to org-files everywhere on my hard drive. At this point I have several thousand org-files that span about five years of work.

It is not that easy after a while to find them. Yes there are things like recent-files, bookmarks, counsel-find-file, helm-for-files, counsel/helm-locate, helm/counsel-grep/ag/pt, projectile for searching within a project, a slew of tools to search open buffers, there is recoll, etc… There are desktop search tools, and of course, good organization habits. Over a five year time span though, these change, and I have yet to find a solution to finding what I want. What about a file I made a year ago that is not in the current directory or this project, and not in my org-agenda-files list? How do I get a dynamic todo list across all these files? Or find all the files that cite a particular bibtex entry, or that were authored by a particular student?

Previously, I indexed org files with Swish-e to make it easy to search them, with an ability to search just headlines, or paragraphs, etc. The problem with that is the nightly indexing was slow since I basically had to regenerate the database each time due to limitations in Swish-e. Finally I have gotten around to the next iteration of this idea, which is a better database. In this post, I explore using sqlite to store headlines and links in org-files.

The idea is that anytime I open or save any org file, it will be added/updated in the database. The database will store the headlines and its properties and content, as well as the location and properties of all links and file keywords. That means I should be able to efficiently query all org files I have ever visited to find TODO headlines, tagged headlines, different types of links, etc. Here we try it out and see if it is useful.

## 1 The database design

I used emacsql to create and interact with a sqlite3 database. It is a lispy way to generate SQL queries. I will not talk about the code much here; you can see this version of org-db.el. The database design consists of several tables that contain the filenames, headlines, tags, properties, (optionally) headline-content, headline-tags, headline-properties, and links. The lisp code is a work in progress, and not something I use on a daily basis yet. This post is a proof of concept to see how well this approach works.

I use hooks to update the database when an org-file is opened (only if it is different than what is in the database based on an md5 hash) and when it is saved. Basically, these functions delete the current entries in the database for a file, then use regular expressions to go to each headline or link in the file, and add data back to the database. I found this to be faster than parsing the org-file with org-element especially for large files. Since this is all done by a hook, anytime I open an org-file anywhere it gets added/updated to the database. The performance of this is ok. This approach will not guarantee the database is 100% accurate all the time (e.g. if something modifies the file outside of emacs, like a git pull), but it doesn't need to be. Most of the files do not change often, the database gets updated each time you open a file, and you can always reindex the database from files it knows about. Time will tell how often that seems necessary.
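The hook wiring described above can be sketched like this. The function name org-db-update-buffer here is a hypothetical stand-in for the real update function in org-db.el, and the md5 check mirrors the one described in the text:

```emacs-lisp
;; A sketch of the hook wiring: reindex an org buffer on open/save, but
;; only when its contents differ from what was last indexed.
;; org-db-update-buffer is hypothetical; see org-db.el for the real code.
(defvar my/org-db-md5-cache (make-hash-table :test #'equal)
  "Map of filename to md5 of its last-indexed contents.")

(defun my/org-db-maybe-update ()
  (when (and buffer-file-name (derived-mode-p 'org-mode))
    (let ((hash (md5 (buffer-substring-no-properties (point-min) (point-max)))))
      (unless (equal hash (gethash buffer-file-name my/org-db-md5-cache))
        (puthash buffer-file-name hash my/org-db-md5-cache)
        (org-db-update-buffer)))))   ; hypothetical update function

(add-hook 'org-mode-hook #'my/org-db-maybe-update)
(add-hook 'after-save-hook #'my/org-db-maybe-update)
```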

emacsql lets you use lisp code to generate SQL that is sent to the database. Here is an example:

(emacsql-flatten-sql [:select [name] :from main:sqlite_master :where (= type table)])

SELECT name FROM main.sqlite_master WHERE type = "table";


There are some nuances, for example, main:sqlite_master gets converted to main.sqlite_master. You use vectors, keywords, and sexps to set up the command. emacsql will turn a name like filename-id into filename_id. It was not too difficult to figure out, and the author of emacsql was really helpful on a few points. I will be referring to this post in the future to remember some of these nuances!
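The dashed-name translation is easy to see with emacsql-flatten-sql. A sketch against the headlines table in this database (the exact string formatting may differ slightly):

```emacs-lisp
;; Dashed lisp identifiers become underscores in the generated SQL.
(emacsql-flatten-sql
 [:select [filename-id title] :from headlines :where (= todo-keyword "TODO")])
;; produces SQL along the lines of:
;;   SELECT filename_id, title FROM headlines WHERE todo_keyword = "TODO";
```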

Here is a list of tables in the database. There are a few primary tables, and then some that store tags, properties, and keywords on the headlines. This is typical of emacsql code; it is a lisp expression that generates SQL. In this next expression org-db is a variable that stores the database connection created in org-db.el.

(emacsql org-db [:select [name] :from main:sqlite_master :where (= type table)])


Here is a description of the columns in the files table:

(emacsql org-db [:pragma (funcall table_info files)])

 0  rowid     INTEGER  0  nil  1
 1  filename           0  nil  0
 2  md5                0  nil  0

(emacsql org-db [:pragma (funcall table_info headlines)])

 0  rowid               INTEGER  0  nil  1
 1  filename_id                  0  nil  0
 2  title                        0  nil  0
 3  level                        0  nil  0
 4  todo_keyword                 0  nil  0
 5  todo_type                    0  nil  0
 6  archivedp                    0  nil  0
 7  commentedp                   0  nil  0
 8  footnote_section_p           0  nil  0
 9  begin                        0  nil  0

The database is not large if all it has is headlines and links (no content). It got up to half a GB with content, and seemed a little slow, so for this post I leave the content out.

du -hs ~/org-db/org-db.sqlite

 56M /Users/jkitchin/org-db/org-db.sqlite

Here we count how many files are in the database. These are just the org-files in my Dropbox folder. There are a lot of them! If I include all the org-files from my research and teaching projects this number grows to about 10,000! You do not want to run org-map-entries on that. Note this also includes all of the org_archive files.

(emacsql org-db [:select (funcall count) :from files])

 1569

Here is the headlines count. You can see there is no chance of remembering where these are because there are so many!

(emacsql org-db [:select (funcall count) :from headlines])

 38587

(emacsql org-db [:select (funcall count) :from links])

 303739

That is a surprising number of links.

## 2 Querying the link table

Let's see how many cite links from org-ref there are.

(emacsql org-db [:select (funcall count) :from links :where (= type "cite")])

 14766

Wow, I find that to also be surprisingly large! I make a living writing proposals and scientific papers, and I wrote org-ref to make that easier, so maybe it should not be so surprising. We can search the link database for files containing citations of "kitchin-2015-examp" like this. The links table only stores the filename-id, so we join it with the files table to get useful information. Here we show the list of files that contain a citation of that reference. It is a mix of manuscripts, proposals, presentations, documentation files and notes.

(emacsql org-db [:select :distinct [files:filename]
                 :from links :inner :join files :on (= links:filename-id files:rowid)
                 :where (and (= type "cite") (like path "%kitchin-2015-examp%"))])


Obviously we could use this to generate candidates for something like helm or ivy like this.

(ivy-read "Open: " (emacsql org-db [:select [files:filename links:begin]
                                    :from links :inner :join files :on (= links:filename-id files:rowid)
                                    :where (and (= type "cite") (like path "%kitchin-2015-examp%"))])
:action '(1 ("o"
(lambda (c)
(find-file (car c))
(goto-char (nth 1 c))
(org-show-entry)))))

/Users/jkitchin/Dropbox/CMU/manuscripts/2015/human-readable-data/manuscript.org


Now, you can find every org-file containing any bibtex key as a citation. Since SQL is the query language, you should be able to build really sophisticated queries that combine filters for multiple citations, different kinds of citations, etc.
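For instance, a query for files citing either of two keys only needs a compound :where clause. A sketch; the second key here is a hypothetical placeholder:

```emacs-lisp
;; A sketch: files that cite either of two bibtex keys. The second key
;; ("some-other-key") is a placeholder; substitute a real key.
(emacsql org-db [:select :distinct [files:filename]
                 :from links :inner :join files :on (= links:filename-id files:rowid)
                 :where (and (= type "cite")
                             (or (like path "%kitchin-2015-examp%")
                                 (like path "%some-other-key%")))])
```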

## 3 Querying headlines

Every headline is stored, along with its location, tags and properties. We can use the database to find headlines that are tagged or with certain properties. You can see here I have 293 tags in the database.

(emacsql org-db [:select (funcall count) :from tags])

 293

Here we find headlines tagged with electrolyte. I tagged some papers I read with this at some point.

(emacsql org-db [:select :distinct [files:filename headlines:title]
                 :from headlines
                 :inner :join headline-tags :on (= headlines:rowid headline-tags:headline-id)
                 :inner :join tags :on (= tags:rowid headline-tags:tag-id)
                 :inner :join files :on (= headlines:filename-id files:rowid)
                 :where (= tags:tag "electrolyte") :limit 5])

 /Users/jkitchin/Dropbox/org-mode/prj-doe-early-career.org  2010 - Nickel-borate oxygen-evolving catalyst that functions under benign conditions
 /Users/jkitchin/Dropbox/bibliography/notes.org  1971 - A Correlation of the Solution Properties and the Electrochemical Behavior of the Nickel Hydroxide Electrode in Binary Aqueous Alkali Hydroxides
 /Users/jkitchin/Dropbox/bibliography/notes.org  1981 - Studies concerning charged nickel hydroxide electrodes IV. Reversible potentials in LiOH, NaOH, RbOH and CsOH
 /Users/jkitchin/Dropbox/bibliography/notes.org  1986 - The effect of lithium in preventing iron poisoning in the nickel hydroxide electrode
 /Users/jkitchin/Dropbox/bibliography/notes.org  1996 - The role of lithium in preventing the detrimental effect of iron on alkaline battery nickel hydroxide electrode: A mechanistic aspect

Here we see how many entries have an EMAIL property. These could serve as contacts to send email to.

(emacsql org-db [:select [(funcall count)] :from headline-properties
                 :inner :join properties :on (= properties:rowid headline-properties:property-id)
                 :where (and (= properties:property "EMAIL") (not (null headline-properties:value)))])

 7452

If you want to see the ones that match "jkitchin", here they are.

(emacsql org-db [:select :distinct [headlines:title headline-properties:value]
                 :from headlines
                 :inner :join headline-properties :on (= headlines:rowid headline-properties:headline-id)
                 :inner :join properties :on (= properties:rowid headline-properties:property-id)
                 :where (and (= properties:property "EMAIL") (like headline-properties:value "%jkitchin%"))])

 John Kitchin   jkitchin@andrew.cmu.edu
 John Kitchin   jkitchin@cmu.edu
 Kitchin, John  jkitchin@andrew.cmu.edu
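Rows like these could feed the same kind of ivy/helm selection used earlier for cite links. A sketch; the helper name is mine, not part of org-db:

```emacs-lisp
;; A sketch: format (title email) rows, e.g. from the query above, as
;; completion candidates for ivy/helm.
(defun my/org-db-contact-candidates (rows)
  (mapcar (lambda (row)
            (format "%s <%s>" (nth 0 row) (nth 1 row)))
          rows))

;; Hypothetical usage, with ROWS bound to the query results:
;; (ivy-read "Contact: " (my/org-db-contact-candidates rows))
```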

Here is a query to find the number of headlines where the deadline matches 2017. Looks like I am already busy!

(emacsql org-db [:select (funcall count) :from headline-properties
                 :inner :join properties :on (= properties:rowid headline-properties:property-id)
                 :where (and (= properties:property "DEADLINE")
                             (glob headline-properties:value "*2017*"))])

 50

## 4 Keyword queries

We also store file keywords, so we can search on document titles, authors, etc. Here are five documents with titles longer than 35 characters sorted in descending order.

(emacsql org-db [:select :distinct [value] :from
file-keywords :inner :join keywords :on (= file-keywords:keyword-id keywords:rowid)
:where (and (> (funcall length value) 35) (= keywords:keyword "TITLE"))
:order :by value :desc
:limit 5])

 pycse - Python3 Computations in Science and Engineering
 org-show - simple presentations in org-mode
 org-mode - A Human Readable, Machine Addressable Approach to Data Archiving and Sharing in Science and Engineering
 modifying emacs to make typing easier.
 jmax - John's customizations to maximize Emacs

It is possible to search on AUTHOR, and others. My memos have a #+SUBJECT keyword, so I can find memos on a subject. They also use the LATEX_CLASS of cmu-memo, so I can find all of them easily too:

(emacsql org-db [:select [(funcall count)] :from
file-keywords :inner :join keywords :on (= file-keywords:keyword-id keywords:rowid)
:where (and (= value "cmu-memo") (= keywords:keyword "LATEX_CLASS"))
:limit 5])

 119

How about that, 119 memos… Still it sure is nice to be able to find them.

## 5 Full text search

In theory, the database has a table for the headline content, and it should be fully searchable. I found the database got a little sluggish, and nearly 1/2 a GB in size when using it so I am leaving it out for now.

## 6 Summary

The foundation for something really good is here. It is still a little tedious to write the queries with all the table joins, but some of that could be wrapped into a function for a query. I like the lispy style of the queries, although it can be tricky to map all the concepts onto SQL. A function that might wrap this could look like this:

(org-db-query (and (= properties:property "DEADLINE") (glob headline-properties:value "*2017*")))


This is what it would ideally look like using the org tag/property match syntax. Somehow that string would have to get expanded to generate the code above. I do not have a sense for how difficult that would be. It might not be hard with a recursive descent parser, written by the same author as emacsql.

(org-db-query "DEADLINE={2017}")
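For the simple property={text} case, the expansion is actually not hard. A toy sketch, not the real implementation, translating a match string into the emacsql :where clause used earlier:

```emacs-lisp
;; A toy sketch (not the real implementation): expand "PROP={text}" into
;; the emacsql :where clause used earlier in this post.
(defun my/org-db-expand-match (s)
  (when (string-match "\\([A-Z_]+\\)={\\(.*\\)}" s)
    `(and (= properties:property ,(match-string 1 s))
          (glob headline-properties:value ,(format "*%s*" (match-string 2 s))))))

;; (my/org-db-expand-match "DEADLINE={2017}")
;; => (and (= properties:property "DEADLINE")
;;         (glob headline-properties:value "*2017*"))
```

The full tag/property match syntax (with &, |, comparisons, etc.) is where a real parser would earn its keep.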

The performance is only ok. For large org files there is a noticeable lag in updating the database, which matters because Emacs is blocked while the update runs. I could try using an idle timer for updates with a queue, or get more clever about when to update. It is not essential that the updates be real-time, only that they are reasonably accurate or done by the time I next search. For now, it is not too annoying. As a better database, I have had my eye on xapian, since that is what mu4e (and notmuch) uses. It might also be good to have an external library for parsing org-files, i.e. not through emacs; it would certainly be faster. It seems like a big project though, maybe next summer ;)
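The idle-timer idea could look something like this. A sketch: org-db-update-file is a hypothetical name for a function that reindexes one file.

```emacs-lisp
;; A sketch of the idle-timer idea: queue files at save time and drain
;; the queue when Emacs has been idle for a few seconds, so saves are
;; never blocked by indexing. org-db-update-file is hypothetical.
(require 'cl-lib)

(defvar my/org-db-queue nil
  "Files waiting to be reindexed.")

(defun my/org-db-enqueue ()
  (when (and buffer-file-name (derived-mode-p 'org-mode))
    (cl-pushnew buffer-file-name my/org-db-queue :test #'equal)))

(add-hook 'after-save-hook #'my/org-db-enqueue)

(run-with-idle-timer 5 t
                     (lambda ()
                       (while my/org-db-queue
                         (org-db-update-file (pop my/org-db-queue)))))
```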

Another feature this might benefit from is ignore patterns, or some file feature that prevents it from being indexed. For example, I keep an encrypted password file in org-mode, but as soon as I opened it, it got indexed right into the database, in plain text. If you walk your file system, it might make sense to avoid some directories, like .dropbox.cache. Otherwise, this still looks like a promising approach.