Adding search to your [ghost] blog - part 2
Okay, this is the next step in adding search functionality to my blog.
The biggest letdown of the original integration was that I needed to remember to run the search updater every time I created or updated a blog post. What if I was writing the entry remotely or, more than likely, just plain forgot?
At the end of the last post I mentioned the possibility of adding an IFTTT snippet (they actually call them applets) to automate the process, so whilst the idea was still fresh I decided to investigate it.
Unfortunately I could not find a way for IFTTT to execute a node script when my blog changed, but I could use it to send a web request to another service, which could then execute a script when it received that request. For this second service I decided to use Azure Functions because, well, I have some spare free credit each month and I might as well use it for something useful.
Setting up the Azure function
Creating an Azure function is relatively easy (assuming you have an account), but if you decide to use another service, e.g. AWS Lambda, the steps below may still provide hints about how to implement similar functionality.
- If you don't already have one, create a "Function App" (Compute).
- Once created, add a simple function that uses JavaScript (Node) and can be triggered by an HTTP request. [I also chose to set the authorization level to Function as I prefer some level of access control.]
- We need to pull in the `algolia-webcrawler` package. To do so, upload the `package.json` file below and run `npm install` from the Console. [I upload all the files into the same folder as the function I am creating. I also find the Kudu console works best for this as it displays the output/progress of the command.]
{
    "name": "rss-ghost-algoliasearch",
    "version": "1.0.0",
    "dependencies": {
        "algolia-webcrawler": "^1.0.3"
    }
}
- Next, upload your configured `config.json` file that is used to control the `algolia-webcrawler`.
- Now, replace the `index.js` file with the following snippet. [I did play about a bit here and finally decided that `spawn` was the method that worked best for me.]
// index.js - HTTP-triggered function that runs the crawler
var spawn = require('child_process').spawn;

module.exports = function (context, req) {
    context.log('HTTP trigger function processed a request.');

    // Run the crawler in a child node process, pointing it at the
    // config.json uploaded alongside this function.
    var child = spawn('node',
        [__dirname + '/node_modules/algolia-webcrawler',
         '--config',
         __dirname + '/config.json'],
        {});

    // spawn streams its output, so the crawler's progress shows up in the logs.
    child.stdout.on('data', function (data) {
        context.log(data.toString());
    });
    child.stderr.on('data', function (data) {
        context.log(data.toString());
    });
    child.on('close', function (code) {
        context.log('[END] code', code);
    });

    // Respond straight away; the crawler's output is still written to the
    // log as it arrives.
    var res = {
        // status: 200, /* Defaults to 200 */
        body: ""
    };
    context.done(null, res);
};
- Finally, run the function to test that it works. [You can see the output in the Logs section.]
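You can also trigger the function by hand with a plain HTTP GET, which is exactly what IFTTT or Zapier will do later. Below is a minimal sketch in Node; the host and function name match my setup in the next section, and YOUR_FUNCTION_KEY is a placeholder for the key you copy from "Get function URL" in the portal.

// test-trigger.js - fire the same request IFTTT/Zapier will send.
var https = require('https');

// Placeholder URL: substitute your own host, function name and key.
var functionUrl = 'https://functions-many-monkeys.azurewebsites.net/api/UpdateBlogSearchIndex?code=YOUR_FUNCTION_KEY';

https.get(functionUrl, function (res) {
    console.log('Status:', res.statusCode);
    res.resume(); // the body is empty, just drain it
}).on('error', function (err) {
    console.error('Request failed:', err.message);
});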
Setting up IFTTT
- Go to IFTTT and create a New Applet
- For the "IF" choose RSS (new feed item) and enter the URL to the rss feed of the blog e.g. http://blog.many-monkeys.com/rss/
- For the "THAT" choose Maker WebHooks and enter the url to the function e.g. https://functions-many-monkeys.azurewebsites.net/api/UpdateBlogSearchIndex?code=K2mp...
- Test that the trigger works - this may take some time
Assuming everything above works without issue, then whenever your blog updates the IFTTT applet will be triggered (usually within an hour, though the docs say 15 minutes) and the search index on Algolia will be updated.
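If you want to double-check that the index really was updated, you can query Algolia directly once the applet has fired. Here is a minimal sketch using the callback-style algoliasearch client; the application ID, search-only API key and index name are placeholders for the values you already have in `config.json`, and the fields on each hit depend on what the crawler stored (objectID is always present).

// check-index.js - search the index for something from the new post.
var algoliasearch = require('algoliasearch');

// Placeholders: use your own application ID, search-only key and index name.
var client = algoliasearch('YourApplicationID', 'YourSearchOnlyAPIKey');
var index = client.initIndex('your_index_name');

index.search('a phrase from the new post', function (err, content) {
    if (err) {
        console.error(err);
        return;
    }
    console.log(content.nbHits + ' hit(s)');
    content.hits.forEach(function (hit) {
        console.log(hit.objectID, hit.url || '');
    });
});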
Using Zapier
I am not sure what is going on with IFTTT, but I found I couldn't rely on it to run when I updated my posts, so I tried a similar service called Zapier instead. Creating a zap is just as easy as creating an applet, RSS->Webhook, and checks run every 5 minutes (or every 15 minutes on the free plan).
To make Zapier work, though, I had to use the "anything is different" option for the RSS trigger, so the issue with IFTTT may be related to how Ghost generates the RSS feed that the trigger consumes.
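If you want to see what the triggers are actually keying on, it only takes a few lines of Node to pull the feed and print the relevant bits. This is a rough sketch that just pattern-matches the XML for a quick look; swap in your own feed URL.

// inspect-rss.js - print the pubDate and guid of the latest feed items.
var http = require('http');

http.get('http://blog.many-monkeys.com/rss/', function (res) {
    var xml = '';
    res.on('data', function (chunk) { xml += chunk; });
    res.on('end', function () {
        // Crude regex inspection - fine for eyeballing, not for real parsing.
        var dates = xml.match(/<pubDate>[^<]*<\/pubDate>/g) || [];
        var guids = xml.match(/<guid[^>]*>[^<]*<\/guid>/g) || [];
        console.log(dates.slice(0, 5));
        console.log(guids.slice(0, 5));
    });
}).on('error', function (err) {
    console.error('Request failed:', err.message);
});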
I would appreciate any feedback and suggestions for further improvements.