UPDATE on 12-May-2018: You can now test my work directly by cloning my repo, updating the source tree to my bookmark with hg up ocs, making a build, and calling wiki_login from octave-cli to get a login token in return. I've added the files and the necessary changes to the codebase itself; there is no separate ocode directory anymore.
If you already have a build of Octave, just do the following:
- hg pull https://email@example.com/me_ydv_5/octave in your source tree.
- hg up ocs
- hg up -r 8bbf393
- make -jX in your build tree.
This will save you the time of cloning the entire repo and compiling it.
As mentioned in the previous post, I worked on the __publish_wiki_output__.m and publish.m code. __publish_wiki_output__.m has been added as an internal function in scripts/miscellaneous/private. I skimmed through the parser in publish.m to get the gist of how it actually works. It parses at three levels:
- Extract the overall structure (paragraphs and code sections).
- Parse the content of each paragraph.
- Generate the output of the script code and keep track of the figures the code produces.
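To illustrate the first level, here is a rough sketch of how I understand the structural split; this is my own paraphrase in C++, not the actual publish.m code, and the "%%"-starts-a-paragraph rule is the usual publish section-marker convention:

```cpp
#include <string>
#include <vector>

// One parsed section: either a comment paragraph or a run of code lines.
struct section
{
  bool is_code;
  std::vector<std::string> lines;
};

// Level 1 of the parse (illustrative only): a line starting with "%%"
// opens a new paragraph; other comment lines extend the current
// paragraph; non-comment lines form code sections.
std::vector<section> split_sections (const std::vector<std::string>& script)
{
  std::vector<section> out;
  for (const std::string& line : script)
    {
      bool para_start = line.rfind ("%%", 0) == 0;
      bool is_comment = ! line.empty () && line[0] == '%';

      if (para_start)
        out.push_back ({false, {line}});
      else if (out.empty () || out.back ().is_code == is_comment)
        out.push_back ({! is_comment, {line}});
      else
        out.back ().lines.push_back (line);
    }
  return out;
}
```

Levels 2 and 3 then work within each resulting section.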
After that, I studied how the url-transfer.h file is implemented: it contains a base class named base_url_transfer, which has a derived class named curl_transfer. One thing that puzzled me while doing so was why there has to be a HAVE_CURL macro for curl_transfer to be defined, and why we have not defined the url_transfer class itself that way. I will try to get these doubts resolved this week.
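My current guess at the answer, sketched below, is that the guard lets Octave build on systems without libcurl: the base class always exists, while the libcurl-backed subclass is only compiled when configure found the library. This is an illustrative pattern, not the real url-transfer.h:

```cpp
// Sketch of the conditional-compilation pattern (my understanding,
// not the actual Octave sources).
class base_url_transfer
{
public:
  virtual ~base_url_transfer () = default;

  // Without libcurl, transfers are simply reported as unavailable.
  virtual bool is_valid () const { return false; }
};

#if defined (HAVE_CURL)
// Only compiled when configure detected libcurl and defined HAVE_CURL,
// so builds without the library still link cleanly.
class curl_transfer : public base_url_transfer
{
public:
  bool is_valid () const override { return true; }
};
#endif
```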
The problem of the user agent was solved by selecting the following one:
GNU Octave/OCTAVE_VERSION (https://www.gnu.org/software/octave/ ; firstname.lastname@example.org) libcurl/LIBCURL_VERSION
where OCTAVE_VERSION and LIBCURL_VERSION correspond to the user's Octave and libcurl versions respectively. This code does precisely that for us.
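As a quick illustration, the string could be assembled like this; the version arguments are placeholders for where the real code would substitute Octave's version and the detected libcurl version:

```cpp
#include <string>

// Builds the user-agent string; octave_ver and curl_ver stand in for
// OCTAVE_VERSION and LIBCURL_VERSION from the post above.
std::string make_user_agent (const std::string& octave_ver,
                             const std::string& curl_ver)
{
  return "GNU Octave/" + octave_ver
         + " (https://www.gnu.org/software/octave/ ; "
           "firstname.lastname@example.org) libcurl/" + curl_ver;
}
```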
My intended plan for the wrapper is to make a cookie_manager.m file that will process the various user options (like verbose output, timeout settings, the api.php URL, etc.) and pass the values to an internal __curl__.cc function, which will in turn use libcurl_wrapper.cc for the various tasks (essentially, all the work related to cookies will be handled by it).
Currently, all the code in wiki_login.m has been commented out except the first step of login, i.e., getting a login token from api.php, which it does smoothly as of now. I am assuming that the file which stores the cookies is temporary and should be deleted once the session expires. This is one of the things I will be looking into this week.
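To sketch the cookie-file lifecycle I have in mind — all names here are hypothetical, and in the real wrapper the jar path would be handed to libcurl via CURLOPT_COOKIEFILE / CURLOPT_COOKIEJAR — the temporary jar could be tied to the session like this:

```cpp
#include <filesystem>
#include <fstream>
#include <string>

namespace fs = std::filesystem;

// Hypothetical sketch: create a temporary cookie jar for the session
// and delete it once the session is over.
class cookie_jar
{
public:
  cookie_jar ()
    : m_path (fs::temp_directory_path () / "octave-wiki-cookies.txt")
  {
    std::ofstream ofs (m_path);  // create an empty jar file
  }

  // This path would be passed to libcurl so it reads and writes the
  // session cookies here.
  std::string path () const { return m_path.string (); }

  // Session expired: remove the jar so no cookies linger on disk.
  ~cookie_jar () { fs::remove (m_path); }

private:
  fs::path m_path;
};
```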
I’ve migrated all the developments from my forked git repo to my mercurial bookmark
ocs recently and so I was not sure where should I put the files in my source tree. Thus, I’ve put all of them in a directory
ocode for now.
To test this for yourself:
- Clone my repo using hg clone https://email@example.com/me_ydv_5/octave
- Make yourself a build of Octave (make -j2, etc.).
- Update to my bookmark using hg up ocs. (IMPORTANT!)
- Run wiki_login in Octave to get a login token.
All other details of the wrapper's implementation will follow in the next post.
The next week will see the following advances:
- Choosing the right location for the files (after I get a green light for the current development path).
- Extending other options in the wrapper for wiki_login's steps 2 and 3.
- Implementing cookie_manager with other user options.
- Writing help text and test cases, if any.
- Correcting existing work / changing the strategy as advised by my mentor, or anyone else.
- Looking into how I can use the existing base_url_transfer class in the wrapper, and resolving my queries about the HAVE_CURL macro, shared pointers, etc.
I am optimistic that I will be able to complete my first-evaluation work by 25 May or so, as I will need to focus on my end-term examinations after that, which start from 1 June. We don't get holidays between the exams!
Please let me know whether or not I am doing this the right way, by either replying to this thread or simply dropping a message at
<batterylow>. All suggestions are always welcome.
Oh, and not to forget: I got an SSL certificate for my domain, so now all requests are served via HTTPS only!
Stay tuned for the next update.