so, I auto-renewed on November 16th, and given the current state of the website, I think we're getting ripped off: $40/year for a low-functionality website and tech support that goes AWOL for months at a time... best case scenario, of course, is that the new "we're getting everything under control" message from Kaleidoscope is legit, but I think one of my New Year's resolutions is to pack my bags and prepare to bug out
to that end, I wanted to sound out what tips people might have for doing that as painlessly as possible (context: I have thousands of wiki pages... also, I'm a science geek working for an IT- and engineering-oriented company, so this is as good an excuse as any to accelerate my cross-training and learn some more coding)
worst case scenario: the site goes dark tomorrow because Kaleidoscope Global just up and pulls the plug on us
recovery option: take the .xml file the current backup feature spits out, parse it back into thousands of text files, and copy and paste those into a new platform
current action: making regular backups of my campaigns with the current feature, resolved to bug out if the backup feature becomes yet another broken feature on this site
to do: 1) figure out the best way to automate reading the XML file and parsing it into separate text files equivalent to the old wiki pages (a first stab at this is sketched below), 2) figure out another venue that uses Textile, OR figure out the best way to batch-convert Textile into something more widely used...
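for task 1, I don't actually know the schema of the backup XML yet, but assuming each wiki page is wrapped in a `<page>` element with `<title>` and `<body>` children (pure guesswork on my part until I crack open a real export), a first stab in Python might look like:

```python
# Sketch: split a wiki backup XML into one Textile file per page.
# ASSUMPTION: the export wraps each page in a <page> element with
# <title> and <body> children -- adjust the tag names to match the
# real schema once you inspect the actual export file.
import re
import xml.etree.ElementTree as ET
from pathlib import Path

def split_backup(xml_path, out_dir="pages"):
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    tree = ET.parse(xml_path)
    for page in tree.getroot().iter("page"):        # assumed tag name
        title = page.findtext("title", "untitled")  # assumed tag name
        body = page.findtext("body", "")            # assumed tag name
        # Make the title safe to use as a filename.
        fname = re.sub(r"[^\w\- ]", "_", title).strip() + ".textile"
        (out / fname).write_text(body, encoding="utf-8")

split_backup("campaign_backup.xml")
```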
status: I've tested out desktop wiki software (Zim), which doesn't do Textile; I've tried GitHub (mainly a software dev collaboration tool, but it has wiki functionality that speaks Textile, plus a desktop client)... limited luck so far, but better than a GitHub competitor that claims to read Textile but, when I tested it, not so much
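for the batch-conversion half of task 2, pandoc reads Textile and writes Markdown (among many other formats), so once the pages exist as individual files, something like this could sweep the whole directory (assuming pandoc is installed and on the PATH; keeping it in Python so it chains with the splitter above):

```python
# Sketch: batch-convert Textile files to Markdown via pandoc.
# Assumes pandoc is installed and available on the PATH.
import subprocess
from pathlib import Path

for src in Path("pages").glob("*.textile"):
    dst = src.with_suffix(".md")
    subprocess.run(
        ["pandoc", "-f", "textile", "-t", "markdown",
         str(src), "-o", str(dst)],
        check=True,  # raise immediately if pandoc chokes on a file
    )
```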
medium case scenario: the site is still functioning poorly come November 2016
desired course of action: find a more civilized way to save and migrate everything... which probably means learning some client-side scripting and writing my own code to pull every single wiki page off (see the sketch after this block)
fallback method: doing it manually. ugh. like I said: thousands of wiki pages
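if it comes to scraping, I haven't inspected the site's HTML, so every URL and selector below is a placeholder, but the general shape in Python (requests + BeautifulSoup) would be something like:

```python
# Sketch: pull every wiki page down over HTTP.
# HYPOTHETICAL: the base URL, index path, and CSS selectors are all
# placeholders -- inspect the real site's HTML to fill them in, and
# a logged-in requests.Session may be needed for private campaigns.
import requests
from bs4 import BeautifulSoup
from pathlib import Path

BASE = "https://example-campaign-site.com"    # placeholder URL
out = Path("scraped")
out.mkdir(exist_ok=True)

index = requests.get(BASE + "/wiki")          # placeholder index page
soup = BeautifulSoup(index.text, "html.parser")
for link in soup.select("a.wiki-page-link"):  # placeholder selector
    url = BASE + link["href"]
    page = BeautifulSoup(requests.get(url).text, "html.parser")
    body = page.select_one("div.wiki-content")  # placeholder selector
    if body is not None:
        fname = link.text.strip().replace("/", "_") + ".html"
        (out / fname).write_text(str(body), encoding="utf-8")
```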
best case scenario: the site's functionality gets fixed and confidence is restored
desired course of action: probably still want to learn the right client-side scripting so I can *upload* batch-generated pages (from offline Bash scripting) to my wiki, not just download them as described above (a speculative sketch of that follows)
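no idea whether the site exposes anything like a real API, so this last bit is pure speculation: if uploading just means submitting the same form the browser does, a requests session sketch might look like the following, where every endpoint and field name is made up and would have to be reverse-engineered from the browser's network traffic on the real site:

```python
# Sketch: push locally generated pages back up to the wiki.
# EVERYTHING here is hypothetical -- the login URL, form fields, and
# page-save endpoint are placeholders, not a documented API.
import requests
from pathlib import Path

BASE = "https://example-campaign-site.com"    # placeholder URL

with requests.Session() as s:
    # Log in first so the session cookie is set (hypothetical fields).
    s.post(BASE + "/login", data={"username": "me", "password": "secret"})
    for src in Path("generated_pages").glob("*.textile"):
        # Hypothetical save endpoint and form field names.
        s.post(
            BASE + "/wiki/save",
            data={"title": src.stem,
                  "body": src.read_text(encoding="utf-8")},
        )
```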