Supporting material

Earlier posts dealt with technical issues (configuration vs content) and general matters (content vs its management). These are not the same. Now, we are venturing into operational issues that stump the practitioner.

Yet, many have worked out their process (see Hereditary Society membership, 2nd bullet). For the most part, these age-old techniques have ended up with information being duplicated in a manner that spawns what looks like a mess. For one, any of these point-in-time methods for storage will result in erroneous data. Case in point? Take an application at time 0. Let’s say the person later loses a spouse and remarries. Is the application updated? Or, suppose that later research shows some earlier information to be wrong. Are the applications updated? There will be some time after 0 when the information will be stale. That is the reality of computing.

So, in some cases, the ramifications of new bits of data becoming available may even invalidate an application. What to do? Well, firstly, let’s all agree that it is a difficult situation. However, at the same time, genealogy ought to be vigilant in keeping processes up to date and effective. How so?

Well, we are discussing that. — So, before proceeding, let’s look at a case or two of recent experience. The situation involves a three-year stint during which over fifteen applications were submitted and accepted (genealogically; in one case, there were differences of opinion with regard to the ancestor’s suitability — in others, the proof in the application was considered sufficient, so why join? – as in, it had been verified by an independent party).

As well, the first application was with the D.A.R., during which a trained genealogist helped with the structure. The leg-work was performed by the researcher (for the applicant). That effort was followed up, later, by supplemental applications to the D.A.R. For the most part, applications have used a different ancestor and part of the tree (the goal was to verify main branches). The D.A.R. experience was sufficient to allow the researcher to work all of the other applications. The researcher took no genealogy courses; rather, the researcher has years of experience in research and modeling (in an advanced computing context across a wide spectrum of domains).

So, the idea is to do this work, allow results to undergo scrutiny and, then, describe the experience (pending – see R.C. Anderson’s take on methods).

Now, the image, below, is a snap from the Word files that were used to print an application’s supporting material. That is, the application form (which can vary quite a bit between organizations) is not shown. The pages are marked source material that cogently links generations back to the 11th.

Again, let’s stop and consider the evaluation of this material. The expert who reviewed the application and the supporting material said that it was one of the best that the person had seen. At some point, the details of this application will appear at the Thomas Gardner Society, Inc. site. The research effort for this application resulted in two articles that were published by the TEG (Vol 34 – see The Gardner Annals Vol 1, No 2).

(Image: Application supporting material)

The image has four groups. On the upper left is the material for Generations 1 & 2. Then, we have material for Generations 3 & 4. On the lower part, we have the generations that are further removed. The first group consisted of 8 pages. The second group had another 9. The other groups had 28 pages.

What we need to discuss in terms of modern handling are Groups 1 and 2. These are both semi-static in nature. Group 2 is especially so, after deaths are recorded. However, both of these groups deal with sensitive information. That is, as technology rolls forward, we know more and more about a person. Query: why does a heritage society need to know the details of someone’s death – voyeurism? Hence, besides the issue of privacy, we see that Groups 1 and 2 are duplicated between applications (and we’re sensitive to that, having shipped off packets of mostly duplicated material many times over in a short time frame). Group 2 may differ by whether the applicant is using the paternal or maternal tree; however, the information is mostly static.

How might Groups 1 and 2 be handled in a more private, robust, and effective manner?

At the moment, the Thomas Gardner Society, Inc. is preparing a process for handling membership applications. This question is central to that process, which will be defined here. Prior to that, let’s look at other examples, which will be linked to this post.


Content and specifics

Let’s see, research and organization can make progress: consider the marriage record of Thomas and Margaret. And, various attributes, and necessities, come to mind: consistency, persistence, …

Persistence? Yes, media wax and wane. How ought we handle this?

Earlier notes on content kicked the issues down the road, as we wrestled with technique (continual set of issues). So, now we can get to the matter.

WikiTree is an example. Ought we go that way, or do our own thing? Of course, the question presupposes that a context has been set. But, it has not. Hence, we’re merely getting the discussion going.


Content versus configuration

Again.

McLuhan’s little ditty can apply here: The medium is the message. Except, now, one might claim this: The configuration is the content.

Not!

Configuration has been the focus, of late. There are many things pending, like user rights.

Too, though, we will get into the philosophy of genealogy, methods, pedigrees, and a lot more.

Note, 06/22/2015: The Truth Engineering aspects will be coming to the fore, more often, as we proceed. Note, please, the post from a few days ago about code. We are approaching the point where all rational/aware adults will have to grasp something about the underpinnings of the web/cloud that they have been using willy-nilly (as, how many even know what the providers are up to? …, and a whole lot of similar questions – from an old guy who was there from the beginning – almost).

Note, 09/06/2015: Quora will be a great asset for knowledge depositing.


Journal entries, technical and comments

I’ll use this post to keep a running summary. The background can be found in the prior post: Alpha/beta site (todo list at the bottom of the post; will be moved to this post).

Essentially, these entries deal with changing the website using HTML/CSS, for now. There is one bit of javascript (more on this later, including a look at the different language options). Right now, I’m only working with the Sea Monkey editor and a simple text editor. Google has some edit apps now; I will try their HTML and CSS editors.

Aside: The world is moving fast; we all see technology running leaps and bounds. How do we make sense of all this? Does pulling the ostrich act cut it? Is exploiting the situations more than short term (oh yes, must be nice to roll in the dough, but there can be much discussion here about near zero and meta issues)? Ever think about truth engineering (essentially, knowing truth engines (all types) and using this knowledge)?

Historic list (will reopen when more modernization work is done) 

04/21/2015 — Did Google’s Mobile-Friendly test.

09/03/2014 — Redid the header and color scheme.

08/26/2014 – Almost done with another jQuery lesson (did one earlier, was not motivated to follow through – all sorts of philosophical issues to discuss – albeit eight years after the horse was let out of the barn) on Codecademy. Taking it easy; what’s the hurry? The world is too rushed. So, what might we do with manipulations of the DOM that are interesting from the points of view that we are trying to develop here?

07/31/2014 – All done except for the Sources page. Also, have an FB page for announcements, plus.

07/18/2014 – Wikipedia page references (for one thing, using Vol. III, No. 2) being updated. About Us and 400ths being reworked. … Main page changed (Beacon issues migrated to new format). … Almost ready to get back to content.

07/16/2014 — Pushed up changes to many pages; created new page, devlog. At some point, will assess status of the pages (as in, count remaining ones to be done) and estimate some end date.

07/14/2014 — Clipping works with “position:absolute;”, which will create tradeoffs, but it will work (that is, using only HTML/CSS). Will match this image (hopefully, the position will look okay for most browsers). — Posted a little later, example banner in place for review. … WebGL looks very interesting and capable.
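
For the curious, a minimal sketch of that clipping approach follows; the class name and rectangle values are illustrative, not the site’s actual CSS. The tradeoff mentioned comes from the fact that the CSS 2.1 clip property only applies to absolutely positioned elements, which take the element out of the normal flow.

    /* hypothetical class; clip takes rect(top, right, bottom, left) */
    .banner-crop {
      position: absolute;
      clip: rect(0px, 600px, 150px, 0px);
    }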

07/14/2014 — Filled in the table of Gardner’s Beacon issues. I like the column look by Volume. Earlier Volumes had more entries. … Started to look at using CSS shapes for the header/banner, which is now an image. What comes up is clipping, which works on images. So, started to look at Canvas and found this tutorial: 31 days of Canvas tutorials. Nice. The comments show that the “religious” wars of computing continue unabated. Perhaps, that drives the creativity. … Looking closer at the javascript approach, it’s tedium revisited (like 30 years ago?). Which, then, brings us to newer efforts, such as WebGL. Per usual, the questions revolve around who can use it and where (as in, we want to be able to support the widest range of browsers and users – as in, not following Apple’s exclusivity – where would Thomas sit in these arguments?). Official site for WebGL.

07/12/2014 — Added flyover menus on the navigation bar (About us –> Gardners and Gardners, etc.). Also, started the redo of Gardner’s Beacon (see Vol. IV, No. 2). For the collection, trying to use hover to provide a multi-select menu (issue, post, PDF). We’ll see how that goes. The titles are more visible. … Next up, there will be some redo of other pages and, then, a stop/review to see if some adjustments ought to be made before committing to total distribution. … Learned a lot. There has been a whole lot of activity by all sorts of folks. At first, I thought of it as much ado about not much (permutations, even if enumerated, do not say much). However, just using HTML/CSS provides a lot. … Which reminds me of the graphic redo (using shapes). That ought to be interesting.
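
To give a flavor of the hover approach, here is an illustrative sketch; the class names are hypothetical, not the ones used on the site.

    /* the submenu stays hidden until the parent item is hovered */
    .menu-item { position: relative; }     /* anchors the flyover */
    .menu-item .submenu { display: none; }
    .menu-item:hover .submenu {
      display: block;
      position: absolute;                  /* flies over the content below */
    }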

07/10/2014 – After getting the CSS to settle down (lots to talk about here) and tying the test site to the production site, I went to look at fixing the image at the top. Also, started to look at menus. So, the fix was a simple one-liner (z-index:10). Now, contents slide under the image. The footer fix did not require that (because it’s a div and not an img?). On the menu, put in the text that will be in the menus. Thinking of having them on the side, invisible (float required?).
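
As a sketch of that one-liner, with a hypothetical selector (z-index only applies to positioned elements, which the image at the top is):

    #header-image {
      position: fixed;   /* pinned at the top of the viewport */
      z-index: 10;       /* contents now slide under the image */
    }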



Alpha/beta site

Earlier, we talked about content management. Essentially, there are a couple (for now) of sides of computing that are like a car user and a mechanic. The users go places with the car, show it off, and other things. Some users may peek under the hood, but it’s the car guy/gal who really works under the hood. Please note that we’re ignoring the car manufacturer, etc.

The point: use versus support. It’s an age-old thing in computing. Of late, a whole lot of the latter has been pushed off to the user (who is supposed to use helpers, to wit, call centers, chat rooms, and the like).

So, a few years ago, we were happily ensconced in OfficeLive. That went away, with prior notice. At the time, I moved over to what I was familiar with: HTML tables (with Sea Monkey’s editor and a text editor). That main page (version 1) is still there and worked for a long while. But, we need to get more modern, for several reasons. For one thing, a lot of new stuff has come about. Too, going forward, we will be adding functionality.

So, playing with the buttons seemed to be a good start. Why? In the move from OfficeLive, where the design was done with a modern WYSIWYG editor, I cut the buttons to images (see version 1) and used a table for layout, adding links. It worked. So, that button work was done and has been distributed (except for some Gardner’s Beacon issues).
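
For illustration, that version-1 approach looked something like the following; the file names and link targets here are made up for the example.

    <!-- button images in a table layout, each wrapped in a link -->
    <table>
      <tr>
        <td><a href="about.html"><img src="btn-about.png" alt="About Us"></a></td>
        <td><a href="beacon.html"><img src="btn-beacon.png" alt="Gardner's Beacon"></a></td>
      </tr>
    </table>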

For now, the header image was cut from OfficeLive. I want to redo this using either CSS shapes or javascript. However, why not start with the basics?
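
As a taste of the CSS-shapes route, here is one hypothetical way to draw a simple banner wedge without cutting an image; the sizes and color are made up.

    /* border trick: a zero-sized box whose borders render as a solid triangle */
    .banner-wedge {
      width: 0;
      height: 0;
      border-left: 240px solid transparent;
      border-bottom: 90px solid #446688;
    }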

From now on, as I tweak the code (see disclosure, below), I will be using the website to test (see Alpha/Beta link from the main page). Of course, Alpha implies an earlier status than does Beta. Changes will migrate to the production site after they are thoroughly tested (against multiple browsers).

Aside: So, one thing that I learned was that turning on history, in the browser, allowed a lot of the mischief (poor decisions on the part of developers and original suppliers as they started to trample on the rights of the users of the “cloud” or whichever computational framework you want to discuss) that we see nowadays (more later – see disclosure). Has that story been told?

Note that in beta, there is only one page (the main one); however, the links are active. Other pages will follow.

For now, there is no database, but, as we add that capability in, we’ll have to re-address the issue.

— Disclosure —

We mentioned Codecademy in several places, such as in the post Does code matter? It was mentioned that the blogger has used 50+ languages. Consider a timeline from the early mainframes to the smallest of the mobiles. That whole framework is spanned by two organizations to which the blogger belongs: ACM (computing society) and IEEE (advancement of technology). Too, the blogger was involved with the advanced portion from the beginning. Advanced edge? Yes, a rolling wave.

So, these last few innovations fit right within that framework in the mind of the blogger. He spent a little bit of time catching up with HTML/CSS (plus php, javascript). He had already used Python in several modes. Codecademy’s little system allowed a refresher look, so to speak.

But, too, the blogger ran across a site that claims to be on Time’s list of best in 2013. Say what? If the list contains more sites like that, someone is not doing their homework. Or, they’re bedazzled by flash.

You see, the blogger’s experience is with really difficult problems, including those where handling measure issues is imperative. A lot of what one sees now in web sites and applications is user interface. Or moving and showing data. Now, that last is okay, as the blogger has worked on database issues from the early types all the way to object types. As for communication, the blogger was involved in remote calls, synchronization, and similar things early on.

There is probably more to say here, however we’ll cease, for now.

Let’s close by listing some improvements expected to be coming soon. These are format and user issues. At the same time, the changes will propagate down the tree of pages. At some point, the Beta site will be forward-looking rather than catch-up.

To do list (will be moved to the tech journal): position the header as fixed (float), create the header image on the fly, use flex block (iframe), increase content in the footer, use cascade on menu items, Gardner’s Beacon (reorganize main page (menus), redo menu items, better links to PDFs and blog posts, …), fix the main nav menu, get the blog (WordPress?) under the TGS site, …



Rights for users of the cloud

Thoughts on the eve of the Fourth of July.

The recent revelation that Facebook (FB), and its educational cohorts, experimented with the on-line experience of FB users raises a whole lot of issues. Here is one WSJ take: Few limits.

What are the issues? Some mention “big brother” and such, out of fiction. I would claim that we are having a recurrence of the events that led to the Magna Charta (799 years ago). However, the context is quite different, yet the same. Then, we had Barons who were interested in their welfare (which, by the way, was dependent upon countless serfs who lived lives of drudgery and worse) trying to rein in a King (who was their cousin, for the most part).

Now, we have humans who are trying to use, in a reasonable manner, a resource that was provided, originally, by the U.S. government, and who, by doing so, subject themselves to the machinations of immature males (for the most part), to the malfeasance of those who misuse their abilities, and more.

How did this state of affairs come about? Is (was) it inevitable? We can (will) go into that.

But, the notions that were encapsulated in the Magna Charta are age-old (that is, thought about long before 1215). The promise of the internet goes right along with this dream. That is one reason to attempt to do it right. As well, we need maturity on both sides of the fence: providers and users. Such would advance the human race, as a whole.

We have 100 years or so of computational experience to draw upon. Too, this ought to be the opportunity to adjust the theoretic framework, as needed. The tasks involved will not be easy. There can be fun, though.



Made w/ Code

Google is sponsoring a site that is interesting: Made w/ Code. So, go over there, to that Google site, and read some important messages.

See the FAQ about their use of Blockly (a web-based, graphical programming editor), which is a nice touch. I often wonder what happened to things like Live Model (one of many examples; essentially, interfaces that allow system building through interactive diagrams) that allow one to get away from the nuisance of code (yes, guys, that statement is by design and can be discussed further).

The TGS, Inc. site deals principally with historical/genealogical topics; however, we are in a transition mode where technological issues are foremost. The technology of today is much different from that available to our forebears. Yet, we’re mostly the same people (the memes issue might come into play here; we’ll get back to that).

So, we have been looking at web stuff, in part. As well, that brings up code (see Does code matter?), which ought to be of more interest than it is. People seem to have let the wizards have their day and way (see Wake up, people, it’s your right).

But there is a whole lot more to think about than code, though one will still have one’s hands in the stuff. Truth engineering is one basis that has had less attention than it ought to have.

To wit: ancestry.com being down so long (with its secondary offerings – say, myfamily — being out for days).

Now, back to girls and coding. In my long years of working, I had cohorts who were female and who were quite adept with computational matters. At the same time, these cohorts ran a gamut that guys usually don’t understand (or experience). In a sense, though there may be some issues related to Sheryl’s efforts, she is right on some of the gender matters.



Cyber-physical

We hear a lot about the cloud. What exactly is that? An opaque sink for data, which can be used by whoever can obtain access (even if they have no right to the data)? A way for management to get something for nothing (their main objective, it seems) while impairing safety? That topic will continue to be of interest.

Code? We hear a lot about that, too. Code having some meaning in and of itself? Much to discuss there.

This month, ACM Communications (Vol. 57, No. 6) has an article titled: Cyber-Physical Testbeds. This is just a brief mention of an important topic. With the coming Internet of things, the issues here will be of even more importance.

We touched upon the topic about four years ago, in another context. So, the interest is there, especially when biological computing (one of many examples) is brought into the discourse.



Does code matter?

Well, I have already mentioned Codecademy’s approach and will get back to this.

Disclosure: I have been using their site to look at their lessons for 97 days, now. There are other sites of this type, but their setup is nice to use to twiddle with a little code each day. But, I have a bunch of content to worry about, too, and, unfortunately, or fortunately, the worlds of content and code are disjoint (actually, the former – truth engineering would have us get those two better related).

This month, May, in the CACM, Bertrand Meyer has a little article, “Those who say code does not matter.” He mentions that he’s going to look at the agile side, see what they have to offer, and get back to us. Who on the agile side is going to listen to the older folks, like myself? Say what?

My time using Codecademy, and playing with some stuff, has basically been for the purpose of seeing where things have gone the past half-decade (at which time, I had left Lisp/C and moved over to Python/Perl). Then, a need to focus on content came to the fore.

Is it that the youth take over software (modern programmers) due to their energy? From my observations, this has happened with each succeeding generation with a measurable period. The real trouble is that the older folks let this happen. Well, what prevents things from unfolding?

Mind you, we’ll get back to this; the whole context has to do with the genie that was let out of the bottle when IP was let out into the world (wild west style – as in, like post-Jefferson’s purchase, a whole lot of area opened up for the taking, irrespective of the occupants at the time). Sure, many billionaires (millionaires) have ensued. Does that outweigh the negative impacts (to be defined — SAT solvers, and more)?

I will say that arguments about a lack of people who can do the work (required by the new paradigms, such as webbiness or cloud’dness) are plain wrong. There are lots of older programmers about who can be trained. BTW, I was into objects in the very early stages and worked thusly for many moons. … It’s more a cultural thing (but how can one argue with the monied? as in, if you’re so smart, why aren’t you rich? — near-zero, people, near-zero as the reality), as USA Today reported: the silicon’d valley being a testosterone pot of mostly white males. I say, too, this: look around and you will see all sorts of havoc wrought by these ones (even if they have big pockets thereby) due to several lacks (which I am prepared to discuss, maturely and rationally).

Anyway, back to code. The more who know, and are familiar with, code, the better. Code as a lingua franca? Well, no (but the point is arguable). I did know of someone who, in the ’70s, was allowed to use FORTRAN (what’s that, the young folks ask?) as a foreign language at a college. So, there’s nothing new there.

Put it this way: just as mathematics has been given more power than is actually there (a matter of being, folks – we have been overlaid insidiously – again, to be defined further), so too is code deficient in a whole lot of areas.

Later, …

Disclosure: 50+ languages (not counting DB systems, OSs, etc.) used, in modes that contributed to oodles of projects on all sorts of platforms and under all sorts of user requirements, running the whole gamut from the early AI applications (way before agile – there, before it was cool, so to speak) to the more structured, as shown by attaining SEI/CMMI Level 5.



Miscellany

There was a flurry of activity related to updating the button handling for a while. Mostly, the time was spent getting up to speed. After that little bit of effort, and then making a decision about the approach, it was a matter of doing the work, testing, and then distributing the changes throughout the pages, as necessary (not complete, as some of the Gardner’s Beacon pages need some attention).

Now, a technological focus has its place: it ought to be regularly held in about any modern context. The question, at the core, is how deep one goes. Well, the answer ought to be however much is necessary for truth (see Formally truthful). I don’t think that a lot of modern programming cares about truth. Why should it, when the way is to pound out stuff to see if it can rake in money (naturally, this is meant metaphorically)?

In other words, we get to the cathedral/bazaar issue, which was originally proposed in the context of Linux (see Queue). But, we can use it for the larger issue: do we have structure or fluidity (agility, too)? Why do those have to be incompatible (is my question)? Just today, I raised that issue in the other blog (Massachusetts Magazine). We will dance around the subject more but, for now, consider that there are truths that are more than the transient type related to computationalism.

As said earlier, this blog is a learning vehicle for establishing WordPress on the TGS site in whatever way it can be used (at least, as the official blogging device). However, it may continue after that transition just to discuss, more fully, some fairly important issues related to computing and its foibles. I read the other day that even hard problems are being tamed through statistical means. One has to ask whether the whole accumulation of computational experience from its beginning (reminder, a mere half-century ago) to some longer-term future point will really be sufficient to be called “knowledge” (my put: yes and no).

A lot of time, of late, has gone into organizing processes and documenting such. Why? Various reasons that will become apparent, at some point. Right now, the emphasis seems to be on reviewing activity and the benefits thereof, which does deal with things beyond the question: is it making money? That is a bazaar question. Sustainability, and its issues, must consider the cathedral and such (to be discussed).

The Cathedral and the Bazaar (Wikipedia page) — written about development approaches, say the dichotomous views of top-down and bottom-up. Let me just say that I’m going to talk middle-out as the scheme that we see having success. After all, neither of those two views is sustainable in a real situation.

But, we can take those two (cathedral and bazaar – note lower case) and apply them to all sorts of situations, which is what I am doing here, even though I used systems (software) examples. We could, eventually, think of a better pairing, as juxtapositions such as this are everywhere visible to the observant.

We’ll use Eric’s site (here is a good starting page). On a quick read, we have to note that he is talking about the type of code that is far removed from user content (to be discussed). There is a larger picture to consider; computing is for a purpose. In dealing with domain issues, users need direct involvement; in that sense, they ought to co-develop (so tools, and an understanding of the technical issues, come into play).

There is no domain, of note, that is solely bottom-up (so remember, middle-out – it’s my duty to describe this further). In fact, the top-down (theoretical) view rules, in many cases – unfortunately so, since that view does inhibit (as in, you’re stupid for even thinking such a thing – ah, what hubris we see everywhere!). One problem is that domains have left things to the hackers, even in mathematics.

What is different about this old guy’s view? Well, he is over 70 and can still handle software. He has touched a whole gamut of languages and approaches and platforms (enthralled with the new little toys? well, only in the sense that I foresaw those way back in the ’60s – it’s unfortunate that we seem to have to relearn a whole bunch of stuff – well, generations do come and go – what makes sense to one does not to the other – but, there are universals, even in computing – which generation will bring those out?).

I am always happy to look at the new books and note that there is nothing new under the sun. Now, in that context, the cathedral, at least, offers some semblance of continuity (let me take you to old structures from the 1100s, let’s say). There are universals in computing. How do we lift these to consciousness and allow some agreement that is beyond the generational rifts?