
HELP – #VSCP UX Dreams #IoT #M2M


If there was time…

… then I would have spent more time on UX components for VSCP. YES, the infrastructure is there. We were among the first with a websocket interface for hooking up UX components. We have the REST interface and the tcp/ip interface. I would dare to say there is no system/framework available today that makes it easier to visualize results from the sensors out there, and to interact with them, in such an easy way. A few lines of code is all that is needed. Some demos are here (User: admin, Password: secret).
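To give a feeling for the "few lines of code", here is a minimal sketch of consuming VSCP events over the daemon's websocket interface. The host/port, the `E;` event-frame prefix and the field order below are illustrative assumptions only; check the actual VSCP websocket protocol documentation for the real frame format.

```javascript
// Sketch only: the "E;head,class,type,..." frame layout below is an
// assumption made for illustration, not the documented VSCP protocol.
function parseEventFrame(frame) {
  if (!frame.startsWith("E;")) return null;   // not an event frame
  const fields = frame.slice(2).split(",");
  return {
    head:      parseInt(fields[0], 10),
    vscpClass: parseInt(fields[1], 10),
    vscpType:  parseInt(fields[2], 10),
    data:      fields.slice(7).map(Number)    // remaining fields: payload bytes
  };
}

// Wiring it to a live daemon (hypothetical URL and widget function):
// const ws = new WebSocket("ws://demo.vscp.org:8884/ws1");
// ws.onmessage = (msg) => {
//   const ev = parseEventFrame(msg.data);
//   if (ev) updateLampWidget(ev);            // your widget code here
// };
```

A widget then only needs to filter on class/type and redraw itself, which is the whole point of the interface.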

But we (the VSCP world) don't have users with the skill or interest to take this further. That users would do so was my hope when I once set this up: that others, interested in user interface design, would take over the job. That is needed, because I am and will always be a low-end developer doing drivers and other nitty-gritty low-level things. You need other skills for UX design than I have. It is a waste if I try to do this. Well, it was a waste that I did the Javascript libraries. But someone had to do it, so I did it.

I was disappointed, of course. This could have been great and a lot of fun too, giving lots of credit to the ones who worked on it. Probably money too.

I have asked for help so many times, and I do so one last time. So…

[Screenshot from 2016-03-10 10:42:01]

Well, there are plenty of things to do. No need to list them really, as it is apparent that most UX components are missing. I will still try to come up with a list, in order of importance, of what I would have liked to see realized.

1.) Samples, samples, samples and more samples using the Javascript library, the websocket interface and the REST interface. Just to show what is possible and as a tool for others to build upon.

2.) More HTML5 websocket widgets.

3.) I am not a big fan of OpenHAB, "the big elephant", but one thing I have always liked about it is the user interface component they came up with early on. The demo site is apparently down at the moment, but here are some screenshots:

[Image: openhab-demo-running]

[Image: screenshot_openhab_windows]

In my world this is typically something defined in an XML file and read by an app, which puts up the interface. The XML file can be fetched from a VSCP daemon that is discovered through its multicast interface, or be pointed out by the user. The XML file can be served by the web server, or through the websocket interface from a variable it is stored in.

Inside the XML file there is a hierarchy of pages, each page consisting of one or several lines.

Each line is an HTML5 element. Looking at the switch above,

[Screenshot from 2016-03-10 11:03:15]

a line can be a single line (others can be multiline or a page), and in this case it consists of a widget to the left (the lamp), text in the middle (which is also a widget) and the statebutton to the right. So in this case the text may be static, but the lamp widget shows the actual dynamic state of a lamp or similar, and the switch actually toggles that state.

[Screenshot from 2016-03-10 11:07:02]

A line like this has the same general components, but the arrow just brings up a new page.

[Screenshot from 2016-03-10 11:08:27]

And a line like this has the same components. Here the temperature is shown dynamically, but the iconic thermometer to the right could be used to show alarms etc., or that could be done in the text.
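To make the idea concrete, here is a sketch of what such an XML page definition could look like, covering the three line types above. Every element and attribute name below is invented for illustration; no such schema has been specified.

```xml
<!-- Illustrative only: all element/attribute names are made up -->
<ui>
  <page id="main" title="Ground floor">
    <line type="single">
      <widget type="lamp" state-from="class=20,type=9"/>   <!-- dynamic state -->
      <text>Ceiling light, hallway</text>                  <!-- static text -->
      <widget type="statebutton" toggles="zone=1,subzone=2"/>
    </line>
    <line type="single">
      <text>Bedrooms</text>
      <link page="bedrooms"/>                              <!-- the arrow: opens a page -->
    </line>
    <line type="single">
      <widget type="value" unit="C" from="class=10,type=6"/>
      <text>Outdoor temperature</text>
      <widget type="thermometer" alarm-from="class=1,type=2"/>
    </line>
  </page>
  <page id="bedrooms" title="Bedrooms">
    <!-- more lines here -->
  </page>
</ui>
```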

Not a big thing really to implement on an Android, iPhone or desktop machine, but something that takes time and needs a feeling for UI design to be usable.

They have a new version now, which is demoed here and which dynamically adapts to the screen size.

4.) I have a quite similar project, Merlin, where you drag an HTML5 component onto a page and thereby add active UI components to that page. It looks like I will never get the time to finish it.

[Image: floorplan_button_example]

Live here (User: admin, Password: secret).

A typical example is shown here, with active buttons placed on a floor plan of a house. Right-click on a component to set it up. When done, generate the web page.

5.) The MDF (Module Description File) of VSCP defines a module and tells how it can be configured. Nowadays the MDF also makes it possible to set up wizards that help the user carry out specific setup tasks step by step, in a generic way.

As all VSCP modules carry information about this MDF, they all have the ability to tell the world how they work, what they can do and how they should be set up to do just that. Yes, gatekeepers prevent everyone from messing around with a module, but that is another issue which we leave out here.

An iBeacon or an EddyStone/Physical Web capable machine can send out the MDF URL for the world to see. So can an NFC-equipped device when tapped. A machine that sees this learns that a device with certain capabilities is available, and can from there open up a UI that allows the end user to discover the unit's capabilities and also configure it to do the things it is designed to do. This could be limited to just getting information or status from the node, or go all the way and allow editing its configuration.

To understand this it is important to know that VSCP, with its register abstraction model and the MDF, allows for something unique: one single application can configure all functionality.
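As a sketch of what that one application could do: turn MDF-style register descriptions into a generic form model that any front end can render. The register objects and field names below only mimic the kind of information an MDF provides (name, page, offset, access, description); the real MDF schema differs, so treat this as an illustration of the idea, not the format.

```javascript
// Hypothetical, MDF-like register descriptions for some module.
const mdfRegisters = [
  { name: "zone",     page: 0, offset: 0,  access: "rw", description: "Zone for the module" },
  { name: "subzone",  page: 0, offset: 1,  access: "rw", description: "Subzone for the module" },
  { name: "firmware", page: 0, offset: 94, access: "r",  description: "Firmware version" }
];

// Build a generic form model: writable registers become input fields,
// read-only registers become status rows. A phone/tablet/desktop front
// end can render this without knowing anything module-specific.
function buildFormModel(registers) {
  return registers.map(r => ({
    label:  r.name,
    help:   r.description,
    kind:   r.access.includes("w") ? "input" : "status",
    target: { page: r.page, offset: r.offset }   // where to read/write
  }));
}
```

The same loop works for any module, which is exactly the "one application configures everything" property of the register/MDF model.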

So with this UX functionality a user of some equipment can set it up in a uniform way, and will be comfortable with the setup procedure from other equipment. But it can also be used by a service technician to configure/diagnose something out in the field. Of course it would be equally possible to do this remotely when needed.

A general setup/configure/presentation interface for VSCP, that is. Target platforms: phones/tablets/desktops.

7-9999). Well, they are there too, of course. But we will take them another day…

Ake Hedman
Maintainer of VSCP
