People's Newsroom

MOBILE APP BUILDERS AND SENSORS AUGMENTATION

The authoring tool provides end-users with the capability of reusing existing content: not just taking it as a base for the augmentations, but also extracting it from external Web pages, usually third-party ones, to be injected into a new context. For example, users can take actors’ profiles at IMDB as target Web pages and augment them with a carousel of related trailers from YouTube videos when the device is in landscape orientation. Content extraction is the responsibility of a common component, available to every builder, named the External Content Extractor. This component is instantiated in the privileged context of the browser extension, which makes it possible to append extra behavior to any Web page, enabling user interactions for selecting the DOM elements of interest. It also makes it feasible to manipulate any DOM element to obtain its positioning data (in the DOM, e.g. the XPath) and to dynamically consume its content from external contexts (other Web pages that do not share the same domain name).
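As a concrete illustration, the sketch below computes the XPath of a DOM element the user taps, which is the kind of positioning data the extractor needs in order to re-locate content later; the function names are ours for illustration, not MoWA’s actual API.

```typescript
// Build an absolute XPath (e.g. /html[1]/body[1]/div[2]/span[1]) for an
// element, so its content can be extracted again from another context.
function getXPath(el: Element): string {
  const parts: string[] = [];
  for (let node: Element | null = el; node; node = node.parentElement) {
    let index = 1; // 1-based position among same-tag siblings
    for (let sib = node.previousElementSibling; sib; sib = sib.previousElementSibling) {
      if (sib.tagName === node.tagName) index++;
    }
    parts.unshift(`${node.tagName.toLowerCase()}[${index}]`);
  }
  return '/' + parts.join('/');
}

// Let the user pick an element of interest with a single tap/click.
document.addEventListener('click', (e) => {
  console.log('Selected element XPath:', getXPath(e.target as Element));
}, { once: true });
```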

In order to persist the data entered by the user and keep it across browsing sessions, we decided to use the data model supported by the XSLTForms engine. XSLTForms is a client-side implementation of XForms, whose benefits we extended by adding sub-form management, which allows us to easily support the authoring process in a wizard mode. The input values in XSLTForms are automatically bound to a data model that, at the end of the process, we can export and use as the specification of a MoWA authored application. Returning to the builders, each must be a subclass of ConfigurableComponent, which states that every builder should be capable of (at least) carrying out an authoring process, and of persisting and validating the data entered by the end-user.
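Based only on the responsibilities listed above, the contract could look roughly like this; except for checkInputParameters (discussed below), the method names are assumptions of ours.

```typescript
// Sketch of the common superclass of builders and components.
abstract class ConfigurableComponent {
  // Carry out the part of the authoring wizard this component is in charge of.
  abstract runAuthoringStep(): void;

  // Persist the entered values into the XForms-backed data model.
  abstract persist(): void;

  // Validate the received arguments, returning human-readable error messages.
  abstract checkInputParameters(params: Record<string, unknown>): string[];
}
```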

Nevertheless, there is a need for a common interface between the builders (e.g. MobileAppBuilder) and the artifacts to be authored (e.g. MobileApp), and that justifies the existence of components (e.g. MobileAppCom). In our model, components play the role of their homonym in the Decorator pattern, and each of them inherits from ConfigurableComponent. In Figure 6, all the components are the darker classes with names ending in «Com», and all the builders are the lighter ones ending in «Builder». Each builder is responsible for defining the backbone of the part of the authoring process it is in charge of; consider, for example, the MobileAppBuilder. At this point, it is enough to understand that the MobileAppBuilder is the most important builder: it orchestrates the full process and delegates tasks to more specialized builders, such as the ones in charge of configuring the context values (DimensionalSpaceBuilder) or the augmenter layers (ALayerBuilder) and their augmenters (AugmenterBuilder). At the beginning, this builder asks for the basic information of the application (e.g. its name) and then, in the following step, presents the end-user with the series of context types supported by the sensors.
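The orchestration just described might look as follows; the class names come from the text, but the signatures and the exact step ordering are assumptions.

```typescript
interface StepBuilder {
  runAuthoringStep(): void; // each builder owns one part of the wizard
}

class MobileAppBuilder implements StepBuilder {
  constructor(
    private spaceBuilder: StepBuilder, // e.g. a 2DMapBuilder for locations
    private layerBuilder: StepBuilder, // an ALayerBuilder and its AugmenterBuilders
  ) {}

  runAuthoringStep(): void {
    this.askBasicInfo();                  // basic data, e.g. the application name
    this.spaceBuilder.runAuthoringStep(); // context values of interest
    this.layerBuilder.runAuthoringStep(); // layers and their augmenters
    this.defineContextRules();            // subscribe layers to sensors
  }

  private askBasicInfo(): void { /* prompt for the application name */ }
  private defineContextRules(): void { /* match every layer with a sensor */ }
}
```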

Consider a scenario where location was selected as one of the target context types and the GPSSensor as the only sensor to use. The application can delegate part of the authoring process to any DimensionalSpaceBuilder subclass (e.g. 2DMapBuilder) in order to set the context values of interest for an AugmentationLayer of the MobileApp. Following the example, the user is presented with a map on which he can define markers matching concrete coordinates. Another part of the process is delegated to the ALayerBuilder, which does the same with its AugmenterBuilders. When configuring the layer and its augmenters, since the previous steps already gathered all the required data, it is possible to run the application and preview the augmenters while configuring them. The last step requested by the MobileAppBuilder requires the definition of a context rule, which is materialized as the subscription of the authored layer to a concrete sensor. In each step, builders persist the data entered by the end-user, so at the end of the process it is possible not only to run the application but also to export the full data model.
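That subscription can be read as a plain Observer relationship; the sketch below is an assumption about the shape of such code, with only the class names taken from the text.

```typescript
// The authored layer observes the sensor; notifying it is what later
// triggers the augmentation at execution time.
interface AugmentationLayer {
  execute(contextValue: unknown): void; // runs the layer's augmenters
}

class GPSSensor {
  private observers: AugmentationLayer[] = [];

  subscribe(layer: AugmentationLayer): void {
    this.observers.push(layer);
  }

  // Called whenever the device reports a new position.
  protected notify(position: { lat: number; lng: number }): void {
    this.observers.forEach((layer) => layer.execute(position));
  }
}
```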

As enabling user interaction with the authored artifacts may entail errors at execution time, MoWA Authoring components also support mechanisms for displaying extra messages to the end-user when an authored object is misconfigured: for example, if a map contains a wrong coordinate value, if the link of the image selected as a 2D floor plan is broken, or if an augmenter lacks a required parameter value. Achieving this requires that any ConfigurableComponent be capable of checking the arguments it receives and of displaying proper and useful messages to the producer. The class has an abstract method intended for that end, checkInputParameters, which is executed before saving the configuration of a component (in authoring mode) and before execution time (in authoring mode or in regular execution mode with the weaver).
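As an illustration, a map-related component could implement checkInputParameters along these lines; the parameter names and messages are ours, and detecting a broken image link would additionally require an HTTP probe, omitted here.

```typescript
// Hypothetical validation for a map augmenter, covering the error cases
// mentioned above: wrong coordinates and a missing floor plan image.
class MapAugmenterCom {
  checkInputParameters(params: { lat?: number; lng?: number; floorPlanUrl?: string }): string[] {
    const errors: string[] = [];
    if (params.lat === undefined || params.lat < -90 || params.lat > 90) {
      errors.push('Latitude is missing or out of range.');
    }
    if (params.lng === undefined || params.lng < -180 || params.lng > 180) {
      errors.push('Longitude is missing or out of range.');
    }
    if (!params.floorPlanUrl) {
      errors.push('A 2D floor plan image URL is required.');
    }
    return errors; // shown to the producer before saving or executing
  }
}
```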

Broadly speaking, we introduced new kinds of interactions to the framework components, like the ones required for managing the dimensional spaces. For instance, in the case of 2D floor plans, we used Leaflet Maps (in conjunction with OpenStreetMap), which provides a complete, lightweight API for interacting with maps from both mobile and desktop environments. Thus, we support two interaction modalities with maps: data visualization and authoring mode. In the first case, augmenters can use the map for displaying the positions of the user and some PoIs, while in the second case it allows end-users, through a concrete builder, to create, move, delete and connect markers on a map, and also to configure the map’s zoom through touch events.
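The two modalities can be sketched against Leaflet’s actual API as follows; the authoringMode flag and the delete gesture are illustrative choices of ours, not necessarily the ones MoWA uses.

```typescript
import * as L from 'leaflet';

// Standard Leaflet + OpenStreetMap setup.
const map = L.map('map').setView([-34.92, -57.95], 15);
L.tileLayer('https://tile.openstreetmap.org/{z}/{x}/{y}.png', {
  attribution: '© OpenStreetMap contributors',
}).addTo(map);

// false: visualization only (user position and PoIs are rendered by augmenters).
let authoringMode = true;

// In authoring mode, a tap creates a draggable marker the end-user can
// reposition later; a long-press/right-click removes it.
map.on('click', (e: L.LeafletMouseEvent) => {
  if (!authoringMode) return;
  const marker = L.marker(e.latlng, { draggable: true }).addTo(map);
  marker.on('contextmenu', () => map.removeLayer(marker));
});
```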

It is worth mentioning that there are no restrictions on the functionality with which the builders can be extended. It is possible to contemplate manual mechanisms, where the user explicitly inputs the information, but also more sophisticated ones that autocomplete such data. For instance, to facilitate the in-situ development modality, we added a mechanism for adding a marker at the user’s current position. When the user needs to configure some Location values of interest for the app, he can:

  • insert and position a marker at his current position by tapping a special button with a positioning icon in the interface (see the sketch after this list); or
  • hold on the map to insert the marker, and then drag it to the desired position.

The addition of such functionality also facilitates the validation of functional requirements, since the person who sets out the requirements for the application is the same one who builds it, under the same context in which it will be used.
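A minimal sketch of the first option, assuming a Leaflet map and the standard Geolocation API; the function name and the draggable choice are ours, not MoWA’s actual code.

```typescript
// Hypothetical sketch: drop a draggable marker at the user's current
// position when the positioning button is tapped (in-situ authoring).
import * as L from 'leaflet';

function addMarkerAtCurrentPosition(map: L.Map): void {
  navigator.geolocation.getCurrentPosition((pos) => {
    const here = L.latLng(pos.coords.latitude, pos.coords.longitude);
    // Draggable, so the user can fine-tune the point afterwards.
    L.marker(here, { draggable: true }).addTo(map);
    map.panTo(here);
  });
}
```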

Finally, we also adopted a definition of language bundles for each building artifact for internationalization purposes, so the MoWA engine is able to provide the authoring experience according to the user’s preferences or the browser’s language. We provide language bundles for Spanish, French, and English, which allowed us to invite a broad spectrum of participants to our experiment; in fact, we conducted the experiment in the facilities of our laboratory in Argentina, and two of the participants had a mother tongue other than Spanish. Nevertheless, they opted to create their applications in Spanish.
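A minimal sketch of a bundle lookup with that fallback order (user preference, then browser language, then English); the bundle shape is an assumption, not MoWA’s actual format.

```typescript
type Bundle = Record<string, string>;

// Illustrative bundles for the three supported languages.
const bundles: Record<string, Bundle> = {
  en: { appName: 'Application name' },
  es: { appName: 'Nombre de la aplicación' },
  fr: { appName: "Nom de l'application" },
};

function t(key: string, preferred?: string): string {
  const lang = preferred ?? navigator.language.slice(0, 2);
  const bundle = bundles[lang] ?? bundles.en; // fall back to English
  return bundle[key] ?? bundles.en[key];
}
```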

THE OVERALL CONTROL FLOW

For supporting the authoring process of a mobile Web application (the create Mobile Web app use case in the figure), we started by defining it as a series of stages.

These stages are:

  • Setting the application base data. Here, the tool asks the user for the base information of the application, like a name, a namespace, a filename, etc. The builder is in charge of getting the values that the end user should provide; in this case, just the application name. The rest of the required data is transparent to the end user.
  • Selecting context type(s). Here, the tool presents the end user with some context types, so he can choose among them and use them in his application. Examples of context types are the user’s location, the device’s orientation, the noise level, the light level or the time.
  • Selecting context sensor(s). Then, for each selected context type, the user is offered a set of available sensors for listening for its changes. For example, both a GPS sensor and a QR sensor could be in charge of sensing the user’s location; a Lux sensor can notify about changes in the level of light perceived by the mobile device; a dB sensor can track changes in the ambient noise level.
  • Defining context values of interest for the application. Every sensor notifies changes of a context type to the subscribed augmenter layers (to be defined in the next stage), but in order for such layers to use the sensed context values, the application needs to know which values are representative for its purposes. For example, as the end user is building a purely mobile application, the application essentially needs to know a set of locations for triggering the augmentations. Such locations are represented as Points of Interest, and they have some optional properties that can be specified; for example, external content related to every point of interest, or the specification of a navigability order through the set of PoIs. In the same way, an application subscribed to a Light sensor needs to know the bounding values that represent a significant change in the light level, and optionally some associated data can be defined for every light level, such as a default description of it. Therefore, every application is capable of showing a set of configurations according to the selected sensors.
  • Creating augmentation layers. At this stage of the process, our tool asks the user to define a layer and a set of configured augmenters for each Web page to augment. Concerning the first issue, there are two options: the first lets the end user define a pattern that will be evaluated against the current URL in his browser, and the second lets him select a concrete URL to open when a sensor notifies a layer to be executed. Concerning the augmenters, we provide the producer with a set of those artefacts according to the sensors he has chosen; we suggest them based on a simple tagging mechanism, defined as metadata in the augmenter’s class file. Augmenters are defined in the context of a layer, and the producer can add as many as he wants. Each augmenter needs some input values to be properly executed. We provide the end user with three alternatives for defining such parameters’ values:
    • he can reuse the defined data related to a concrete context value (e.g. the PoI name) by accessing the data model and selecting a property;
    • he can manually input such data in the form; or
    • he can use an assistant for retrieving external content.
  • Defining context rules. Creating context rules is a transparent process for the end user, who simply must set the augmentation layers he has created as concrete observers of at least one of the selected sensors. The user is presented with the list of selected sensors and the set of augmentation layers, and he needs to specify which layers will be executed when a change in a concrete sensor happens. For instance, if he is using a GPS location sensor and a Light level sensor, and he wants to execute different augmentations for each of them, he should create two augmentation layers with the desired augmenters and match every layer with the corresponding sensor. Regarding the context rule composition, the event is represented by the sensor changes; the condition is the comparison of the sensed context value against the ones defined in stage 4; the action is the execution of a concrete augmentation layer, as sketched after this list.
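The rule composition can be summarized in code. The sketch below is illustrative only: the interfaces are ours, and the matching predicate stands in for whatever comparison each sensor type requires.

```typescript
// Event-condition-action: the sensor change is the event, the comparison
// against the values of interest (stage 4) is the condition, and running
// an augmentation layer (stage 5) is the action.
interface ContextRule<V> {
  valuesOfInterest: V[];                      // defined in stage 4
  matches(sensed: V, ofInterest: V): boolean; // sensor-specific condition
  layer: { execute(value: V): void };         // layer defined in stage 5
}

function onSensorChange<V>(rule: ContextRule<V>, sensed: V): void {
  const hit = rule.valuesOfInterest.find((v) => rule.matches(sensed, v));
  if (hit !== undefined) rule.layer.execute(hit); // action fires only on a match
}
```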

Concerning the demographic and device data and their relation with the level of completed tasks, we could observe that the highest values of standard deviation among the averages representing each category are related to the device platform (13.41%), the participants’ level of studies (37.42%) and their area of interest (27.3%). This means there was a greater degree of dispersion from the average value of those categories. For example, while degree students (even those who abandoned their studies) and degree and post-degree graduates achieved a level greater than 80%, the high school graduates’ category reached 26.11% and the Ph.D. students’ category 61.11%. However, drawing conclusions from these numbers might not be appropriate, since four categories were represented by a single participant each (including the two with the lowest results). In the case of the participants’ areas of interest, we can also observe that the ones belonging to Hard Sciences, Social Sciences, Natural Sciences, and Arts completed more than 80% of the process, while the one from Economics completed 50% and the one with no interests achieved 26.11%; here, a single participant represented each of these last two categories. Concerning the devices used, lower values are related to the Pro Light (63.06%) and Moto E (73.56%) devices. In the first case, we could not observe any striking differences from other platforms. In the second case, however, we observed that participants had a problem when sensing the QR codes, and the reason was that this phone model had a camera with a fixed focus. This could influence both the amount of work done and the participants’ motivation. The lack of ability to focus made the task of scanning QR codes hard, because at a close distance the captures were blurry and could not be properly interpreted by the decoding library.

Finally, we could also observe some complications related to the use of some UI elements. First, we used the XForms repeat object to create and list the properties of a Point of Interest, and many participants had trouble interpreting this layout on a small screen. A second problem was related to the mismatch between the interaction users expected and the one actually available. Concretely, we identified a problem in the interaction for inserting a new augmenter into the target Web page. As the contexts of the bar with the list of augmenters and of the Web page content were different, and also because of the processing limitations of a mobile device, it was hard to simulate a drag and drop between both contexts. Instead, we implemented the insertion with a tap on the list of augmenters; then, once the augmenter was inserted in the Web page context, the user was able to drag and drop its thumbnail, as sketched below.
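The tap-then-drag workaround could be implemented roughly as follows; the positioning logic is a simplified assumption of ours.

```typescript
// Hypothetical sketch: a tap on the augmenters bar inserts a thumbnail into
// the page context, where native drag and drop is then available.
function insertAugmenterThumbnail(augmenterName: string): void {
  const thumb = document.createElement('div');
  thumb.textContent = augmenterName;
  thumb.draggable = true; // same-context dragging, no cross-context simulation
  thumb.addEventListener('dragend', (e) => {
    thumb.style.position = 'absolute';
    thumb.style.left = `${e.pageX}px`;
    thumb.style.top = `${e.pageY}px`;
  });
  document.body.appendChild(thumb);
}
```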

Sometimes, users have needs that can be solved by combining existing Web content and services retrieved from different sources, and Mashups may represent a possible solution. CAMUS is a framework for designing mobile applications through visual composition and high-level visual abstractions. It integrates and provides resources according to different contextual situations. It involves different user roles: an administrator with technical knowledge, who registers resources in the platform and maps them to context elements; a designer, who defines how to mash up the services and visualize the information; and a final recipient of the authored application. There is no mention of the adopted programming technique, but the provided screenshot suggests a WYSIWYG approach. CAMUS is aimed at creating context-aware mobile or Web-based applications; nevertheless, the authors mention that execution engines are created as native applications for different mobile devices.

In the Mobile field, one work presents MobiDev, an Android-based development platform for creating mobile applications from mobile devices. It allows the user to create the graphical interface by writing source code, designing mock-ups with a visual editor, or drawing a sketch on paper and taking a picture, so the system will analyze and interpret it to generate a visual design. The approach contemplates end users with no programming skills developing applications with a basic control flow, but also developers defining more specific behavior through JavaScript code. The approach was evaluated with 16 students belonging to the Computer Science department, who were previously trained. The experiment was successful, but the requirements did not contemplate the use of mobile features. Another work presents a native platform for enabling end-users to compose native mobile applications from their own mobile devices, integrating the mobile features provided by the device itself but also Web services. Users specify activities through visualization components, which are executed by an interpreter that automatically creates the user interface. The approach comprises a repository that enables end-users to share their productions and reconfigure them. The research team evaluated the approach with 40 first-year Computer Science students, and it was a requirement that they have no programming skills. Concerning prior training, they gave participants a 20-minute lesson about their tool, and another 20 minutes about a similar tool against which they compared results.

Puzzle is an EUD framework for producing native but Web-based applications, targeting touch-based platforms. Users combine building blocks through a puzzle-based metaphor with color-equipped corners indicating their combinability, which makes it suitable for end users with no programming skills. The authors mention that there is no need for plugins, but they end up presenting an implementation in the form of a native Android application. Diverse multi-purpose combinations can be created, changing the application’s logic. A repository of created artifacts is available, but there is no way for users to request the construction of a concrete application from the crowd. Puzzle was evaluated with 13 participants with no IT-related jobs, who were not exposed to previous training. Another proposal addresses EUD for multi-device mashup creation with composite resources; its authors implemented a framework and a UI-centric tool using the WYSIWYG technique. Users select among the existing data components and UI templates in the repositories, and then associate data items with visual elements. Finally, a platform-independent schema is generated and saved in the platform repository, so the user can download it from the supported platforms and execute it through a native engine. Diverse multi-purpose combinations can be created, and the design environment is a Web application, but the authors recommend executing it not on mobile devices but on larger-screen ones. The experiment was conducted with a 10-minute demonstration and 36 participants, 17 of them with programming skills.

Other approaches empower end-users with the capability of creating Mobile Web applications from desktop environments. For instance, one of them allows the user to create widgets (in the sense of simple applications) that represent a specific Web interaction. These artifacts, called Tasklets, are created using Programming By Example (PBE). Users need to install a plug-in in their desktop Web browser and use it to record the sequence of steps required to perform the task. The tool saves the need for representing these steps, builds a Tasklet template, detects and defines potential parameters, and finally makes the script accessible for multiple platforms through a repository. A wide spectrum of Tasklets can be created and shared for both personal and public consumption.

Another related approach for EUD from desktop environments presents a cloud-based development platform for context-aware mobile services to be consumed as native applications. The platform is accessible through a Web-based application, where the producer can associate a set of context values (specific locations, areas, times, dates, etc.) with concrete information to be delivered to the clients meeting such conditions. The authors conducted an experiment with 10 tourism domain experts with no technical skills and no prior training, who worked from a pre-installed native application. The resultant applications are bound to the information delivery purpose.
