

Title:
SYSTEMS AND METHODS FOR GENERATING WEB CONTENT FROM DESIGN FILES
Document Type and Number:
WIPO Patent Application WO/2023/023667
Kind Code:
A1
Abstract:
The present disclosure provides systems and methods for converting design files containing static visual content into dynamic and interactive or animated visual content. The system may receive or create a design file that includes a plurality of visual content elements, such as images, textual content, graphics or icons, or combinations thereof. The system identifies a visual content element of the plurality of visual content elements as being actionable and determines an action for the actionable visual content element(s). The action may support interactivity with or animation of the actionable visual content element. The system generates actionable content based on the actionable visual content element and the one or more actions, such as code configured to execute the action and/or display the actionable content via a presentation medium, and then publishes the actionable content to the presentation medium. The code may be generated automatically based on one or more code libraries.

Inventors:
BREEN WILLIAM RYAN (US)
VERNEREY ALLISON LAURE (US)
KASSIM MAHER (US)
Application Number:
PCT/US2022/075246
Publication Date:
February 23, 2023
Filing Date:
August 20, 2022
Assignee:
ZMAGS CORP (US)
International Classes:
G06T13/00; G06Q30/02
Foreign References:
US20090158317A12009-06-18
US20140358828A12014-12-04
US20130298001A12013-11-07
US20080303826A12008-12-11
US20190197089A12019-06-27
Other References:
EMMANUEL CECCHET; ANUPAM CHANDA; SAMEH ELNIKETY; JULIE MARGUERITE; WILLY ZWAENEPOEL: "Performance comparison of middleware architectures for generating dynamic web content", Springer, Berlin, Heidelberg, 20 June 2003 (2003-06-20), pages 242-261, XP058033082, ISBN: 3540745491
Attorney, Agent or Firm:
BRAXDALE, Allan (US)
Claims:

CLAIMS

1. A system for generating and publishing interactive and/or animated content from a design file, the system comprising:
a memory storing one or more code libraries; and
one or more processors configured to:
receive a design file comprising a plurality of visual content elements, the plurality of visual content elements including image content, textual content, or both;
identify at least one visual content element of the plurality of visual content elements as an actionable visual content element;
determine one or more actions associated with the at least one actionable visual content element, the one or more actions corresponding to actions to enable interactivity with respect to the at least one actionable visual content element, actions to occur in response to interactivity with the at least one actionable visual content element, or both;
generate actionable content from the design file based on the at least one actionable visual content element and the one or more actions, wherein the actionable content comprises code configured to execute the one or more actions and code for displaying the actionable content via one or more presentation mediums, wherein the code configured to execute the one or more actions and the code for displaying the actionable content are automatically generated based on the one or more code libraries; and
publish the actionable content to at least one of the one or more presentation mediums.

2. The system of claim 1, wherein the actionable content includes a subset of the plurality of visual content elements of the design file.

3. The system of claim 1, wherein the at least one actionable visual content element comprises a carousel display associated with a plurality of images.

4. The system of claim 3, wherein the actionable content comprises the plurality of images and two icons, and wherein the one or more actions comprise rotating a set of images of the plurality of images that is displayed to a user in response to activation of one of the two icons.

5. The system of claim 1, wherein the one or more processors are configured to associate a universal resource locator (URL) corresponding to a web page with the at least one actionable visual content element, and wherein the code configured to execute the one or more actions is configured to initiate presentation of the web page corresponding to the URL in response to activation of the at least one actionable visual content element.

6. The system of claim 1, wherein the code for displaying the actionable content is configured to switch between displaying a first visual content element of the plurality of visual content elements and a second visual content element of the plurality of visual content elements.

7. The system of claim 6, wherein, when switching between the first visual content element and the second visual content element, the first visual content element is not displayed simultaneously with the second visual content element.

8. The system of claim 1, wherein the one or more actions comprise a pulse action associated with a first visual content element of the plurality of visual content elements.

9. The system of claim 8, wherein the pulse action modifies a presentation of the first visual content element.

10. The system of claim 9, wherein the code configured to execute the one or more actions is configured to loop the pulse action with respect to the first visual content element.

11. The system of claim 1, wherein the actionable content comprises a composite representation of at least one of the plurality of visual content elements of the design file, the composite representation of the at least one visual content element comprises a grid having at least two sections, each grid section comprising a portion of the at least one visual element, and wherein the code for displaying the actionable content via one or more presentation mediums is configured to control presentation of the sections of the grid such that the at least one visual element is presented in the one or more presentation mediums in a manner that replicates presentation of the at least one visual element in the design file.

12. The system of claim 11, wherein the code for displaying the actionable content via one or more presentation mediums comprises hypertext markup language (HTML) code.

13. The system of claim 12, wherein at least a portion of the code configured to execute the one or more actions is embedded within the HTML code.

14. The system of claim 1, wherein the one or more code libraries comprise pre-determined code segments configured to provide the one or more actions.

15. The system of claim 1, wherein the one or more processors are configured to display a graphical user interface (GUI) to a user, and wherein the GUI provides functionality for modifying one or more visual content elements of the design file.

16. The system of claim 15, wherein the GUI provides functionality for configuring parameters of the one or more actions.

17. The system of claim 16, wherein the parameters include a parameter configured to control looping of an action, and wherein a first value of the parameter indicates the action should be looped and a second value of the parameter indicates the action is not to be looped.

18. The system of claim 1, wherein the one or more presentation mediums comprise a web page, an application, a social media platform, or a combination thereof.

Description:
SYSTEMS AND METHODS FOR GENERATING WEB CONTENT FROM DESIGN FILES

PRIORITY

[0001] The present application claims the benefit of U.S. Provisional Patent Application No. 63/235,630, entitled “SYSTEMS AND METHODS FOR GENERATING WEB CONTENT FROM DESIGN FILES” and filed August 20, 2021, the content of which is incorporated herein by reference in its entirety.

TECHNICAL FIELD

[0002] The present disclosure generally relates to systems for creating web content and more specifically, to systems and methods for dynamically transforming design files into interactive and/or animated media.

BACKGROUND

[0003] Access to the Internet has become widespread and, with the proliferation of Internet-capable mobile devices, the Internet has become accessible from almost anywhere. As a result, use of the Internet has become deeply integrated into many aspects of daily life. For example, the Internet is one of the primary mediums for communication, such as e-mail communications, voice communications (e.g., voice calls over Wi-Fi, voice over internet protocol (VoIP), etc.), and other forms of communication. The Internet also plays a vital role in how people buy goods and services (e.g., e-commerce), distribute and consume information (e.g., blogs, online news sites, and the like), and access entertainment (e.g., streaming music, movies, and the like). As a result of the increased accessibility, ease of use, and robust capabilities, the Internet has developed into a competitive landscape in which the most valuable resource is the Internet user. To compete for the attention of Internet users, entities often utilize designers (e.g., graphics designers or other artists) to design appealing content for display to the users. The designers utilize specialized software programs, such as Canva, to create visually appealing displays of text, images, and other forms of information. These software programs output a design file that contains all of the visual content created by the designer, but such content is not readily adaptable to other mediums. For example, a single design file may include visual content for one or more pages of a website displayed in a single viewable space.

[0004] To create a more engaging experience for the users, entities provide the design file to a web developer who is tasked with transforming the designers’ work into engaging web content. In particular, the web developer recreates various pieces of the visual content included in the design file in a format suitable for presentation as a web page and builds hypertext markup language (HTML) and other forms of code (e.g., JavaScript) to support the functionality of the web page or website being built. The need to use both a designer and a developer can slow down or delay deployment of web content, and may even result in a loss of some of the creative expression of the designers’ work (e.g., based on the recreation of the visual content by the web developer). Additionally, existing techniques for generating web content may also result in decreased performance when loading the web content due to the manner in which the web developers reproduce the content of the design file.

SUMMARY

[0005] The present disclosure provides systems and methods for converting design files containing static visual content into dynamic and interactive or animated visual content. The system may receive or create a design file that includes a plurality of visual content elements, such as images, textual content, graphics or icons, or combinations thereof. The system identifies a visual content element of the plurality of visual content elements of the design file as being actionable, such as visual elements that are designed to be clicked on by a user or animated. The actionable visual content elements may be identified based on graphics or shapes (e.g., identifying shapes or graphics representing buttons or other common user interface elements) or may be designated by a user.

[0006] Once the actionable visual elements are identified, the system determines one or more actions for the actionable visual content elements. The actions may support interactivity with the visual elements, such as scrolling through a carousel display, clicking on a button, etc., or animation of the actionable visual content element, such as pulsing a visual content element, automatically rotating which visual content elements are presented to the user (i.e., without user interaction with the elements), etc. The system generates actionable content based on the actionable visual content element and the one or more actions. For example, the actionable content may include a representation of the visual elements to be displayed to a user and code. The code may include code configured to execute the action(s), such as JavaScript or other types of code. The code may also include code configured to control display of the actionable visual content via a presentation medium, such as HTML for controlling how the visual content is displayed in a web page. In embodiments, the code is generated automatically based on one or more code libraries and may leverage pre-defined functions or pieces of code when generating code to support actions, animations, and the like. Once the actionable content is generated, the system publishes the actionable content to a desired presentation medium.
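The library-driven code generation described above can be sketched in JavaScript as a set of templates that are filled in with per-element parameters. All names here (the library entries, element IDs, and parameter fields) are illustrative assumptions, not details taken from the disclosure:

```javascript
// Hypothetical code library: each entry is a template producing the code
// snippet for one kind of action (a looping "pulse" animation and a click-
// through link are shown). The generated strings target the standard Web
// Animations API and DOM event model.
const actionLibrary = {
  pulse: ({ id, scale, durationMs, loop }) =>
    `document.getElementById("${id}").animate(` +
    `[{ transform: "scale(1)" }, { transform: "scale(${scale})" }, { transform: "scale(1)" }], ` +
    `{ duration: ${durationMs}, iterations: ${loop ? "Infinity" : 1} });`,
  link: ({ id, url }) =>
    `document.getElementById("${id}").addEventListener("click", ` +
    `() => window.open("${url}", "_blank"));`,
};

// Generate the action code for a list of actionable elements identified
// in a design file, concatenating one snippet per element.
function generateActionCode(elements) {
  return elements
    .map((el) => actionLibrary[el.action](el.params))
    .join("\n");
}

const code = generateActionCode([
  { action: "pulse", params: { id: "hero-badge", scale: 1.1, durationMs: 800, loop: true } },
  { action: "link", params: { id: "shop-button", url: "https://example.com/shop" } },
]);
console.log(code);
```

The emitted strings would then be embedded in (or referenced from) the published page, alongside the HTML that lays out the visual content.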

[0007] The foregoing has outlined rather broadly the features and technical advantages of the present invention in order that the detailed description of the invention that follows may be better understood. Additional features and advantages of the invention will be described hereinafter which form the subject of the claims of the invention. It should be appreciated by those skilled in the art that the conception and specific embodiment disclosed may be readily utilized as a basis for modifying or designing other structures for carrying out the same purposes of the present invention. It should also be realized by those skilled in the art that such equivalent constructions do not depart from the spirit and scope of the invention as set forth in the appended claims. The novel features which are believed to be characteristic of the invention, both as to its organization and method of operation, together with further objects and advantages will be better understood from the following description when considered in connection with the accompanying figures. It is to be expressly understood, however, that each of the figures is provided for the purpose of illustration and description only and is not intended as a definition of the limits of the present invention.

BRIEF DESCRIPTION OF THE DRAWINGS

[0008] For a more complete understanding of the disclosed methods and apparatuses, reference should be made to the implementations illustrated in greater detail in the accompanying drawings, wherein:

[0009] FIG. 1 is a block diagram of an exemplary system for dynamically transforming design files into interactive web content according to aspects of the present disclosure;

[0010] FIG. 2 is a block diagram illustrating an exemplary graphical user interface supporting conversion of design files into interactive visual content in accordance with the present disclosure;

[0011] FIG. 3 is an exemplary design file that may be converted to interactive content in accordance with the present disclosure;

[0012] FIG. 4 is a block diagram illustrating aspects of generating interactive content from a design file in accordance with the present disclosure;

[0013] FIG. 5 is a screenshot of an exemplary web page generated in accordance with aspects of the present disclosure;

[0014] FIG. 6 is a block diagram illustrating additional experience and interactivity controls for generating content from a design file in accordance with the present disclosure;

[0015] FIG. 7 is another block diagram illustrating additional experience and interactivity controls for generating content from a design file in accordance with the present disclosure;

[0016] FIG. 8A is a block diagram illustrating aspects of converting a design file into an interactive display;

[0017] FIG. 8B is another block diagram illustrating aspects of converting a design file into an interactive display;

[0018] FIG. 9 is a block diagram illustrating layers of visual content in design files in accordance with the present disclosure;

[0019] FIG. 10 is a block diagram of an exemplary web page generated in accordance with the present disclosure; and

[0020] FIG. 11 is a flow diagram of an exemplary method for dynamically transforming design files into interactive web content according to aspects of the present disclosure.

[0021] It should be understood that the drawings are not necessarily to scale and that the disclosed embodiments are sometimes illustrated diagrammatically and in partial views. In certain instances, details which are not necessary for an understanding of the disclosed methods and apparatuses or which render other details difficult to perceive may have been omitted. It should be understood, of course, that this disclosure is not limited to the particular embodiments illustrated herein.

DETAILED DESCRIPTION

[0022] Referring to FIG. 1, a block diagram of an exemplary system for dynamically transforming design files into interactive web content according to aspects of the present disclosure is shown as a system 100. The system 100 is configured to support operations for creating design files and converting the design files into interactive content that may be presented to a user, such as interactive content presented on a web page. During the conversion process one or more types of code may be generated to facilitate, at least in part, the interactivity of the content generated through the conversion process, as described in more detail below. Using the conversion processes and techniques for generating interactive content provided by the system 100 enables interactive content to be generated more rapidly from design files, without requiring recreation of the visual content of the design file, and without requiring the use of a programmer (e.g., a web developer, etc.).

[0023] As shown in FIG. 1, the system 100 includes a content generation device 102, one or more user devices 140, and one or more designer devices 150. The designer device(s) 150 includes one or more processors 152, and a memory 154. It is noted that the designer device(s) 150 may also include other components and devices, such as a display device, input/output (I/O) devices (e.g., keyboard, mouse, stylus, scanner, universal serial bus (USB) ports, and the like), or other types of hardware and devices to support operations for creating a design file. As explained above, the design file includes visual content that depicts a designer's vision of content to be deployed to the Internet, such as a design for a website or one or more web pages of the website, or some other type of visual content. An exemplary design file for generating interactive content in accordance with the present disclosure is described below with reference to FIG. 3.

[0024] Once generated by the designer device 150, the design file may be provided to the content generation device 102 where it is processed using automated and/or semiautomated functionality to transform or convert the design file into interactive content suitable for deployment to the Internet or other mediums. For example, as described in more detail below, the functionality provided by the content generation device 102 may be configured to generate HTML, JavaScript, or other forms of code that convert the content of the design file into an interactive Web page.

[0025] As shown in FIG. 1, the content generation device 102 includes one or more processors 104, a memory 106, and one or more interfaces 130. The one or more processors 104 may include one or more microcontrollers, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), central processing units (CPUs) having one or more processing cores, or other circuitry and logic configured to facilitate the operations of the content generation device 102 in accordance with aspects of the present disclosure. The memory 106 may include random access memory (RAM) devices, read only memory (ROM) devices, erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), one or more hard disk drives (HDDs), one or more solid state drives (SSDs), flash memory devices, network attached storage (NAS) devices, or other memory devices configured to store data in a persistent or non-persistent state. Software configured to facilitate operations and functionality of the content generation device 102 may be stored in the memory 106 as instructions 108 that, when executed by the one or more processors 104, cause the one or more processors 104 to perform the operations described herein with respect to the content generation device 102, as described in more detail below. Additionally, the memory 106 may be configured to store one or more databases, such as databases storing code libraries (e.g., libraries of executable code or code that can be compiled into executable code) for routines supporting operations of the content generation device 102 or other types of information.

[0026] Additionally, the memory 106 may be configured to store one or more design files 110, such as design files received from the designer device(s) 150. In some aspects, the content generation device 102 may also provide functionality similar to the designer devices 150 that may be used to create design files for use in generating interactive content in accordance with the concepts described herein. As shown in FIG. 1, the design file(s) 110 may include visual content 112, such as text, images, icons, and the like, as well as metadata 114. The metadata 114 may include information about various elements of the visual content 112 of the design file 110, such as information describing portions of the visual content 112 corresponding to images (e.g., image name, image size, etc.), interactive user interface (UI) elements (e.g., buttons, check boxes, radio buttons, dropdown menus, and the like), layer information, text (e.g., strings of alphanumeric characters, font, font size, font style, etc.), or other metadata information.
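To make the metadata 114 concrete, a design file's metadata might resemble the following structure, with a small helper that groups the described elements by type, as a content-extraction step might. The field names and layer shape here are purely illustrative assumptions, not a format defined by the disclosure:

```javascript
// Hypothetical metadata for a design file: one entry per layer, each
// describing an image, a piece of text, or a UI element.
const designFileMetadata = {
  layers: [
    { name: "background", type: "image", imageName: "hero.png", width: 1440, height: 600 },
    { name: "headline", type: "text", text: "New Arrivals", font: "Helvetica", fontSize: 32 },
    { name: "cta", type: "ui", uiKind: "button", label: "Shop Now" },
  ],
};

// Group element names by element type so later stages can look up, e.g.,
// all textual elements or all UI elements at once.
function elementsByType(metadata) {
  const groups = {};
  for (const layer of metadata.layers) {
    (groups[layer.type] ??= []).push(layer.name);
  }
  return groups;
}

console.log(elementsByType(designFileMetadata));
// e.g. { image: ["background"], text: ["headline"], ui: ["cta"] }
```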

[0027] As a non-limiting and illustrative example, and referring to FIG. 3, a screenshot showing an exemplary design file for generating interactive content in accordance with the present disclosure is shown as a design file 300. As shown in FIG. 3, the design file 300 includes a plurality of visual content element regions 302, 304, 306, 308, 310, 312, 314, 316, 318, and 320. Some of the visual content element regions include textual visual elements 303’, 305’, 307’, 309’, 311’, 313’. Other textual visual elements are also shown in FIG. 3 but are not explicitly labeled for clarity of the drawing. Additionally, the plurality of visual content regions include image content, such as images 313”, 315’, 317’, as non-limiting examples. Other image content is also shown in FIG. 3 but not explicitly labeled for clarity of the drawing. In addition to textual visual elements and image content, the design file 300 also includes icons, such as icons 311” and 317”. Other icons are also shown in FIG. 3 but are not explicitly labeled for clarity of the drawing. As will become clearer in the description below, the visual content in each of the plurality of visual content regions may correspond to visual content for a design of a web page, but the design file is not readily transferable to a format suitable for deployment as a web page. It is noted that the design file 300 may also include other elements, such as background content (e.g., visual content element regions 302, 306, 308, 310, 312, 318, and 320 may have image-based backgrounds while visual content element regions 304, 314, 316 have color-based backgrounds). In some aspects, the visual content within one or more of the visual content element regions may involve layering different visual elements, such as to have a background layer defining the visual content of the background, another layer including image information, and another layer including textual content. These layers may be overlaid on top of each other to create the overall visual effect of each visual content element region. Furthermore, more complex layering of visual content may also be utilized in a design file, such as to overlay multiple layers of images and/or text with varying degrees of transparency to create more appealing visual effects.

[0028] Referring back to FIG. 1, the content generation device 102 includes a conversion engine 120, a code generation engine 128, and one or more interfaces 130. In an aspect, software supporting the functionality provided by the conversion engine 120, the code generation engine 128, and the one or more interfaces 130 may also be stored in the memory 106 as the instructions 108. As shown in FIG. 1, the conversion engine 120 may include a plurality of modules, such as a layer management module 122, a content extraction module 124, and an interpreter module 126. These modules may provide functionality to support operations of the conversion engine 120 for converting design files into interactive web content, as described in more detail below.

[0029] The code generation engine 128 may be configured to generate one or more types of code to facilitate presentation of the visual content of a design file as a web page or another form of presentation media (e.g., a mobile application, a widget, a social media platform, and the like). For example, the code generation engine 128 may be configured to generate HTML code, JavaScript code, or other types of code to facilitate presentation of the visual content of the design file in a particular medium (e.g., web page, application, etc.) and to facilitate interactivity with the content (e.g., navigation links, animations, dynamic content displays, etc.). To generate the code, the code generation engine 128 may utilize pre-defined code or functions stored in the one or more databases, such as the code libraries described above, to select code for use in generating interactive content, as will be described in more detail below. As described in more detail below with reference to FIGs. 2 and 4-10, the interfaces 130 provide graphical user interfaces (GUIs) and functionality to support generation of interactive content from design files.
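The HTML-emitting side of the code generation engine described in paragraph [0029] can be sketched as a simple mapping from extracted visual elements to markup. The tag choices and element fields below are assumptions made for illustration only:

```javascript
// Hypothetical sketch of HTML emission: each extracted visual element is
// rendered to a markup fragment according to its type, preserving the
// attributes (font, size, source image) carried over from the design file.
function toHtml(element) {
  switch (element.type) {
    case "text":
      return `<p id="${element.id}" style="font-family:${element.font};font-size:${element.fontSize}px">${element.text}</p>`;
    case "image":
      return `<img id="${element.id}" src="${element.src}" alt="${element.alt ?? ""}">`;
    default:
      return `<div id="${element.id}"></div>`;
  }
}

// Render a small page fragment from two elements of a design file.
const page = [
  { type: "image", id: "hero", src: "hero.png", alt: "Two watches" },
  { type: "text", id: "headline", font: "Helvetica", fontSize: 32, text: "New Arrivals" },
].map(toHtml).join("\n");
console.log(page);
```

In a fuller implementation, the action code produced from the code libraries would be embedded in, or referenced by, the same document so the emitted markup is interactive when published.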

[0030] The layer management module 122, content extraction module 124, and the interpreter module 126 may be configured to extract information from the design files that may be used to identify various visual elements within the design file. For example, the layer management module 122 may be configured to extract information about the number of layers. The content extraction module 124 may extract information about the various visual elements in the design file, such as text content, font information, font size and/or style information, names of images, image size information, or other types of information identifying the different items of visual content within the design file. In an aspect, the information about the various visual elements in the design file may be extracted by analyzing metadata associated with the design file. Exemplary types of metadata that may be used to identify visual elements within a design file are described in more detail with reference to FIGs. 2 and 4-10 below.

[0031] The interpreter module 126 may be configured to interpret content of the design file. For example, certain visual elements of the design file may correspond to icons that may be used to provide interactivity and functionality to a user with respect to the visual elements included in the design file, as described in more detail below with reference to FIGs. 7-8B and 10. The interpreter module 126 may be configured to determine whether a visual element of the design file is intended (i.e., by the designer) to be an interactive element, such as a button icon or a carousel display menu. Once the interactive elements are identified, the interpreter module may determine whether any of the other (non-interactive) visual elements of the design file correspond to the identified interactive elements. For example and as described in more detail below with reference to FIGs. 7-8B and 10, when interactive elements associated with a carousel display menu are detected, the interpreter module 126 may be configured to determine which visual elements of the design file correspond to the visual content to be displayed within the carousel display menu. Additionally, the interpreter module 126 may provide functionality for designating regions within the design file and for interpreting which visual elements of the design are within the designated regions, as will be more apparent from the description of FIGs. 2 and 4-10 below.
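The kind of interpretation described above can be sketched as a simple classification heuristic over extracted elements. The rules, field names, and category labels below are illustrative assumptions; a real interpreter could combine shape recognition, metadata, and user designations:

```javascript
// Hypothetical heuristic for flagging actionable elements: button-like UI
// elements become clickable, and arrow/chevron icons are interpreted as
// carousel controls; everything else is treated as static content.
function classifyElement(el) {
  if (el.type === "ui" && el.uiKind === "button") return "clickable";
  if (el.type === "icon" && /arrow|chevron/i.test(el.name)) return "carousel-control";
  return "static";
}

const elements = [
  { type: "ui", uiKind: "button", name: "cta" },
  { type: "icon", name: "chevron-left" },
  { type: "icon", name: "chevron-right" },
  { type: "text", name: "headline" },
];
console.log(elements.map(classifyElement));
// ["clickable", "carousel-control", "carousel-control", "static"]
```

Once a pair of carousel controls is detected this way, a second pass could search nearby image elements to find the content that belongs inside the carousel.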

[0032] The conversion engine 120 is configured to use the functionality of the layer management module 122, the content extraction module 124, and the interpreter module 126 to facilitate generation of interactive content. Aspects of the interactive content generation functionality provided by the conversion engine 120 and its various modules may be supported by one or more of the GUIs associated with the interface(s) 130. For example and referring to FIG. 2, a block diagram illustrating an exemplary graphical user interface supporting conversion of design files into interactive visual content in accordance with the present disclosure is shown as a GUI 200. As shown in FIG. 2, the GUI 200 may include a design file viewer 210, one or more UI elements 220, a metadata viewer / editor 230, and an interactivity builder 240. The design file viewer 210 may enable a user to view the visual content within a selected design file. For example, the view of the design file 300 shown in FIG. 3 may represent one possible representation of a design file viewer suitable for use in accordance with the present disclosure. It is noted that the presentation of the design file within the design file viewer is not limited to displaying the entire design file within the viewing area and the user may zoom in on portions of the design file, which may cause other portions of the design file to be outside of the viewing area. Additionally, the design file viewer 210, another region of the GUI 200, or a pop-up window, may be configured to present a preview of an interactive experience being designed using the content generation device 102 of FIG. 1.

[0033] The UI elements 220 may include interactive controls (e.g., buttons, menus, search bars, etc.) that enable a user to select and load a design file into the GUI 200. Additionally, the UI elements 220 may include interactive controls for manipulating the design file, such as to zoom in on a region of the design file, crop the design file, associate a portion of the design file (e.g., a cropped portion of the design file) with an experience (e.g., an interactive media experience being generated using the content generation device 102), or other operations. Additionally, the UI elements 220 may also provide other types of design file and experience creation tools, as described in more detail below with reference to FIGs. 6 and 7.

[0034] The metadata viewer / editor 230 provides functionality for verifying, viewing, and manipulating the metadata associated with a design file or portions of the design file (e.g., a portion associated with a particular experience). For example, the metadata viewer / editor 230 may enable the user to view attributes (e.g., the font name, size, and style of a textual visual object, a transparency setting, etc.) of visual elements of the design file, where such attributes may have been extracted by the content extraction module 124 from the metadata of the design file. Additionally, the functionality of the metadata viewer / editor 230 may modify or enable the user to modify one or more of the attribute (or metadata) values. As another example, the metadata viewer / editor 230 may provide functionality for grouping or associating one or more visual elements into a group, such as to associate multiple pieces of visual content in the design file with a single experience element, such as a carousel display or alternative displays (e.g., a set of images that are periodically displayed as replacements for each other) as non-limiting examples. Exemplary aspects of grouping functionality are described in more detail below with reference to FIGs. 6, 8A, 8B, and 10.

[0035] The interactivity builder 240 provides functionality for imparting interactive elements to the experience. For example, the interactivity builder 240 may detect that the design file includes visual elements associated with a carousel display and coordinate with the code generation engine 128 to generate code for facilitating the interactivity of the carousel display. The interactive elements of the experience created by the interactivity builder 240 may also include other types of interactive modifications to the design file, such as animations, transparency changes (e.g., a visual element changes from semi-transparent to opaque when a user hovers a mouse cursor over it, etc.), or other types of alterations that create a more engaging experience for a user that is viewing the interactive content. The interactivity builder 240 may also provide functionality for associating visual elements of the design file, or portions thereof, with URLs or actions to be performed when a user interacts with those visual elements, as described in more detail below.

[0036] Further illustrative aspects of the above-described functionality provided by the conversion engine 120 and the code generation engine 128 will now be described with reference to FIGs. 4-10. Referring to FIG. 4, a block diagram illustrating aspects of generating interactive content from a design file in accordance with the present disclosure is shown. It is noted that information presented in FIG. 4 may be one possible view provided by the GUI 200 of FIG. 2. For example, in FIG. 4, a portion of a design file is shown as visual content 410, which may correspond to the portion of the design file 300 shown at visual content element region 306 in FIG. 3 and may be presented in the design file viewer 210 of the GUI 200 of FIG. 2. As can be seen in FIG. 4, the visual content 410 may include textual content 412, icon or shape data 414, and image content 416 (e.g., an image of two watches in front of a background). In an aspect, the different types of visual content 412, 414, 416 shown in FIG. 4 may be identified by the content extraction module 124 of FIG. 1. Additionally, the visual data associated with the icon or shape data 414 may be distinguished from the textual content 412 and the image content 416 by the interpreter module 126, which may analyze the metadata of the design file to identify these various visual elements.

[0037] As shown on the right-hand side of FIG. 4, information about the various visual content elements may be displayed in GUI elements 420, 422, 430, 432, 440, 442, 450, 452. In the non-limiting example shown in FIG. 4, the GUI element 420 is associated with and identifies image content 416 and GUI element 430 is associated with text content 412. Moreover, the GUI elements 420, 430 may include additional tools providing functionality for converting design files or portions thereof into interactive content and experiences. For example, the GUI element 420 may include layer data 422, which may enable identification of one or more layers associated with a visual element. The layer data 422 may be used to facilitate interactivity or other improved experiences with the associated visual content, such as to hide or show layers of the visual content. Additionally, the layer data 422 may be used to detect the layers associated with a portion of the visual content that is of interest and then to combine those layers to form visual content having a single layer (e.g., to minimize loading times on web pages).
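The layer-combining operation described above can be sketched as follows; the layer shape (a `z` order plus a `content` list) is a hypothetical representation for illustration only, not the actual structure used by the layer data 422:

```javascript
// Hypothetical sketch of combining multiple layers of interest into a single
// layer (e.g., to reduce the number of separately loaded assets on a web page).
// The layer representation here is an assumption for illustration.
function flattenLayers(layers) {
  const merged = [...layers]
    .sort((a, b) => a.z - b.z)          // composite bottom-up by z-order
    .flatMap((layer) => layer.content); // preserve draw order in the result
  return { z: 0, content: merged };
}

const flat = flattenLayers([
  { z: 2, content: ["watch-foreground"] },
  { z: 1, content: ["background"] },
]);
```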

[0038] The GUI element 430 includes a data field 432 presenting the content of the textual content 412. The data field 432 may enable a user to modify the textual content. Clicking the eye icon in the top right of the GUI element 430 may enable the user to preview changes to the textual content 412 made within the data field 432. Additionally, clicking the arrow icon to the left of the eye icon may present additional text modification and control elements that may enable the user to change the font, size, style, or other attributes of the textual content 412 or any modifications made to the textual content 412 via the data field 432. In some aspects, visual elements presented in the design file viewer / editor window may be associated with more than one GUI element. For example, in FIG. 4 the GUI elements 440, 450 are associated with the icon 414, with the GUI element 440 being associated with textual content of the icon 414, shown in data field 442, and the GUI element 450 being associated with the icon shape, identified in GUI element 452. In an aspect, the GUI element 440 and the data field 442 may provide similar functionality and control as the GUI element 430 and the data field 432, enabling the user to edit and modify the text of the icon 414, and the GUI elements 450, 452 may enable the user to modify the shape of the icon 414. It is noted that the exemplary GUI elements and controls shown in FIG. 4 are provided for purposes of illustration, rather than by way of limitation, and that other GUI elements and functionality may be provided by the interface(s) 130 of the content generation device 102 in accordance with the present disclosure, as described in more detail below.

[0039] Referring to FIG. 5, a screenshot of an exemplary web page generated in accordance with aspects of the present disclosure is shown. In an aspect, the web page shown in FIG. 5 may be presented by the interface(s) 130 of FIG. 1 in a preview display area of the GUI 200 or as a pop-up display area. For example, the GUI 200 may include a preview button that, when activated, populates the preview display area with the visual content configured using the GUI 200. For example, the user may implement various configuration changes and modifications to the visual content, as described above with reference to FIG. 4, and may use the preview control to view the visual content as it would be displayed to a user once the visual content is published or pushed to a desired medium such as to a web page of a website.

[0040] Referring to FIG. 6, a block diagram illustrating additional experience and interactivity controls for generating content from a design file in accordance with the present disclosure is shown. In FIG. 6, a GUI element 620 is shown. The GUI element 620 may provide functionality for enhancing the experience of the end user based on the content of the design file. For example, as shown in FIG. 6, the GUI element 620 includes image element 622, image element 624, and layer information element 626. The GUI elements 622, 624 may enable the user to select multiple pieces of image content and associate them with the experience being designed. For example, a user may associate a first image with the GUI element 622 and a second image with the GUI element 624. The layer information element 626 may present information associated with the layers within the design file in which the GUI elements 622, 624 are located, which may facilitate extraction (e.g., by the content extraction module 124) from the design file. In an aspect, the information presented in the layer information element 626 may be provided by the layer management module 122 of FIG. 1. It is noted that in some instances multiple layer information elements 626 may be presented (e.g., if the images are located in different layers or if a single image includes visual content from multiple layers).

[0041] By enabling the user to define multiple images within an experience using the GUI elements of FIG. 6, various types of interactivity and experiences may be defined using the interactivity builder 240 of the GUI 200 of FIG. 2. For example, associating multiple images with an experience may enable an animation to be defined, such as fading the first image out of view and fading the second image into view (e.g., every X seconds in a loop), sliding the second image into view (e.g., from the top, bottom, left, right, etc.) after the first image has been displayed for a threshold period of time, or some other form of animation. Exemplary aspects of such functionality are described further with reference to FIG. 10. As another non-limiting example, functionality of the interactivity builder may enable the first image or the second image to be presented or displayed to the user upon visiting a web page based on the time the user accesses the web page (e.g., taking the modulo of the time of user access and presenting the first image when the result is 0 and the second image when the result is 1, such that the first image is presented when the user accesses the web page at one time and the second image is presented when the user accesses the web page at another time). Other types of animations and interactivity could also be defined.
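The modulo-based image selection described above can be sketched in a few lines. The use of epoch seconds as the access time is an assumption for illustration; any time value reduced modulo the number of images works the same way:

```javascript
// Sketch of time-based image selection: the access time is reduced modulo the
// number of images, and the result indexes which image to present.
function selectImage(images, accessTimeSeconds) {
  return images[accessTimeSeconds % images.length];
}

selectImage(["first.png", "second.png"], 10); // 10 % 2 === 0 -> "first.png"
selectImage(["first.png", "second.png"], 11); // 11 % 2 === 1 -> "second.png"
```

With two images this alternates between them across visits; with N images it rotates through all N.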

[0042] Referring to FIG. 7, a block diagram illustrating additional experience and interactivity controls for generating content from a design file in accordance with the present disclosure is shown. In FIG. 7, GUI elements associated with configuring animations of visual content within a design file are shown and include a data field 710, a linking field 720, predefined animation control elements 722, 724, and a target field 726. The target field 726 identifies the visual element associated with the other controls shown in FIG. 7. The data field 710 enables text content to be defined for display to the end user viewing the experience. Unlike the data field 442, the data field 710 may enable alternative text to be defined, such as to enable the text displayed as the user views or interacts with the designed experience to change, thereby enabling different pieces of text to be displayed to a user within a single experience. The linking field 720 enables a URL to be associated with visual content, such as an icon, image, or text visual element identified by the target field 726. The animation control elements 722, 724 enable pre-defined animations to be associated with the visual element identified by the target field 726. In the specific example shown in FIG. 7, animation control element 722 is associated with a pulse animation that is activated as a mouse-over effect, such that the animation is triggered when the end user moves the mouse cursor over the visual element identified by the target field 726. In the context of a pulse animation, the visual element identified by the target field 726 may become larger or smaller than its original size, creating a pulse-like animation of the target visual element, as shown at 728. The animation control element 724 may enable the pulse animation to be toggled between looped pulsing and a single pulse. It is noted that the exemplary animation and interactivity controls shown in FIG. 7 have been provided for purposes of illustration, rather than by way of limitation, and that a content generation device or system in accordance with the present disclosure may provide other types of animations and interactivity controls.
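One way such a pulse mouse-over effect could be emitted as code is sketched below. The class names, scale factor, and keyframe timing are assumptions for illustration; the actual output of the code generation engine 128 is not specified by the disclosure:

```javascript
// Hypothetical sketch of generating CSS for a hover-triggered pulse animation,
// with the loop/single-pulse toggle mapped to the animation iteration count.
// All names and values here are illustrative assumptions.
function generatePulseCss(targetClass, { loop = true, scale = 1.1 } = {}) {
  const iterations = loop ? "infinite" : "1"; // looped pulsing vs. single pulse
  return [
    `@keyframes pulse { 50% { transform: scale(${scale}); } }`,
    `.${targetClass}:hover { animation: pulse 1s ${iterations}; }`,
  ].join("\n");
}

const css = generatePulseCss("cta-icon", { loop: false });
```

The generated rules would then be included in the published page so that hovering over an element with the target class grows it to the given scale and back, once or in a loop.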

[0043] Referring to FIG. 8A, a block diagram illustrating aspects of converting a design file into an interactive display is shown. In particular, in FIG. 8A the visual content of the visual content element region 314 of the design file 300 of FIG. 3 is shown, which may be converted by the content generation device 102 of FIG. 1 into a carousel display. The content element region 314 is shown in block diagram form at 314’ and includes a plurality of images 812, 814, 816, 818 depicting different styles of watches, graphics or icons 820, 822, 824, and textual elements 830, 832, 834, 836, 838. As briefly described above, the interpreter module 126 may be configured to detect that the graphics or icons 820, 822 are positioned at either end of a row of images (e.g., the images 812-818). In the design file there may be additional images similar to the row of images 812-818, but that are not disposed between the graphics or icons 820, 822. For example, and referring to FIG. 8B, the plurality of images may also include images 842-846. The interpreter module 126 may interpret such an arrangement as a carousel display such that activation of the graphic or icon 820 shifts the image 818 out of view to the right and brings the image 846 into view from the left (e.g., image 818 disappears, image 816 replaces image 818 as the rightmost image, image 814 replaces image 816 as the second from right image, image 812 replaces image 814 as the second from left image, and image 846 replaces image 812 as the leftmost image), and activation of the graphic or icon 822 shifts the image 812 out of view to the left and brings the image 842 into view from the right (e.g., image 812 disappears, image 814 replaces image 812 as the leftmost image, image 816 replaces image 814 as the second from left image, image 818 replaces image 816 as the second from right image, and image 842 replaces image 818 as the rightmost image). Successive activation of the graphics or icons 820, 822 may scroll through the images 812-818 and 842-846 while shifting the images left or right with each activation. It is noted that the textual elements corresponding to each image may similarly be shifted in response to activation of the graphics or icons 820, 822. Upon detecting the arrangement of the graphics or icons 820, 822 and the images for the carousel, the code generation engine 128 may automatically generate appropriate code (e.g., HTML, JavaScript, etc.) using one or more code libraries to facilitate animation of the carousel display during display of the experience on a web page or other medium.

[0044] Referring to FIG. 9, a block diagram illustrating layers of visual content in design files in accordance with the present disclosure is shown. As described above, a design file, such as the design file 300 of FIG. 3, may include pieces of visual content that have been layered on top of each other in a particular order. This is shown in FIG. 9, where an image 912 is layered on top of a background image 910 to form visual content 920. In modifying visual content using the techniques described above, the layer information may be used to detect which piece of visual content to associate with the modification. For example, if the visual content 920 included textual elements, the text data of those textual elements may be included in a particular layer. Thus, when utilizing functionality of the content generation device 102 of FIG. 1, such as the textual element modification techniques described above, the ability to associate various types of visual elements from the design file with layers may ensure that the visual content can be rendered appropriately during publication to an external medium, such as a web page.
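The carousel shifting behavior described above with reference to FIGs. 8A and 8B can be sketched as a fixed-size window sliding over a circular list of images; the function name and the window representation are illustrative assumptions:

```javascript
// Sketch of carousel shifting: a window of visible images over a circular
// list. Activating the "right" control removes the rightmost image and brings
// a new image in from the left; "left" does the reverse.
function shiftCarousel(images, windowStart, windowSize, direction) {
  const n = images.length;
  const start =
    direction === "right"
      ? (windowStart - 1 + n) % n // new image enters from the left
      : (windowStart + 1) % n;    // new image enters from the right
  return Array.from({ length: windowSize }, (_, i) => images[(start + i) % n]);
}

// Seven images (812-818 and 842-846), four visible at a time
const images = [812, 814, 816, 818, 842, 844, 846];
shiftCarousel(images, 0, 4, "right"); // -> [846, 812, 814, 816]
shiftCarousel(images, 0, 4, "left");  // -> [814, 816, 818, 842]
```

This matches the behavior described above: activating one control hides image 818 and brings image 846 in from the left, while the other hides image 812 and brings image 842 in from the right.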

[0045] Referring to FIG. 10, a block diagram illustrating an exemplary web page generated by conversion of a design file in accordance with the present disclosure is shown as a web page 1000. The exemplary web page 1000 has been generated from the design file 300 of FIG. 3 using various ones of the techniques described above. As can be appreciated from the description above, the web page 1000, while generated based on the design file 300 of FIG. 3, is not merely a recycling of the visual content of the design file. For example, the web page 1000 displays, at region 1010, the visual content of only one of the visual content element regions 302-308. It is noted that while the visual content of only one of the visual content element regions 302-308 is shown, the visual content may be rotated or changed, such as displaying the visual content of the visual content element region 302 for a period of time or for a particular visit to the web page 1000, while other visual content of the visual content element regions 304-308 may be displayed during other periods of time or other visits to the web page 1000. As described above with reference to FIGs. 8A and 8B, two carousel displays may be included in the web page with only those images and corresponding textual content visible between the graphics or icons 820, 822, as shown at regions 1020, 1030 of web page 1000. It is noted that other portions of the design file may also be present on the web page 1000, which would be visible if the user scrolled down using the slider control 1002. For example, if the user scrolled the web page down using the slider control 1002, one or both of the visual content element regions 318, 320 of the design file 300 may be displayed on the web page 1000.

[0046] Referring back to FIG. 1, content generated by the content generation device 102 using the various techniques described above may be presented to a user of the user device 140. As shown in FIG. 1, a user device 140 having a display device 142, one or more processors 144, and a memory 146 is shown. The user device 140 may be a personal computing device, a laptop computing device, a tablet computing device, a smartphone (or other device, such as a smartwatch), a personal digital assistant (PDA), a gaming console, a virtual reality (VR) device, an augmented reality (AR) device, or another type of computing device operable to present content to a user. The user device 140 may include an application, such as a web browser, that enables the user to view (at the display device 142) the content generated by the content generation device 102, such as the web page 1000 of FIG. 10. As can be appreciated from the exemplary operations described above, the web page 1000 may provide various interactive and animated features that are automatically generated using the functionality of the conversion engine 120 and the code generation engine 128. Moreover, the interactivity and animations are generated from the design file itself, without requiring the design file to be provided to a web developer that then has to regenerate the visual content of the design file and manually create the code. Notably, unlike that prior manual process, the conversion engine 120 includes sets of rules (e.g., the functionality of the layer management module 122, the content extraction module 124, and the interpreter module 126) that enable the conversion engine 120 to automatically detect elements of the design file that are interactive (e.g., clickable icons, carousel displays, etc.) or that are to be animated or dynamically presented (e.g., the visual content element regions 302-308 of the design file 300 may be rotated for display at the region 1010 of the web page 1000).
Such functionality enables a user that does not have skills in writing code (e.g., HTML, JavaScript, etc.), such as a designer, to generate interactive content and animations for content in a design file without requiring the user to create code or pass the design file to a developer to create code. Such features represent improvements to the functionality of computing devices used to generate interactive and animated content and web development tools by enabling automation of such tasks from a design file directly, rather than requiring the design file be passed to a developer who then subjectively creates content using additional pieces of content generated by the developer based on the design file.

[0047] In some aspects, operations of the conversion engine 120 may also utilize the techniques of commonly owned U.S. Patent Application No. 17/688,864, filed March 7, 2022, and entitled “MULTI-LINK COMPOSITE IMAGE GENERATOR FOR ELECTRONIC MAIL (E-MAIL) MESSAGES”, which claims priority to U.S. Provisional Application No. 63/171,490, filed April 6, 2021, and entitled “MULTI-LINK COMPOSITE IMAGE GENERATOR FOR ELECTRONIC MAIL (E-MAIL) MESSAGES”, the contents of which are incorporated herein by reference in their entirety. For example, the above-referenced applications describe functionality for creating interactive regions or areas within an image by dividing the image into a grid and defining HTML code to present the different pieces of image content in each section of the grid as a single image while associating actions or interactivity with one or more pieces of the image content. For example, the content extraction module 124 and/or the interpreter module 126 may provide functionality for enabling the user to define a target region of interest within the design file, such as the cropping feature described above with reference to FIG. 4 (e.g., where the visual content element region 302 was selected for further processing in the GUI 200 of FIG. 2), and the icon 414 may be designated as an interactive element. Using the techniques of the above-referenced applications, the conversion engine 120 may divide the portion of the design file into a grid such that the icon 414 is in a single grid section and other portions of the visual content element region 302 are disposed in other sections of the grid (e.g., a section above the icon 414, a section below the icon 414, and sections to the right and left of the icon 414).
The code generator engine 128 may then generate code for presenting a composite representation of the visual content element region 302, where the code enables presentation of portions of the visual content element region 302 as individual elements according to the grid arrangement. Also, one or more of the grid sections may be associated with actionable code, such as links (or URLs), animations, or other actions, as described above with reference to FIGs. 4-10.
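The grid division around an interactive element can be sketched as computing a 3x3 partition whose center section is the icon's bounding box; the coordinates, field names, and helper function are hypothetical illustrations, not the actual implementation of the referenced applications:

```javascript
// Hypothetical sketch of dividing a cropped region into a 3x3 grid around an
// interactive icon: the center section holds the icon and the surrounding
// sections hold the remaining image content. Shapes and names are assumptions.
function gridAroundIcon(region, icon) {
  // Column and row boundaries derived from the region and icon bounding boxes
  const xs = [region.x, icon.x, icon.x + icon.w, region.x + region.w];
  const ys = [region.y, icon.y, icon.y + icon.h, region.y + region.h];
  const sections = [];
  for (let row = 0; row < 3; row++) {
    for (let col = 0; col < 3; col++) {
      sections.push({
        x: xs[col],
        y: ys[row],
        w: xs[col + 1] - xs[col],
        h: ys[row + 1] - ys[row],
        interactive: row === 1 && col === 1, // only the icon's own section
      });
    }
  }
  return sections;
}

const sections = gridAroundIcon(
  { x: 0, y: 0, w: 300, h: 300 },    // cropped region of the design file
  { x: 100, y: 100, w: 100, h: 100 } // interactive icon's bounding box
);
```

Each section could then be emitted as a separate image slice in the composite HTML, with the interactive section additionally wrapped in a link or action handler.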

[0048] It is noted that the foregoing techniques for generating interactive elements within an image may improve the speed at which web pages are loaded by enabling the different image portions corresponding to each grid section to be downloaded simultaneously (e.g., as smaller files), which may cause the web page to be rendered faster on the web browser of the user. Furthermore, this may also result in smoother animation sequences, such as for carousel displays, by enabling the images that are not shown to be retrieved faster and/or simultaneously with the images that are displayed.

[0049] It is noted that while the foregoing description has primarily described generating web content, such as a web page, the exemplary techniques for generating interactive and animated content are not limited to web page content generation. Instead, it should be understood that the disclosed techniques could also be applied to other types of visual content presentation formats, such as mobile applications, social media platforms, and the like. Additionally, it should be understood that while FIG. 1 illustrates the designer device 150 and the content generation device 102 as separate devices, the functionality provided by these two devices may be integrated within a single device or may be deployed as part of a design and publish platform in a cloud-based implementation, shown as cloud-based content generation device 162 in FIG. 1. A person of ordinary skill in the art would recognize that the ability to publish or push web-ready and animated or interactive content directly from a design file represents a departure from conventional industry practice in which it is customary to pass or hand-off the design file to a developer, and represents an improvement to the functioning of computing devices, software tools, and platforms used to generate and publish visual content in an interactive and/or animated medium.

[0050] Referring to FIG. 11, a flow diagram of an exemplary method for dynamically transforming design files into interactive web content according to aspects of the present disclosure is shown as a method 1100. In an aspect, steps of the method 1100 may be stored as instructions (e.g., the instructions 108 of FIG. 1) that, when executed by one or more processors (e.g., the one or more processors 104 of FIG. 1), cause the one or more processors to perform the method 1100.

[0051] At step 1110, the method 1100 includes receiving, by one or more processors, a design file comprising a plurality of visual content elements. As described above with reference to FIGs. 1-10, the plurality of visual content elements of the design file may include image content, textual content, or both. It is noted that the image content may include shapes or graphics, as well as other types of image content. At step 1120, the method 1100 includes identifying, by the one or more processors, at least one visual content element of the plurality of visual content elements as an actionable visual content element. For example, the actionable visual content element may be identified using the interpreter module 126 and/or the techniques described above with reference to FIGs. 4-8B.

[0052] At step 1130, the method 1100 includes determining, by the one or more processors, one or more actions associated with the at least one actionable visual content element. As described above with reference to FIGs. 1-10, the one or more actions may correspond to actions to enable interactivity with respect to the at least one actionable visual content element, such as clicking on a button or other icon or animating the visual content element as a mouse-over effect, actions to occur in response to interactivity with the at least one actionable visual content element (e.g., scrolling through a carousel display), other types of actions, or a combination thereof.

[0053] At step 1140, the method 1100 includes generating, by the one or more processors, actionable content from the design file based on the at least one actionable visual content element and the one or more actions. As described above, the actionable content may include code configured to execute the one or more actions, such as code to cause a visual element to pulse once or in a loop as a mouse-over effect, and code for displaying the actionable content via one or more presentation mediums (e.g., HTML or other code). The code configured to execute the one or more actions and the code for displaying the actionable content may be automatically generated based on one or more code libraries, such as the code library described above with reference to FIG. 1.

[0054] At step 1150, the method 1100 includes publishing, by the one or more processors, the actionable content to at least one of the one or more presentation mediums. For example, publishing the actionable content may cause a web page to be generated that may be visited by a user, such as the web page 1000 of FIG. 10. It is noted that the presentation mediums are not limited to web pages and may instead include applications, social media platforms, other types of platforms or mediums where visual content may be displayed, or a combination thereof.
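The steps of the method 1100 can be sketched end-to-end as a minimal pipeline. The element shapes, the "clickable" heuristic, the generated HTML, and the publish callback are all hypothetical assumptions for illustration; the disclosure does not prescribe these specifics:

```javascript
// Minimal sketch of the method 1100: receive a design file, identify
// actionable elements, determine actions, generate content, and publish.
// All shapes and heuristics below are illustrative assumptions.
function runPipeline(designFile, publish) {
  // Step 1120: identify actionable visual content elements (hypothetical
  // heuristic: icons and elements with an associated URL are actionable)
  const actionable = designFile.elements.filter((el) => el.type === "icon" || el.url);

  // Step 1130: determine one or more actions per actionable element
  const withActions = actionable.map((el) => ({
    ...el,
    actions: el.url ? ["navigate"] : ["pulse-on-hover"],
  }));

  // Step 1140: generate actionable content (here, a trivial HTML string)
  const html = withActions
    .map((el) => `<div data-actions="${el.actions.join(",")}">${el.name}</div>`)
    .join("\n");

  // Step 1150: publish the actionable content to a presentation medium
  publish(html);
  return html;
}

let published = null;
const html = runPipeline(
  { elements: [{ type: "icon", name: "cart" }, { type: "text", name: "headline" }] },
  (out) => { published = out; } // stand-in for pushing to a web page
);
```

In this sketch the non-actionable text element is left out of the generated markup, while the icon receives a hover animation action; a real implementation would instead emit code via one or more code libraries, as described above.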

[0055] As shown above, the method 1100 and the system 100 enable static visual content of a design file to be published directly from the design file in a manner that imparts interactivity and animation of the visual content of the design file without requiring programming knowledge or in-depth knowledge of code (e.g., JavaScript, HTML, etc.). Such capabilities represent a new technique for rapidly deploying content of a design file to various presentation mediums and eliminates the drawbacks of prior approaches, which required use of a web developer to create interactive content by rebuilding the design file using additional tools and extensive code knowledge. As explained above, the techniques for generating actionable content for publication to presentation mediums in accordance with the present disclosure may enable content to be presented faster via the one or more presentation mediums, such as by using the grid approach described above. Furthermore, the grid approach may also enable injection of actionable visual elements where no specific visual element (e.g., a button or icon) is present, thereby providing a more robust platform for converting static visual content of design files into interactive and animated content in an intuitive manner.

[0056] Those of skill in the art would understand that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.

[0057] Components, the functional blocks, and the modules described herein with respect to FIGs. 1-11 include processors, electronic devices, hardware devices, electronic components, logical circuits, memories, software codes, firmware codes, among other examples, or any combination thereof. In addition, features discussed herein may be implemented via specialized processor circuitry, via executable instructions, or combinations thereof.

[0058] Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure. Skilled artisans will also readily recognize that the order or combination of components, methods, or interactions that are described herein are merely examples and that the components, methods, or interactions of the various aspects of the present disclosure may be combined or performed in ways other than those illustrated and described herein.

[0059] The various illustrative logics, logical blocks, modules, circuits, and algorithm processes described in connection with the implementations disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. The interchangeability of hardware and software has been described generally, in terms of functionality, and illustrated in the various illustrative components, blocks, modules, circuits and processes described above. Whether such functionality is implemented in hardware or software depends upon the particular application and design constraints imposed on the overall system.

[0060] The hardware and data processing apparatus used to implement the various illustrative logics, logical blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented or performed with a general purpose single- or multi-chip processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, or any conventional processor, controller, microcontroller, or state machine. In some implementations, a processor may also be implemented as a combination of computing devices, such as a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. In some implementations, particular processes and methods may be performed by circuitry that is specific to a given function.

[0061] In one or more aspects, the functions described may be implemented in hardware, digital electronic circuitry, computer software, firmware, including the structures disclosed in this specification and their structural equivalents thereof, or any combination thereof. Implementations of the subject matter described in this specification also may be implemented as one or more computer programs, that is one or more modules of computer program instructions, encoded on a computer storage media for execution by, or to control the operation of, data processing apparatus.

[0062] If implemented in software, the functions may be stored on, or transmitted over, a computer-readable medium as one or more instructions or code. The processes of a method or algorithm disclosed herein may be implemented in a processor-executable software module which may reside on a computer-readable medium. Computer-readable media includes both computer storage media and communication media, including any medium that may be enabled to transfer a computer program from one place to another. A storage medium may be any available medium that may be accessed by a computer. By way of example, and not limitation, such computer-readable media can include random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to store desired program code in the form of instructions or data structures and that may be accessed by a computer. Also, any connection may be properly termed a computer-readable medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, hard disk, solid-state disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. Additionally, the operations of a method or algorithm may reside as one or any combination or set of codes and instructions on a machine-readable medium and computer-readable medium, which may be incorporated into a computer program product.

[0063] Various modifications to the implementations described in this disclosure may be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to some other implementations without departing from the spirit or scope of this disclosure. Thus, the claims are not intended to be limited to the implementations shown herein, but are to be accorded the widest scope consistent with this disclosure, the principles and the novel features disclosed herein.

[0064] Additionally, a person having ordinary skill in the art will readily appreciate that the terms “upper” and “lower” are sometimes used for ease of describing the figures, indicate relative positions corresponding to the orientation of the figure on a properly oriented page, and may not reflect the proper orientation of any device as implemented.

[0065] Certain features that are described in this specification in the context of separate implementations also may be implemented in combination in a single implementation. Conversely, various features that are described in the context of a single implementation also may be implemented in multiple implementations separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination may in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.

[0066] Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Further, the drawings may schematically depict one or more example processes in the form of a flow diagram. However, other operations that are not depicted may be incorporated in the example processes that are schematically illustrated. For example, one or more additional operations may be performed before, after, simultaneously with, or between any of the illustrated operations. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the implementations described above should not be understood as requiring such separation in all implementations, and it should be understood that the described program components and systems may generally be integrated together in a single software product or packaged into multiple software products. Additionally, some other implementations are within the scope of the following claims. In some cases, the actions recited in the claims may be performed in a different order and still achieve desirable results.

[0067] As used herein, including in the claims, various terminology is for the purpose of describing particular implementations only and is not intended to be limiting of implementations. For example, as used herein, an ordinal term (e.g., “first,” “second,” “third,” etc.) used to modify an element, such as a structure, a component, an operation, etc., does not by itself indicate any priority or order of the element with respect to another element, but rather merely distinguishes the element from another element having a same name (but for use of the ordinal term). The term “coupled” is defined as connected, although not necessarily directly, and not necessarily mechanically; two items that are “coupled” may be unitary with each other. The term “or,” when used in a list of two or more items, means that any one of the listed items may be employed by itself, or any combination of two or more of the listed items may be employed. For example, if a composition is described as containing components A, B, or C, the composition may contain A alone; B alone; C alone; A and B in combination; A and C in combination; B and C in combination; or A, B, and C in combination. Also, as used herein, including in the claims, “or” as used in a list of items prefaced by “at least one of” indicates a disjunctive list such that, for example, a list of “at least one of A, B, or C” means A or B or C or AB or AC or BC or ABC (that is, A and B and C) or any of these in any combination thereof. The term “substantially” is defined as largely but not necessarily wholly what is specified (and includes what is specified; e.g., substantially 90 degrees includes 90 degrees and substantially parallel includes parallel), as understood by a person of ordinary skill in the art. In any disclosed aspect, the term “substantially” may be substituted with “within [a percentage] of” what is specified, where the percentage includes 0.1, 1, 5, and 10 percent; and the term “approximately” may be substituted with “within 10 percent of” what is specified. The phrase “and/or” means “and” or “or.”

[0068] Although the aspects of the present disclosure and their advantages have been described in detail, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit of the disclosure as defined by the appended claims. Moreover, the scope of the present application is not intended to be limited to the particular implementations of the process, machine, manufacture, composition of matter, means, methods and processes described in the specification. As one of ordinary skill in the art will readily appreciate from the present disclosure, processes, machines, manufacture, compositions of matter, means, methods, or operations, presently existing or later to be developed that perform substantially the same function or achieve substantially the same result as the corresponding aspects described herein may be utilized according to the present disclosure. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or operations.