Pen interactions
Handheld tools have always been a key component of human communications and productivity. From the earliest stone carvings to the modern pen, handwriting is not only familiar to us, it is second nature. Ink allows us to impart our own unique style to content while having fun with the process. With ink we can simplify tasks that are tedious today – writing signatures, adding symbols, explaining STEM concepts, and much more.
To effectively communicate, people often illustrate ideas or add annotations during the revision process. Ink stands out when layered on top of content, drawing attention and driving action to points of interest. In the modern workplace, users typically move from low-fidelity brainstorming and sketches to high-fidelity, presentable content, with feedback and edits along the way. Digital pens are ideal for this process.
We are no longer bound to a PC for getting work done, with Android devices like the Surface Duo offering mobile productivity. Inking fits naturally with the ergonomic design of the device, which makes it feel like a notepad and gives more room to sketch, annotate, and express thoughts and ideas.
Pen experiences should feel simple and familiar. These behaviors should not depart too far from how people use an analog pen on paper. Writing and scribbling out words on paper feels natural, and users expect a digital pen to behave the same way. This is especially true for erasing: flipping the pen over to the eraser end should erase written text, just as it does with a pencil.
Scenarios like using a pen to select text and perform an action should be similar to selecting text with touch. The pen should behave essentially like an extension of the hand, removing barriers and providing seamless contact with the surface of the device.
Pen experiences should feel powerful in helping with serious tasks during a workflow, often being the fastest and most convenient way to complete tasks. Some experiences are unique to the Surface Duo, like drag-and-drop of text between two apps across two screens. For example, consider a common scenario where a browser is open on one screen and the user wants to share some text or a link with the message app that's open on the other screen. Using a pen, it's easy to select the text, drag it across the screen and drop it onto the other screen, then make any needed changes before sharing.
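On Android, this cross-screen scenario maps onto the standard drag-and-drop framework. The sketch below is a minimal illustration under that assumption; the function and view names are placeholders, not part of any Surface Duo API.

```kotlin
import android.content.ClipData
import android.view.DragEvent
import android.view.View

// Minimal sketch: start a global (cross-app, cross-screen) drag from the view
// that holds the selected text. "sourceView" is a placeholder name.
fun startTextDrag(sourceView: View, selectedText: CharSequence) {
    val clip = ClipData.newPlainText("selection", selectedText)
    sourceView.startDragAndDrop(
        clip,
        View.DragShadowBuilder(sourceView), // default drag shadow under the pen tip
        null,                               // no local state; the drop may land in another app
        View.DRAG_FLAG_GLOBAL               // allow the drag to cross app (and screen) boundaries
    )
}

// The receiving app reads the dropped text from the ClipData.
// (Drops that carry content URIs also require requestDragAndDropPermissions().)
fun acceptTextDrops(targetView: View, onText: (CharSequence) -> Unit) {
    targetView.setOnDragListener { _, event ->
        if (event.action == DragEvent.ACTION_DROP) {
            onText(event.clipData.getItemAt(0).text)
        }
        true // accept all drag events in this sketch
    }
}
```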
A pen can be used to select, edit, or move all or part of inked content so the user doesn't have to erase and rewrite it. Pressing the button on the barrel of the pen activates the lasso tool, which selects all or part of the inked content; the selection can then be deleted, duplicated, or moved across the canvas.
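The barrel button is exposed to Android apps through the stylus button state on each MotionEvent. A hedged sketch of routing input between lasso selection and normal inking; addLassoPoint and addInkPoint are hypothetical helpers in the hosting app:

```kotlin
import android.view.MotionEvent
import android.view.View

// Illustrative sketch: send pen points to lasso selection while the barrel
// button is held, and to normal inking otherwise.
fun attachPenRouter(
    canvas: View,
    addLassoPoint: (Float, Float) -> Unit,
    addInkPoint: (Float, Float) -> Unit
) {
    canvas.setOnTouchListener { _, event ->
        val isStylus = event.getToolType(0) == MotionEvent.TOOL_TYPE_STYLUS
        if (!isStylus) return@setOnTouchListener false // let touch input be handled elsewhere

        val barrelPressed = event.isButtonPressed(MotionEvent.BUTTON_STYLUS_PRIMARY)
        when (event.actionMasked) {
            MotionEvent.ACTION_DOWN, MotionEvent.ACTION_MOVE ->
                if (barrelPressed) addLassoPoint(event.x, event.y)
                else addInkPoint(event.x, event.y)
        }
        true
    }
}
```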
A pen can also be powerful when it is in close proximity to the screen, but not touching. Hovering over certain components can expose more UI or actions for a user to take without having to tap through multiple levels that are usually hidden beneath the surface.
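Pen proximity reaches Android apps as hover events. A minimal sketch, assuming hypothetical showHoverUi and hideHoverUi callbacks in the app:

```kotlin
import android.view.MotionEvent
import android.view.View

// Minimal sketch: reveal extra UI while the pen hovers over a view, and hide it
// when the pen leaves range. showHoverUi/hideHoverUi are hypothetical callbacks.
fun attachHoverPreview(
    target: View,
    showHoverUi: (Float, Float) -> Unit,
    hideHoverUi: () -> Unit
) {
    target.setOnHoverListener { _, event ->
        when (event.actionMasked) {
            MotionEvent.ACTION_HOVER_ENTER,
            MotionEvent.ACTION_HOVER_MOVE -> showHoverUi(event.x, event.y)
            MotionEvent.ACTION_HOVER_EXIT -> hideHoverUi()
        }
        true
    }
}
```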
A pen and inking experience can leverage AI and machine learning to make the pen smarter, producing good results faster and with less effort. Features like automatically converting hand-drawn shapes into clean geometry make pen input more powerful.
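The paragraph above alludes to on-device recognition. As a stand-in, the sketch below shows only a crude geometric heuristic, not a machine-learning model: it decides whether a finished stroke is closed enough to be replaced by a clean ellipse or rectangle. The tolerance value is an arbitrary example.

```kotlin
import android.graphics.PointF
import android.graphics.RectF
import kotlin.math.hypot

// Crude heuristic sketch, not an ML recognizer: if a finished stroke ends close
// to where it began, treat it as a closed shape and return its bounding box,
// which an app could use to draw a clean ellipse or rectangle in its place.
fun detectClosedShape(stroke: List<PointF>, closeTolerance: Float = 48f): RectF? {
    if (stroke.size < 8) return null // too short to be a deliberate shape
    val first = stroke.first()
    val last = stroke.last()
    if (hypot(last.x - first.x, last.y - first.y) > closeTolerance) return null

    val bounds = RectF(first.x, first.y, first.x, first.y)
    for (p in stroke) bounds.union(p.x, p.y)
    return bounds
}
```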
A pen gives users freedom to express their creativity, thoughts, or personality through ink in a way that keyboard, mouse, and touch cannot. Sketching ideas with a pen can help people think in non-linear ways. Drawings can add a layer of meaning and feeling to content that is hard to express simply with words.
A pen should let users express themselves, but also provide assistance in achieving the desired end result for their scenario. For example, different behavior is expected from a pen in a drawing application than when it is used to enter text into a search box. Or, in a video chat app, the user might switch the pen between multiple uses – the pen could act as a pointing device to direct attention to specific content, and at the same time serve as an annotation tool for taking notes.
The physical pen device that the user interacts with may be one of a variety of industrial designs. However, all compatible Surface pen devices have a pressure sensitive tip and an erase affordance. The erase affordance can be implemented as a physical button on the pen, or as a tail-end eraser (similar to a traditional pencil). The barrel button lets users perform lasso selection (the ability to select an inked element and manipulate it).
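In code, the tip and the erase affordance arrive as different tool types on each MotionEvent, and tip pressure is available per pointer. A small sketch; the width range is an arbitrary example, not a Surface pen specification:

```kotlin
import android.view.MotionEvent

// Sketch: the erase affordance (tail eraser or erase button, depending on the pen)
// is typically reported as TOOL_TYPE_ERASER; the pressure-sensitive tip is
// TOOL_TYPE_STYLUS with a pressure value per pointer.
fun isEraseAffordance(event: MotionEvent, pointerIndex: Int = 0): Boolean =
    event.getToolType(pointerIndex) == MotionEvent.TOOL_TYPE_ERASER

// Map tip pressure (0..1) to a stroke width; the 4-24 pixel range here is an
// arbitrary example chosen for illustration.
fun strokeWidthFor(event: MotionEvent, pointerIndex: Int = 0): Float =
    4f + 20f * event.getPressure(pointerIndex).coerceIn(0f, 1f)
```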
It is important to understand the various physical positions (or orientations) in which the pen might be, and the scenarios associated with these positions. You should also understand how the transitions from position to position should be reported for the Surface Duo, and how that relates to the range of postures and orientations the Surface Duo can have as it opens and closes.
- Out of Range - This is the simplest scenario for pen, and it occurs when the user is holding the pen out of the detection range of the digitizer.
- In Range - This is a common scenario for pen, and it occurs when the user is holding the pen within the detection range of the digitizer, but not in contact.
- In Contact - This is the most common scenario for pen, and it occurs when the user is pressing the pen against the screen surface.
- Out of Range (Intent to Erase) - In this state, the user has activated the erase capability of the pen, either by inverting it or by pressing (and holding) the erase button while the pen is out of the detection range of the digitizer. The erase capability of the pen is also referred to as the erase affordance.
- In Range (Intent to Erase) - This is a common scenario for pen, and it occurs when the user is holding the pen within the detection range of the digitizer with the erase affordance activated, either by inverting the pen or by pressing (and holding) the erase button.
- Erasing - This is a common scenario for pen, and it occurs when the user is pressing the pen against the screen surface, with the erase affordance activated either by inverting the pen or by pressing (and holding) the erase button.
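For reference, here is a hedged sketch of how an Android app might map incoming MotionEvents to the positions above. The out-of-range states cannot be observed directly, since no pen events arrive while the pen is out of range, so they are inferred by the caller (for example, after a hover exit plus a timeout).

```kotlin
import android.view.MotionEvent

// The in-range and in-contact positions above, as an app-side enum.
// OUT_OF_RANGE and OUT_OF_RANGE_INTENT_TO_ERASE are the absence of events and
// are handled outside this classifier.
enum class PenState { IN_RANGE, IN_CONTACT, IN_RANGE_INTENT_TO_ERASE, ERASING }

fun classifyPenEvent(event: MotionEvent): PenState? {
    val toolType = event.getToolType(0)
    val intentToErase = toolType == MotionEvent.TOOL_TYPE_ERASER
    if (toolType != MotionEvent.TOOL_TYPE_STYLUS && !intentToErase) return null // not a pen

    val inContact = when (event.actionMasked) {
        MotionEvent.ACTION_DOWN, MotionEvent.ACTION_MOVE -> true
        else -> false
    }
    val hovering = when (event.actionMasked) {
        MotionEvent.ACTION_HOVER_ENTER, MotionEvent.ACTION_HOVER_MOVE -> true
        else -> false
    }
    return when {
        inContact && intentToErase -> PenState.ERASING
        inContact -> PenState.IN_CONTACT
        hovering && intentToErase -> PenState.IN_RANGE_INTENT_TO_ERASE
        hovering -> PenState.IN_RANGE
        else -> null // ACTION_UP / ACTION_HOVER_EXIT: the pen is leaving contact or range
    }
}
```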
Inking on dual screens allows more room for users to perform gestures on the content that they interact with. While users are creating or editing text content on the Surface Duo, the following seven gestures should be supported.
Delete Text
Scribble or cross out words, lines, and paragraphs to delete them.
Insert Text
Draw an inverted V, representing the caret symbol (^).
New Line
Draw an 'L', representing a carriage return. It can be placed between two words or at the end of a line.
Join
Draw a 'U' to remove the whitespace between two words.
Split
Draw a line to add a space between characters.
Select
Circle to select text and other content (shapes, pictures, etc.)
Highlight
Use the ink highlighter on text to convert the stroke into a text highlight. Repeat the gesture to remove the highlight.
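A hedged sketch of how an app might model these seven gestures once its recognizer has identified them. The recognizer itself is out of scope, and the apply function below is an illustration rather than an official API; Select and Highlight are selection and span operations, so they are only represented as data here.

```kotlin
// Illustrative data model for the seven text-editing gestures above.
sealed interface TextInkGesture {
    data class DeleteText(val range: IntRange) : TextInkGesture
    data class InsertText(val offset: Int, val text: String) : TextInkGesture
    data class NewLine(val offset: Int) : TextInkGesture
    data class Join(val offset: Int) : TextInkGesture   // remove whitespace starting at offset
    data class Split(val offset: Int) : TextInkGesture  // add a space at offset
    data class Select(val range: IntRange) : TextInkGesture
    data class Highlight(val range: IntRange) : TextInkGesture
}

// Apply the purely textual gestures to a StringBuilder; Select and Highlight
// are left to the UI layer.
fun apply(gesture: TextInkGesture, text: StringBuilder) {
    when (gesture) {
        is TextInkGesture.DeleteText -> text.delete(gesture.range.first, gesture.range.last + 1)
        is TextInkGesture.InsertText -> text.insert(gesture.offset, gesture.text)
        is TextInkGesture.NewLine -> text.insert(gesture.offset, "\n")
        is TextInkGesture.Join -> {
            var end = gesture.offset
            while (end < text.length && text[end].isWhitespace()) end++
            text.delete(gesture.offset, end)
        }
        is TextInkGesture.Split -> text.insert(gesture.offset, " ")
        is TextInkGesture.Select, is TextInkGesture.Highlight -> Unit // handled by the UI layer
    }
}
```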
Inking on dual screens allows more room for users to perform gestures on the content that they interact with. While users are creating or editing content in a table on the Surface Duo, the following eight gestures should be supported.
Insert Row
Draw a horizontal line within an existing row, or draw a new row at the edge of the table.
Delete Row
Draw a delete gesture (squiggly line) across the row. Existing text is deleted first, followed by the row cells.
Merge Cells
Draw the delete gesture (squiggly line) on a border between two cells to merge the cells.
Cell Shading
Shade cells with the ink highlighter; the shading transforms into the fill color of the underlying cells.
Insert Column
Draw a vertical line within an existing column, or draw a new column at the edge of the table.
Delete Column
Draw a delete gesture across the column. Existing text is deleted first, followed by the entire column.
Split Cell
Draw a line within a cell to split it into two cells. Text remains in the leftmost cell.
Delete Table
Draw a delete gesture (squiggly line) across the table. Existing text is deleted first, followed by the table.
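Several of the table gestures above delete in two passes: text first, then structure. A hedged sketch of that behavior for Delete Row, using a hypothetical minimal table model rather than any real table API:

```kotlin
// Hypothetical minimal table model, used only to illustrate the two-pass delete
// described above: the first delete gesture clears the text in the row, and a
// second gesture on the now-empty row removes the row itself.
data class Table(val rows: MutableList<MutableList<String>>)

fun applyDeleteRowGesture(table: Table, rowIndex: Int) {
    val row = table.rows.getOrNull(rowIndex) ?: return
    if (row.any { it.isNotEmpty() }) {
        // First pass: existing text is deleted first.
        for (i in row.indices) row[i] = ""
    } else {
        // Second pass: the row cells are removed.
        table.rows.removeAt(rowIndex)
    }
}
```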