Spatial Transform Engine Wizard
How to use the Spatial Transform Engine Wizard (STEW)
Introduction
This user guide is for the companion configuration manager. It's designed to create the timing and spatial transforms for companion chips:
AP0102, AP0200, AP0201, AP0202
With sensors
AR0132, AR0136, AR0138, AR0140, AR0143, AR0147, AR0231, AR0233.
And the stacked chips
AS0142AT, AS0142ATE, AS0143AT, AS0143AT1, AS0147AT, AS0148AT
Note that not all combinations are available; check for ini files. This application is intended to replace STE for configuring the AP0102 & AP0200 and Register Wizard for configuring the AP0201 & AP0202. It is also intended to support the AP0300 in the future.
Getting started
The companion configuration tool can be run as a Devware plugin for live updates of the companion processors or it can be launched as a standalone application. The same UI is used in either case.
The plugin can write settings directly to hardware.
The standalone application can be used to create and explore available configurations when the hardware is not available. It is available here: \aptina imaging\bin\stewapp.exe
From a Command Line, you can start up the application and include the configuration file to be used.
For example: stewapp.exe -u -i <configuration.cfg>
The CLI (Command Line Interface) User Guide can be found here.
Prerequisites
You will need one of the above sensors, an AP 200 series processor and a Demo3 board. Use an AP0102 or AP0202 and communicate over USB if it's available. Downloading settings over Ethernet often cuts further communications and requires the hardware to be reset. Run Devware and check that your configuration can be initialized and produces streaming video.
Launching the plugin
With the above hardware attached, select 'STE Wizard' or 'Register Wizard' on the 'Plug-ins' menu. A user interface similar to this should appear, with the correct companion processor and sensor identified. It may or may not have all the tabs along the top (more on this later).
Writing to hardware
The application should be ready to write a solution to the hardware immediately, without changing any settings. How to do this depends on the type of AP processor you have and whether it has an Ste.
Writing to hardware (No Ste: AP0201, AP0202)
In this case the UI should look like the image below (no tabs; these are all for configuring the Ste) with a write button on the front page. When writing a solution it can sometimes be hard to tell whether it worked. Adjust the frame rate a bit, write the solution again and look for changes in the frame rate measured by Devware.
Writing to hardware (With Ste: AP0102, AP0200)
Writing the settings for a processor with an Ste must be done from the transforms tab. Select the transforms tab and then press the 'write' button on the transform. This will create a default de-warp, then download and run it.
Parameters
At this point you can change any parameters you wish and continue to test solutions. See the other sections of this document, which describe the parameters in detail. At any point the file menu can be used to save parameters into a configuration file, which can be loaded back into the plugin and restores all the UI settings. The configuration file does not contain the registers and blob applied to the hardware (like STE used to). The configuration file can be used to initialize either the plugin or the standalone application, so it is possible to develop parameters with no hardware and then try them later when hardware is available.
Viewing and overriding results
To see the actual settings go to the view menu and launch the register view. This shows all the current register, Ste blob and bit field settings that are downloaded to the hardware.
All of these values can be overridden. To override a value, type a number into the Override column in the table. Once a value has been given for an override it will be used for all transforms and any writes to hardware, the generation of ini files and the generation of xml for flashtool. It can be cleared by right-clicking in the cell and selecting 'Clear'.
Values which are not written in the Ste blob are registers and can be disabled. A disabled register will not be written to hardware, ini files or xml for flashtool. To disable a register, right click and select 'Disable'. This can also be cleared by right-clicking in the cell and selecting 'Clear'.
Exporting results
Once the parameters are as you'd like them (see the rest of this document on the available settings) the application is ready to create results for you. There are two paths available to do this, both via Export on the File menu.
The first export is an ini file. This is designed to be cut and pasted into the existing ini file to specialize it for the given settings. It can also be applied using Devware: 'Open Additional Presets' on the File menu. This file can be created for the AP0102, AP0200, AP0201 & AP0202. If multiple transforms are defined, the application prompts the user to select one for export. This is designed to replicate functionality found both in Register Wizard and STE.
The second export is an xml export designed to be used with flashtool. It contains just the blob and register settings. This file can be created for the AP0102 and AP0200 only. If multiple transforms are defined, the application prompts the user to select one for export.
Configuration
This section describes the first tab in the interface. With the AP0201 and AP0202 this is the only tab. It allows the user to describe the hardware, give the active image sizes, desired pixel formats and the like, and it will calculate a timing solution if one can be found.
Hardware
If the application is launched as a plugin inside Devware the sensor and companion part names and revision numbers are normally found automatically. Changing them from these settings will cause the hardware to be off-line. Resetting them will allow the hardware to be written to again.
Specifying the part names and revision numbers is done on three lines. If the system is a stacked part or SOC and contains both a companion and a sensor (e.g. AS0142AT) then it is selected on the first line inside the system box. This will automatically fill in the sensor & companion processor and may fix other parts of the UI too. Currently AS0142 and AS0147 are supported.
If the sensor and companion are independent then they are selected in the Sensor and Companion boxes. These will reset the system part to <Selected Independently> when modified. Please note that although it is possible to investigate the performance of any combination of parts, not all pairs have an ini file and are fully supported.
The input clock frequencies are specified on these lines too. In the case above the sensor clock is being driven from the companion at 27MHz. If the sensor has its own clock, an independent clock can be selected and then the frequency set. Note that even if the companion and sensor have clocks of the same frequency, if they are independent it is important to specify this; different timing solutions are used. See the timing section for details.
Active frame sizes
There are either two or three active frame sizes: sensor to companion, color pipe to the Ste unit (for companions with an Ste unit), and the output of the companion chip. Getting the best settings is straightforward if there is no STE. The client is likely to specify the output resolution. Ideally you want to maximize the sensor resolution for a given frame rate.
If you have a companion with an STE then by default the color pipe downscaling will be disabled. There are two good reasons to downscale.
- If you have a transform that uses too much pixel buffer memory and has no causal solution.
- If you have a transform that does a lot of downscaling, a better quality result can be obtained by downscaling in the color pipe prior to the resampling done in the Ste unit.
The active sensor output can also be flipped or mirrored using the check boxes.
Generally, the active sensor frame size is slightly smaller than the available frame size. By default the pixels used are taken from the center of the physical sensor. The location button can be used to adjust this default. Positive values for the horizontal offset will move the active location to the right. Positive values for the vertical offset will move the active area down. If you are using lens correction in the Ste transform, the lens will center on the physical center of the full sensor area, in effect ignoring this setting. It is possible to adjust the lens center separately. See the lens correction section.
Sensor to companion protocol
This controls the way the data is transferred from the sensor to the companion chip. Certain sensors may have this option grayed out; this is because they only support one option. The intent here is to add MIPI when the AP0300 is added to the UI.
Exposure Control
When controlling a sensor through a companion chip the exposure modes are currently limited to three. These are selectable via the pull down. The different modes often affect the minimum VBlank and HBlank and can change the timing. Linear mode has a single exposure. HDR modes have three exposures combined using different algorithms.
Output Protocol (Parallel: AP0102, AP0202)
The type of output can be specified using the Pixel format and Protocol. The pixel format can be specified as YCbCr 16bit for example and this pixel can be output as one 16bit word in 1 clock / pixel mode or 2 bytes in 2 clocks / pixel mode. Some of these output formats do not function correctly with the demo3 card as only 16 bits are read, not 24.
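As a rough illustration of the clocks-per-pixel trade-off, the sketch below uses an assumed output pixel rate and a 16-bit pixel; the numbers are illustrative, not taken from a specific timing solution:

```python
# Assumed numbers, for illustration only: a 16-bit YCbCr pixel at a 29.7 MHz pixel rate.
pixel_rate_mhz = 29.7
bits_per_pixel = 16

for clocks_per_pixel in (1, 2):
    bus_width = bits_per_pixel // clocks_per_pixel   # one 16-bit word, or 2 bytes on an 8-bit bus
    bus_clock = pixel_rate_mhz * clocks_per_pixel    # the output clock roughly doubles at 2 clocks/pixel
    print(f"{clocks_per_pixel} clock(s)/pixel: {bus_width}-bit bus at about {bus_clock} MHz")
```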
Output Protocol Option (Keep-sync)
There are options on the parallel mode to enable the keep-sync module. Without keep sync the vblank and hblank of the output frame are slightly variable. Keep-sync forces an exact heartbeat on the output video and allows both the hblank and vblank to be specified.
Often when keep-sync is first enabled no timing solution will be available for the settings. A good approach is to turn off keep-sync and note down the hblank and vblank generated without keep-sync and use these as starting values. It should be noted that the whole system is much fussier with keep-sync enabled and some sensors work better than others. Contact us if you get problems.
The active video can be positioned in the video frame by setting a top vBlank and left hBlank. Only the sum of the left and right hblank and the sum of the top and bottom vblank affect the timing. One can get a timing solution by setting one of the pairs and then adjust the distribution once a satisfactory timing solution is attained.
Output Protocol (Net: AP0200, AP0201)
The net output can be H264 or Jpeg written over different net bandwidths at different utilization rates. The utilization rate indicates the maximum net bandwidth used; 100% would exclude all other traffic. Note that H264 processing bandwidth is limited; high resolutions and high frame rates often fail as the unit can't keep up.
When using protocols which are not MII(25MHz) the host often loses communications with the companion chip after a solution is written. Be prepared to reset the hardware. The old STE didn't suffer from this problem because it actually didn't write the settings.
Constraints
If the default timing is not the way you like it you can use constraints to modify it.
The default timing generally tries to minimize HBlanks, VBlanks and frequencies. Setting these values allows you to modify the default result. It's easy to forget you have these set; if you aren't getting timing solutions when you think you should, make sure these are cleared.
Basic Timing
There are many ways to configure the 200 series timing. The following diagram shows the major timing paths. Some paths are fixed from the hardware configuration and some can be chosen at runtime to optimize the chip timing and power usage.
[Timing paths diagram. Labels: External Clock, Sensor PLL, PLL0 (P1, P2), PLL1, Sensor Core, Color Pipe, Ste, Txss, Keep-sync, Jpeg/H264.]
The sensor can be driven in three ways: from the companion external clock, from the companion PLL0:P2 output, or from its own clock. The output can be configured in five ways: Txss from PLL1, Txss from PLL0:P2, Keep-sync from PLL1, Keep-sync from PLL0:P2, Jpeg/H264 from PLL1. This gives fifteen configurations. Two of these aren't really viable as PLL0:P2 is shared between the sensor and the output. That leaves thirteen configurations. The Companion Manager supports six of these, as shown in the following table.
The reasons for picking these modes are complex. Driving Txss or Keep-sync from PLL1 allows finer tuning of the output frequency but uses more power. Development resources are limited, so more modes could be added over time.
Mode | Sensor clock source | Output path | Support
---|---|---|---
1 | Companion Clock | Txss from PLL0:P2 | Supported: AP0102, AP0202
2 | Companion Clock | Txss from PLL1 | Not supported
3 | Companion Clock | Keep-sync from PLL0:P2 | Supported: AP0102, AP0202
4 | Companion Clock | Keep-sync from PLL1 | Not supported
5 | Companion Clock | Jpeg/H264 from PLL1 | Supported: AP0200, AP0201
6 | Own Clock | Txss from PLL0:P2 | Supported: AP0102, AP0202
7 | Own Clock | Txss from PLL1 | Not supported
8 | Own Clock | Keep-sync from PLL0:P2 | Supported: AP0102, AP0202
9 | Own Clock | Keep-sync from PLL1 | Not supported
10 | Own Clock | Jpeg/H264 from PLL1 | Supported: AP0200, AP0201
11 | PLL0:P2 | Txss from PLL1 | Not supported
12 | PLL0:P2 | Keep-sync from PLL1 | Not supported
13 | PLL0:P2 | Jpeg/H264 from PLL1 | Not supported

Table 1
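As a quick sanity check of the counting above, the sketch below (Python, with purely illustrative labels) enumerates the fifteen sensor/output combinations and removes the two that would need PLL0:P2 on both sides:

```python
from itertools import product

sensor_sources = ["Companion Clock", "Own Clock", "PLL0:P2"]
output_paths = [("Txss", "PLL0:P2"), ("Txss", "PLL1"),
                ("Keep-sync", "PLL0:P2"), ("Keep-sync", "PLL1"),
                ("Jpeg/H264", "PLL1")]

all_modes = list(product(sensor_sources, output_paths))      # 15 combinations
# PLL0:P2 is shared, so it cannot drive the sensor and the output at the same time.
viable = [(s, (path, pll)) for s, (path, pll) in all_modes
          if not (s == "PLL0:P2" and pll == "PLL0:P2")]      # 13 remain
print(len(all_modes), len(viable))                            # 15 13
```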
The following is an example of the timing display for mode 1 from Table 1.
This is the simplest case. This is the key:
- The Sensor has an active pixel area of 1280x960, a Vblank of 30, an HBlank of 370 and a total image size of 1650x990.
- The Sensor has an external clock of 27MHz, M=49, N=3 which gives a VCO frequency of 441MHz. The pixel clock is running at 49MHz. P1 is 1, P2 is 9.
- The Companion is outputting an image of 1280x960. (You can't tell from this, but keep-sync is not running so the VBlank and HBlank values are estimates).
- The Companion has an external clock of 27MHz, M=98, N=3 which gives a VCO frequency of 882MHz. The pixel clock is running at 49MHz. P1 is 18.
Note: This display never shows the register settings of P1, P2, N, M etc.; it shows their effective value. The register setting for N would normally be N-1. The actual values set in the registers can be seen in the register view window under the view menu.
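The relationship between these effective values can be checked with a small sketch. The formulas below (VCO = external clock * M / N, pixel clock = VCO / (P1 * P2)) are inferred from the numbers quoted in this example, so treat them as illustrative rather than a datasheet definition:

```python
def effective_frequencies(ext_clk_mhz, m, n, p1, p2=1):
    """Effective VCO and pixel frequencies from the effective (not register) values."""
    vco = ext_clk_mhz * m / n
    pixel = vco / (p1 * p2)
    return vco, pixel

print(effective_frequencies(27.0, m=49, n=3, p1=1, p2=9))   # sensor:    (441.0, 49.0)
print(effective_frequencies(27.0, m=98, n=3, p1=18))        # companion: (882.0, 49.0)
```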
Parallel Timing
This is a typical output from a parallel timing (Mode 1 or 3 from Table 1).
There is one extra line for the sensor: when HiSpi is enabled the sensor will show the HiSpi clock.
There are two lines for the companion. The input pixel frequency is 37.125MHz from the sensor. The output frequency is 29.7MHz. This frequency reduction in the companion has been achieved by reducing the HBlank of the output. Switching to 2 clocks / pixel output will often just double the output frequency.
Net Timing
This is a typical output from a net timing (Mode 5 from Table 1).
The companion vblank and hblank have little meaning in this context. The new lines are the Ethernet settings. PLL1 is used to drive the net output. Note that 'M' = 24.0741. This is the effective 'M' value built from both the fractional register value and the integer value of M. It is used when a PLL setting needs to be fine-tuned.
Timing, Sensor has own clock
This is a typical output from timing a parallel system where the sensor has its own independent clock (Mode 6 from Table 1).
The output pixel rate of the sensor is 37.125 MHz. The input pixel clock for the Companion is 37.1218MHz. Fractional PLL tuning on the companion processor has been used so that the pixel clock rate is very slightly slower. It has to be this way. The two clocks will drift a little relative to one another and the companion processor can slow down the sensor data a little, but it can't speed it up. The rate of slowdown is about 1/12 of an active line per frame.
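The size of that slip can be estimated with the arithmetic below. The frame geometry here is assumed for illustration only (it is not read from this example's timing display), so the sketch only shows the method and the order of magnitude; the guide's own figure of roughly 1/12 of an active line comes from the actual geometry of its example.

```python
f_sensor = 37.125e6        # sensor output pixel rate (Hz), from the example above
f_companion = 37.1218e6    # companion input pixel clock after fractional tuning (Hz)
line_total = 1650          # assumed total line length (active + HBlank), pixels
frame_lines = 990          # assumed total frame height (active + VBlank), lines

pixels_per_frame = line_total * frame_lines
slip_per_frame = pixels_per_frame * (f_sensor - f_companion) / f_companion
print(slip_per_frame, slip_per_frame / line_total)   # about 141 pixels, i.e. under a tenth of a line
```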
Spatial Transform Engine Control
This section discusses configuring the Spatial Transform Engine for the AP0102 and AP0200. It is not intended to educate on what the Spatial Transform Engine can do; this is covered in other papers.
Transforms
By default the transforms tab should look like this. The STEW is capable of creating and applying multiple different transforms. Pressing the add button creates a new transform. New transforms can also be created by selecting an existing transform (left click on it, away from the buttons) and duplicating it. Transforms that are no longer needed can be selected and deleted. Transforms can be named by typing into the box on the left. It is easy to lose track, so naming is recommended.
The transforms have four buttons. They can be accessed in any order, but generally a transform is built using 'Define' to create the geometry of the transform, 'Layout' to adjust the transform resolution, 'Preview' to check the result and 'Write' to send it to the hardware. Note that a green background means the transform is good. A red background indicates a problem, normally that it will cause an overflow of the pixel buffer.
Transform Preview (Simulated output)
To see the effects of the 'Define' it's a good idea to have the preview launched and running. Pressing the Preview should create a window like this:
Use the load button to load any image. The transform will be applied and you should see a result similar to this:
The image on the left will be scaled to simulate the output of the color pipe. The image on the right is the simulated result of the transform. The Orange line represents the region accessed by the transform and the blue cells show the piecewise linear tiles that form the transform. The outline and grid buttons can be used to switch the overlays off. This is a live display, changes applied by the 'Define' dialog will be shown immediately. (Note the Simulated Error tab is described later).
Transform Definition
The define dialog is launched using the 'Define' button. It is possible to launch multiple define dialogs for different transforms and interact with the rest of the application while they are present. By default, the define dialog will be a simple De-warp work-flow. Different work flows can be selected at the top of the page. These are likely to be added to over time. Some work-flows allow multiple panels and some do not. This check box will gray out if it's not available.
Changes made to the UI are not applied immediately. When a change is made the 'Apply' button will become active and can be used to apply the changes to the model.
Transform Definition (De-warp)
A de-warp work-flow undoes the effects of a lens. The lens model is described later in the document and is shared across all de-warp transforms.
This is the effect of zoom: Top to bottom Zoom = 0.5, 1.0 & 2.0.
This is the effect of the aspect ratio: Top to bottom Aspect = 0.5, 1.0 & 2.0.
The rotation of axis control is straightforward: the image will rotate anti-clockwise around its center for positive numbers. The tilt and pan are harder to show. The image would normally be generated with the axis of the lens at the center of the image. Tilt and pan allow the image to look through the sides, top or bottom of the lens. A positive pan value looks to the right. A positive tilt value looks up.
Projection surfaces allow the projection to go from flat to spherical. A normal de-warp projects the lens distortion onto a flat surface. This can turn a 'fish eye' image into something more pleasing to the human eye, but it can suffer from very low resolution near the edges and a loss of field of view. Using slightly curved projection surfaces can create images with better fields of view that still look good to the human eye. The custom option allows slightly curved projection surfaces.
This is the effect of the horizontal curvature: Top to bottom = 0%, 40% & 80%.
Transform Definition (Multi-panel Triptych)
To create multiple panels for a de-warp, check the Multi-panel (Triptych) check box and bring the additional tab to the front. You should get this dialog.
This again affects the projection plane of the de-warp. The projection plane is folded along two crease lines. This UI allows the user to position the crease lines, decide on the width of the line drawn at the crease lines and set the angle of the fold. You can type values or drag the lines. You can force the solution to be symmetric. The result will be similar to this:
Transform Definition (Stretch / 2d scale)
Using the 'Select work-flow' option '2d scale' the following UI is presented:
This very basic UI defaults to fitting the input image from the sensor to the output image. Say the input image is 1280x960 and the output is 1280x720. The transform will scale the image 1:1 in the horizontal direction and 720/960, or ¾, in the vertical direction. The rotation and zoom are applied on top of this.
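The scale factors implied by that example are simply the ratios of the two frame sizes, as in this small sketch (rotation and zoom not included):

```python
in_w, in_h = 1280, 960     # image from the sensor / color pipe
out_w, out_h = 1280, 720   # companion output

scale_x = out_w / in_w     # 1.0  -> no horizontal resampling
scale_y = out_h / in_h     # 0.75 -> vertical direction squeezed to 3/4
```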
Transform Preview (Simulated error)
To see the effects of the 'Layout' it's a good idea to have the preview launched and running in simulated error mode. Pressing the 'Preview' button should create this window. You don't need to load an image, just select the simulated error tab on the right and you should get something like this:
This shows the error in the transform. The transform has an exact analytical solution, but the hardware uses a series of piecewise linear tiles to approximate it. It also uses fixed point arithmetic. This display shows the difference between the analytical solution and the solution the hardware is using. This is a straight de-warp using a tile size of 64x64 and you can see it's a maximum of 1.3 pixels out. This display is live and will update as changes are applied.
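For readers who want a feel for where that error comes from, the sketch below models the idea in a much-simplified way: it compares an exact analytical mapping against a per-tile bilinear approximation. The warp, image size and tile size are invented for illustration and are not the Ste's actual transform or arithmetic (fixed point effects are ignored):

```python
import numpy as np

W, H, TILE = 1280, 720, 64

def exact_map(x, y):
    """Exact (analytical) source coordinates for output coordinates (x, y); mild barrel-style warp."""
    cx, cy = W / 2.0, H / 2.0
    dx, dy = x - cx, y - cy
    k = 1.0 + 0.15 * (dx * dx + dy * dy) / (cx * cx + cy * cy)
    return cx + k * dx, cy + k * dy

# Sample the exact mapping only at the tile corners, as a tile grid would.
gx = np.arange(0, W + TILE, TILE, dtype=float)    # 0, 64, ..., 1280
gy = np.arange(0, H + TILE, TILE, dtype=float)    # 0, 64, ..., 768 (covers 720)
cu, cv = exact_map(*np.meshgrid(gx, gy))

# Evaluate the tile approximation at every output pixel by bilinear interpolation.
X, Y = np.meshgrid(np.arange(W, dtype=float), np.arange(H, dtype=float))
i, j = (X // TILE).astype(int), (Y // TILE).astype(int)
fx, fy = (X % TILE) / TILE, (Y % TILE) / TILE

def bilerp(c):
    """Bilinear interpolation of the corner samples inside each tile."""
    return ((1 - fy) * ((1 - fx) * c[j, i] + fx * c[j, i + 1])
            + fy * ((1 - fx) * c[j + 1, i] + fx * c[j + 1, i + 1]))

eu, ev = exact_map(X, Y)
err = np.hypot(bilerp(cu) - eu, bilerp(cv) - ev)
print("max approximation error:", round(float(err.max()), 3), "pixels")
```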
Transform Layout
Use the 'Layout' button to launch the following dialog:
The transform is implemented using piecewise linear tiles. This UI controls the sizes of the tiles. Using more tiles will increase the accuracy of the transform but will take longer to download and will use more memory. A rough idea of the size of the transform is given in the 'Sampling resolution'. An exact size can be seen by launching the register view and watching the value of 'TOTAL LENGTH'. This is the size in words after compression and a header has been added.
The tile size is controlled by making selections in the 'Select sampling' box. Selecting a size of 32x64 or 64x32 uses roughly the same transform memory, so which is best? Here are the two settings and the associated simulated error:
Note the maximum error: with 32x64 the maximum error is 1.1 pixels, but with 64x32 it is only 0.7 pixels. As they use about the same memory, 64x32 is better for this transform.
The origin of the cells can be controlled using the Offsets. This defaults to centering the tiles on the image. I don't believe there is any advantage in setting it another way, but it may depend upon the transform.
Left and right black borders may also be controlled in this dialog.
Causal Delay
The STE requires semi-random access to the frame. Ideally, one would have two full framebuffers, one being built by the color pipe and one being accessed by the STE. Cost and latency considerations have resulted in a relatively small shared buffer (called the pixel buffer) between the STE and color pipe. The STE must be started some time after the color pipe starts to fill the pixel buffer so that all the data the STE needs will be available. This delay is known as the causal delay. This is calculated for you.
This shows two transforms, one with and one without a solution. The main display will immediately show a red or green background for the transform. A transform with a red background will still be written to hardware but is likely to create artifacts (The application still calculates the best causal delay it can).
If you wish to know more details or adjust the causal delay by hand, press the Causal Delay(s) button and a dialog will come up. This will create a timing diagram for each transform. The left hand side is the start of the frame; the right hand side is the end and shows the vertical blank. In the case where there is a solution, a green bar shows the earliest and latest that the causal delay can be set to without artifacts. The shorter this bar is, the closer the transform is to using all the pixel buffer. The vertical red line shows the actual location of the causal delay used. It is, by default, set to 10% from the start. You can experiment with modifying this by changing the value on the right. This number is the number of pixels from the start of the frame. This is not the value written in the blob, as the hardware has clocking factors and datums that need to be taken into account.
If the default is adjusted it will be used even if the transform is modified. A reset will cause the default to be used and it will update every time the transform is modified. This is a live window. Changes made can immediately be written to hardware without closing this window. Changes made to the transform can also be seen immediately.
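The feasibility test behind the green bar can be pictured with the deliberately simplified model below. It assumes the color pipe writes one input line per line time into a circular buffer, that the Ste consumes output lines at the same rate, and it uses a made-up transform footprint; the real hardware adds clocking factors and datums that this sketch ignores:

```python
buffer_lines = 96   # assumed pixel-buffer depth, in lines

def needed_rows(y_out):
    """Lowest and highest input line touched when producing output line y_out (hypothetical warp)."""
    return y_out - 20, y_out + 30

def causal_delay_range(out_lines=720):
    earliest, latest = 0, 10**9
    for y in range(out_lines):
        first, last = needed_rows(y)
        earliest = max(earliest, last - y)                     # data must already be in the buffer...
        latest = min(latest, first - y + buffer_lines)         # ...and not yet overwritten
    return (earliest, latest) if earliest <= latest else None  # None -> no causal solution

print(causal_delay_range())   # e.g. (30, 76) with these numbers: the ends of the green bar
```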
Lens Correction
This section describes the lens correction functionality. It is used by, and shared across, all the de-warp workflows.
Lens Model
The lens correction requires a model of the lens. A simple model is used to convert the angle of incidence of a light source to a distance from the center of the lens. A cubic hermite spline is used to interpolate a series of angle-distance pairs. The table on the right hand side shows these pairs.
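As a rough illustration of that kind of model, the sketch below interpolates invented angle/distance pairs with a monotone cubic Hermite spline (PCHIP). The pairs are not a real lens, and the tool's exact choice of spline tangents is not documented here:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator   # a cubic Hermite spline with monotone tangents

angles_deg = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0])   # angle of incidence
radii_px   = np.array([0.0, 105.0, 208.0, 305.0, 392.0, 466.0, 524.0])  # distance from lens center

lens = PchipInterpolator(angles_deg, radii_px)
print(float(lens(25.0)))    # radial distance (pixels) for a ray at 25 degrees
```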
Lens Model Units
The distance in the model can have two different units: pixels or µm. This is selected with the Length units switch. The current pixel size is used for any conversion. If you import a model in pixels, set the pixel size (say 2.0; get it from the sensor datasheet) and switch units; the table will be converted to show the new lengths. The above table was converted to µm assuming a 2µm sensor pixel size. By saving lens models in µm they can be made independent of the sensor they are attached to. If you import a model in µm it is important to change the units to µm before it is imported. If you don't do this it may have a conversion applied when you aren't expecting one.
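The conversion itself is just a multiplication by the pixel size, for example:

```python
pixel_size_um = 2.0                      # from the sensor datasheet
radius_px = 305.0
radius_um = radius_px * pixel_size_um    # 610.0 µm
radius_px_back = radius_um / pixel_size_um
```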
Lens Model Center
The center of the lens is by default assumed to be the center of the physical active pixels on the sensor. The default may be adjusted by modifying the 'Lens Offset' box. Positive values in the 'Horizontal' field will result in the center being moved right, positive values in the 'Vertical' field will result in the center moving up.
Built in Lens Models
There are a few known lenses built in, and some algorithms to create analytical lens models too. The known lens models are in pixels; please set the units to pixels before selecting one so that an inadvertent conversion does not occur. These lens models are for specific sensor pixel sizes.
Algorithms to create analytical lens models are found on the second tab. There are five different types of projection and these are described well on the web. To use them, select an algorithm and a focal length and press Apply; the table will be overwritten.
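The guide does not name the five projection types, so the sketch below uses the usual textbook models (rectilinear, equidistant, equisolid, stereographic, orthographic) purely as an illustration of what such an algorithm computes from a focal length:

```python
import math

def projected_radius(projection, focal_length, theta):
    """Radial distance on the image plane for a ray at angle theta (radians)."""
    if projection == "rectilinear":
        return focal_length * math.tan(theta)
    if projection == "equidistant":
        return focal_length * theta
    if projection == "equisolid":
        return 2.0 * focal_length * math.sin(theta / 2.0)
    if projection == "stereographic":
        return 2.0 * focal_length * math.tan(theta / 2.0)
    if projection == "orthographic":
        return focal_length * math.sin(theta)
    raise ValueError(f"unknown projection: {projection}")

# Build an angle/radius table like the one 'Apply' overwrites
# (the 400-pixel focal length is an arbitrary illustrative value).
table = [(deg, projected_radius("equidistant", 400.0, math.radians(deg)))
         for deg in range(0, 91, 10)]
```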
Editing Lens Models
Any field can be directly edited: click on any cell and edit it. It is easy to enter a bad lens model. The expectation is that both the radii and the angles will rise. There should be at least two points and the first point should be (0,0). It is possible to add valid data pairs that are not in order but it's not recommended (they will be sorted before they are modelled by the application). Any row can be selected and removed. A row can be added after any row. A poor lens model can lead to artifacts. Check the Transform Preview (Simulated Error) to look for lens model problems.
Import and Export of Lens Models
Lens models can be imported and exported as a list of comma separated pairs. No information about the units is stored; this is to match historical file formats. It is recommended that the units are encoded into the file name:
Sunex DLS219-AR0140.csv
Sunex DLS219-um.csv
The lens information can also be imported from a STE xml configuration file but can't be exported in this form.
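A minimal reader for that comma-separated format might look like the sketch below; the file name is hypothetical and, as noted above, the units have to be tracked outside the file:

```python
import csv

def load_lens_model(path):
    """Load angle/distance pairs from a comma-separated file (units not stored in the file)."""
    pairs = []
    with open(path, newline="") as f:
        for row in csv.reader(f):
            if len(row) >= 2:
                pairs.append((float(row[0]), float(row[1])))
    return sorted(pairs)   # the application also sorts pairs before modelling them

pairs = load_lens_model("my_lens-um.csv")
```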
Command Line Tool
User Guide is available here.
Date | Author | Notes
---|---|---
2018 Oct 15 | John Neave | First draft.
2018 Oct 31 | John Neave | Second draft.
2020 Mar 13 | John Medalen | Added Command Line Tool and TOC.