For example, filenames such as foo.1.tif and foo.10.tif sort as you would expect: RV sorts images by frame number in numeric order, and it sorts image base names in lexical order. What this means is that RV will sort images into sequences the way you expect it to. Padding tricks are unnecessary for RV to get the image order correct; image order will be interpreted correctly. To play an image sequence in RV from the command line, you could start RV like this:
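A minimal sketch of such a command line (the sequence name foo and the .tif extension are hypothetical; the # wildcard is RV's shorthand for the frame number):

```shell
# Play the whole "foo" sequence; RV matches every frame on disk.
rv foo.#.tif

# Play only frames 2 through 8 of the sequence.
rv foo.2-8#.tif
```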

The first example above plays all frames in the foo sequence; the second plays frames 2 through 8. The next two examples use the printf-like syntax accepted by Nuke. In the first case, the entire frame range is specified with the assumption that the frame numbers are zero-padded to four characters (this notation will also work with 6 or other amounts of padding). In the final two examples, the range is limited to frames 2 through 8, and the range is passed as a separate argument.
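Sketches of the printf-like forms described above (the filenames are hypothetical, and the exact placement of the range reflects our reading of the text):

```shell
# Entire range, assuming frame numbers zero-padded to four digits.
rv foo.%04d.tif

# Frames 2 through 8 only, with the range given inline...
rv foo.2-8%04d.tif

# ...or with the range passed as a separate argument.
rv foo.%04d.tif 2-8
```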

Sometimes, you will encounter or create an image sequence which is purposefully missing frames. For example, you may be rendering something that is very expensive to render and only wish to look at every tenth frame. In these examples, RV will play every tenth frame, starting at frame 1.

So it will expect frames 1, 11, 21, 31, 41, and so on. If there is no obvious increment but the frames need to be grouped into a sequence, you can list the frame numbers with commas. In many cases, RV can detect file types automatically even if a file extension is not present or is mislabeled.
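Hedged examples of both notations (the x10 increment and the comma list follow RV's sequence syntax as described here; the filenames are hypothetical):

```shell
# Every tenth frame: 1, 11, 21, 31, ...
rv foo.1-101x10#.tif

# No regular increment: list the frames explicitly.
rv foo.1,5,12,13#.tif
```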

Use the same format for exporting multiple annotated frames. RV can handle negative frames in image sequences, including zero-padded negative frame numbers.

To specify in and out points on the command line in the presence of negative frames, just include the minus signs. Some of RV's environment variables should be set to a colon-separated list of values, even on Windows.

You can create source material from multiple audio, image, and movie files on the command line by enclosing them in square brackets. Note that there must be spaces around the brackets. You cannot nest brackets, and the brackets must be matched: for every begin bracket there must be an end bracket.
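A sketch of the bracket syntax (filenames hypothetical; note the spaces around each bracket):

```shell
# Two sources: an image sequence with an audio layer, and a movie.
rv [ foo.#.tif foo.wav ] [ bar.mov ]
```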

Frequently a movie file or image sequence needs to be viewed with one or more separate audio files. When you have multiple layers on the command line and one or more of the layers are audio files, RV will play back all of the audio files mixed together along with the images or movies.

For example, you can play back two WAV files with an image sequence, or, if you have a movie file which already has audio, you can still add additional audio files to be played. It's not unusual to render left and right eyes separately and want to view them together as stereo. When you give RV multiple layers of movie files or image sequences, it uses the first two as the left and right eyes. It's OK to mix and match formats with layers.
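Sketches of these layer combinations (all media names are hypothetical):

```shell
# An image sequence mixed with two WAV files.
rv [ foo.#.tif dialog.wav music.wav ]

# Extra audio added to a movie that already has an audio track.
rv [ shot.mov extra.wav ]

# Stereo: the first two layers become the left and right eyes;
# mixing formats across layers is fine.
rv [ left.#.exr right.mov ]
```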

As with the mono case, any number of audio files can be added. There are a few arguments which can be applied within the square brackets (see Table 3). The range start sets the first frame number; for example, you can set the start frame of a movie file (with or without a time code track) so that it starts at a given frame. You must use the square brackets to set per-source arguments, and the square brackets must be surrounded by spaces.

The -in and -out per-source arguments are an easy way to create an EDL on the command line, even when playing movie files. The point of the -fps argument is to provide a scaling factor in cases where the frame rate of the media cannot be determined and you want to play an audio file with it. Giving the source its natural rate this way ensures that the video and audio are synced properly no matter what frame rate you use for playback. To clarify further, the per-source -fps flag has no relation to the frame rate that is used for playback; in general, RV plays all loaded media at whatever single frame rate is currently in use.
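For example, per-source arguments might be used like this (the frame numbers and filenames are hypothetical):

```shell
# A simple command-line EDL built from two movies.
rv [ shotA.mov -in 10 -out 50 ] [ shotB.mov -in 101 -out 150 ]

# Give the sequence its natural rate so the WAV stays in sync
# regardless of the playback fps.
rv [ foo.#.jpg foo.wav -fps 24 ]
```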

There are a number of things you should be aware of when using source layers. In most cases, RV will attempt to do something with what you give it.

However, if your input is logically ambiguous the result may be different than what you expect. Here are some things you should avoid using in layers of a single source:.

Here are some things that are OK to do with layers:. If you give RV the name of a directory instead of a single file or an image sequence it will attempt to interpret the contents of the directory. RV will find any single images, image sequences, or single movie files that it can and present them as individual source movies.

If you navigate in a shell to a directory that contains an image sequence, you need only type the following to play it; you don't even need to get a directory listing. If RV finds multiple sequences or a sequence and movie files, it will sort and organize them into a playlist automatically. RV will attempt to read files without extensions if they look like image files (for example, if the file name ends in a number). If RV is unable to parse the contents of a directory correctly, you will need to specify the image sequences directly to force it to read them.
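From inside such a directory, the invocation is just:

```shell
# Play whatever sequences or movies RV finds in the current directory.
rv .
```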

The goal of RV's user interface is to be minimal in appearance but complete in function. By default, RV starts with no visible interface other than a menu bar and the timeline, and even these can be turned off from the command line or preferences. While its appearance is minimal, its interaction is not: RV has prefix keys that, when pressed, remap the entire keyboard, the mouse bindings, or both.

The main menu and pop-up menus allow access to most functions and provide hot keys where available. RV makes one window per session. Each window has two main components: the viewing area and the menu bar. On Linux and Windows, each RV window has its own attached menu bar.

A single RV process can control multiple independent sessions on all platforms. On the Mac and Windows, it is common for there to be only a single instance of RV running; on Linux, it is common to have multiple separate RV processes running. Many of the tools that RV provides are heads-up widgets. The widgets live in the image display area or are connected to the image itself.

Aside from uniformity across platforms, the reason we opted for this style of interface is primarily to make RV function well in full-screen mode. RV provides feedback about its current state near the top left corner of the window (for example, a feedback widget indicating full-color display). The main RV window has two toolbars which are visible by default. The upper toolbar controls which view is displayed, viewing options, and current display device settings.

The lower toolbar controls playback, has tool buttons to show more functions, and holds the audio controls. It is in three sections from left to right. The tool launch buttons toggle RV's main user interface components, like the session manager or the heads-up timeline. The play controls control playback in the current view; these are similar to the heads-up play controls available from the timeline configuration.

The loop mode determines what happens at the end of the timeline, and the audio controls modify the volume and mute state. The frame content, display device settings, channel view, and stereo mode on the top toolbar are also available under the View menu. See Chapter 7 for more information on what these settings do. The full-screen toggle is also under the Window menu.

You can toggle the visibility of each toolbar under the Tools menu. There are two options for loading images, sequences, and movies via the file browser. In the file browser, you may choose multiple files. RV tries to detect image sequences from the names of image files; movie files are treated as individual sequences.

If RV detects a pattern, it will create an image sequence for each unique pattern. If no pattern is found, each individual image will be its own sequence. Audio files can be loaded into RV using the file browser. The first sequence in the layer will determine the overall length of the source. Any number of audio files can be added as layers to the same source, and they will be mixed together on playback. The file browser has three file display modes. Sequences of images appear as virtual directories in the file browser. In general, the File Details view will be the fastest.

File Browser Show File Details. File Dialog in Column View Mode. File Dialog Showing Media Details. Favorite locations can be remembered by dragging directories from the main part of the file browser to the side bar on the left side of the dialog box. Recent items and places can be found under the path combo box.

You can configure the way the browser uses icons from the preferences drop down menu on the upper right of the browser window. On all platforms, you can drag and drop file and folder icons into the RV window. RV will correctly interpret sequences that are dropped either as multiple files or inside of a directory folder that is dropped.

RV uses smart drop targets to give you control over how files are loaded into RV. You can drop files as a source or as a layer.

As you drag the icons over the RV window, the drop targets will appear; just drop onto the appropriate one. On desktops that support the XDnD protocol, it is possible to drag file icons from a file manager onto an RV window. If multiple icons are dropped onto RV at the same time, the order in which the sequences are loaded is undefined.

To associate an audio file with an image sequence or movie, drop the audio file as a layer, rather than as a source. RV normalizes image geometry to fit into its viewing window.

So, for example, if these images are viewed as a sequence — one after another — the smaller of the two images will be scaled to fit the larger.

Of course, if you zoom in on a high-resolution image, you will see more detail than in a lower-resolution image. When necessary, you can view the image scaled so that one image pixel is mapped to each display pixel. On startup, RV will attempt to size the window to map each pixel to a display pixel, but if that is not possible, it will settle on a smaller size that fits. You can always set the scale to 1:1 afterwards. You can manipulate the pan and zoom of the image using the mouse, the row of number keys, or the keypad on an extended keyboard if Num Lock is on.

By holding down the control key (the Command key on a Mac) and the left mouse button, you can zoom the image in or out by moving left and right. By holding down the Alt key (the Option key on a Mac), you can pan the image in any direction. If you are accustomed to Maya camera bindings, you can use those as well. To frame the image (automatically pan and zoom it to fit the current window dimensions), hit the 'f' key. If the image has a rotation, it will remain rotated.

The pixel inspector widget can be accessed from the Tools menu or by holding down Shift and clicking the left mouse button. The inspector will appear, showing you the source pixel value at that point on the image. If you drag the mouse around over the image while holding down the Shift key, the inspector widget will also show you an average value. You can move the widget by clicking on it and dragging.

To remove the inspector widget from the view, either move the mouse to the top left corner of the widget and click on the close button that appears, or toggle the display with the Tools menu item or hot key.

The inspector widget is locked to the image. If you pan, zoom, flip, flop, or rotate the image, the inspector will continue to point to the last pixel read. If you play a sequence of images with the inspector active, it will average the pixel values over time. If you drag the inspector while playback occurs it will average over time and space.

RV shows either the source pixel values or the final rendered values. The source value represents the value of the pixel just after it is read from the file; no transforms have been applied to the pixel value at that point. You can see the final pixel color (the value after rendering) by changing the pixel view to the final pixel value from the right-click popup menu. The value is normalized if the image is stored as non-floating point, so values in these types of images will be restricted to the [0,1] range.

Floating point images pass the value through unchanged, so pixels can take values below zero or above one. From the right-click popup menu it's possible to view the pixel values as normalized to the [0,1] range or as 8, 10, 12, or 16 bit integer values. Wipes allow you to compare two or more images or sequences when viewing a stack. Load the images or sequences that you wish to compare into RV as sources, not as layers. Now you can grab the edges of the top image and wipe them back to reveal the image below.

You can grab any edge or corner, and you can move the entire wipe around by grabbing it in the exact center.

Also, by clicking on the icon that appears at the center or corner of a wipe, or via the Wipes menu, you can enable the wipe information mode, which will indicate which edge you are about to grab.

Wipes can be used with any number of sources. RV has a special UI mode for editing parameters such as color corrections, volume, and image rotation.

When editing parameters, the mouse and keyboard are bound to a different set of functions. On exiting the editing mode, the mouse and keyboard revert to the usual bindings. To edit the parameter value using the mouse you can either scrub like a virtual slider or use the wheel. If you want to eyeball it, hold the left mouse down and scrub left and right. By default, when you release the button, the edit mode will be finished, so if you want to make further changes you need to re-enter the edit mode.

The scroll wheel increments and decrements the parameter value by a predefined amount. Unlike scrubbing with the left mouse button, the scroll wheel will not exit the edit mode.

When multiple sources are visible, as in a Layout view, parameter sliders will affect all sources. Alternatively, you can use 's' to select only the source under the pointer for editing. You can exit the edit mode by hitting the Escape key, the space bar, or most other keys. To change the parameter value using the keyboard, hit the Enter or Return key; RV will prompt you for the value.

The parameter can then be incremented and decremented from the keyboard. To end the keyboard interactive edit, hit the Escape or space bar keys. When image pixels are scaled to be larger or smaller than display pixels, resampling occurs. When the image is scaled (zoomed), RV provides two resampling methods (filters): nearest neighbor and linear interpolation. You can see the effects of the resampling filters by making the scale greater than 1:1.

Nearest Neighbor and Linear Interpolation Filtering.

Nearest neighbor filtering makes pixels into blocks, which is helpful when trying to determine an exact pixel value. It's important to know about image filtering because of the way in which RV uses the graphics hardware. When an image is resampled (as it is when zoomed in) and the resampling method produces interpolated pixel values, correct results are really only obtained if the image is in linear space.

Because of the way in which the graphics card operates, image resizing occurs before operations on color. This sequence can lead to odd results with non-linear-space images if the linear filter is used.

The results are only incorrect if you meant to do something else! There are two solutions to the problem: use the nearest neighbor filter, or convert the image to linear space before filtering. The only downside with the second method is that the transform must happen in software, which is usually not as fast. Of course, this only applies to images that are not already in linear space.

Why does RV default to the linear filter? Most of the time, images and movies come from file formats that store pixel values in linear scene-referred space so this default is not an issue.

It also looks better. The important thing is to be aware of the issue. If RV is displaying floating point data directly, linear filtering may not occur even though it is enabled. This is a limitation of some graphics cards that will probably be remedied via driver update or new hardware in the near future.

Many graphics cards can do filtering on 16-bit floating point images but cannot do filtering on 32-bit floating point images. RV automatically detects the card's capabilities and will turn off filtering for images if necessary. Floating point linear, 8 bit linear, and 8 bit nearest neighbor filtering.

Graphics hardware does not always correctly apply linear filtering to floating point images. Filtering can dramatically change the appearance of certain types of images; in this case, the image is composed of dense lines and is zoomed out (scaled down). RV can display any size image as long as it can fit into your computer's memory. When an image is larger than the graphics card can handle, RV will tile the image display.

This makes it possible to send all the pixels of the image to the card for display. The downside is that all of the pixels are sent to the display even though you probably can't see them all. One of the constraints that determines how big an image can be before RV will tile it is the amount of available memory in your graphics card and limitations of the graphics card driver.

On most systems, up to 2k by 2k images can be displayed without tiling as long as the image has 8-bit integer channels. In some cases (newer cards), the limit is 4k by 4k. However, there are other factors that may reduce the limit. For sequences, this may affect playback speed, since tiling is slightly less efficient than not tiling.

Tiling also affects interactive speed on single images; if tiling is not on, RV can keep all of the image pixels on the graphics card. If tiling is on, RV has to send the pixels every time it redraws the image. You can move the widget by clicking and dragging.

The widget shows the geometry and data type of the image as well as associated meta-data attributes in the file. Channel map information—the current mapping of file channels to display channels—is displayed by the info widget as well as the names of channels available in the image file; this display is especially useful when viewing an image with non-RGBA channels.

If the image is part of a sequence or movie the widget will show any relevant data about both the current image as well as the sequence it is a part of. For movie files, the codecs used to compress the movie are also displayed. If the movie file has associated audio data, information about that will also appear. To remove the image information widget from the view either move the mouse to the top left corner of the widget and click on the close button that appears or toggle the display with the Tools menu item or hot key 'i'.

RV can play multiple images, image sequences and movie files as well as associated audio files. Play controls are available via the menus, keyboard, and mouse.

Timeline With Labelled Parts. This timeline shows in and out points, frame count between in and out points, total frames, target fps and current fps. In addition, if there are frame marks, these will appear on the timeline as seen in Figure 4. The current frame appears as a number positioned relative to the start frame of the session. If in and out points are set, the relative frame number will appear at the left side of the timeline — the total number of frames between the in and out points is displayed below the relative frame number.

By clicking anywhere on the timeline, you can change the current frame. Clicking and dragging will scrub frames, as will rolling the mouse wheel. You can grab and drag either end of the in/out range, or grab in the middle to drag the whole range. There are two FPS indicators on the timeline. Navigation controls move the current frame to the next or previous mark (or to a source boundary, if there are no marks). A red dot with a number indicates how many frames RV has lost since the last screen refresh.

The timeline can be configured from its popup menu. Use the right mouse button anywhere on the timeline to show the menu. If you show the popup menu by pointing directly at any part of the timeline, the popup menu will show that frame number, the source media there, and the operations will all be relative to that frame. For example, without changing frames you can set the in and out point or set a mark via the menu. Timeline Configuration Popup Menu. The Configuration menu has a number of options:.

Hide or Show the playback control buttons on the right side of the timeline. This was the default behavior in previous versions of RV. The timeline is now drawn in the margin by default.

Draw the timeline at the top of the view; the default is to draw it at the bottom. When selected, the in and out points will be labeled using the current method for displaying the frame (global, source, or time code). This controls how the arrow keys behave at the in and out point: when selected, the frame will wrap from in to out or vice versa. When selected, the main media file name for the frame under the pointer (not the current frame) will be shown just above or below the timeline.

When selected, a small triangle next to the current frame indicates the direction in which playback will occur when started. Realtime mode (when Play All Frames is not selected) uses a realtime clock to determine which frame should be played. When in realtime mode, audio never skips, but the video can. When Play All Frames is active, RV will never skip frames, but will adjust the audio if the target fps cannot be reached.

When the timeline is visible, skipped frames will be indicated by a small red circle towards the right-hand side of the display; the number in the circle is the number of frames skipped. There are two frame ranges associated with each view in an RV session: the start and end frames, and the in and out points. The in and out points are always within the range of the start and end frames.

RV sets the start and end frames automatically based on the contents of the view. The in and out points are set to the start and end frames by default. A mark in RV is nothing more than a frame number which can be stored in an RV file for later use. The timeline will show marks if any are present. While not very exciting in and of themselves, marks can be used to build more complex actions in RV. For example, RV has functions to set the in and out points based on marks.

By marking shot boundaries in a movie file, you can quickly loop individual shots without selecting the in and out points for each shot. Marking and the associated hot keys for navigating marked regions quickly become indispensable for many users. These features make it very easy to navigate around a movie or sequence and loop over part of the timeline. Producers and coordinators who often work with movie files of complete sequences for bidding or for client reviews find it useful to mark up a movie at the shot boundaries to make it easy to step through and review each shot.

The timeline magnifier can display the audio waveform of any loaded audio. Note that this is the normalized sum of all audio channels loaded for the given frame range. To preserve interactive speed, the audio data is not rendered into the timeline until that section of the frame range is played. You can turn on Scrubbing , in the Audio menu, to force the entire frame range to be loaded immediately.

Also, if Scrubbing is on, audio will play during scrubbing and during single-frame stepping. All of these manipulations can be performed during playback, using the hot keys mentioned in Table 4. The timeline magnifier configuration menu is a subset of the regular timeline menu (see Figure 4). Timeline Magnifier Configuration Popup Menu. When playing back audio with an image sequence or movie file, RV can be in one of two modes. When a movie with audio plays back at its native speed, the video is locked to the audio stream.

This ensures that the audio and video are in sync. If you change the frame rate of the video, the opposite occurs: the audio follows the video. When this happens, RV will synthesize new audio based on the existing audio in an attempt to either stretch or compress the playback in time. When pushed to the limits, the audio synthesis can create audible artifacts.

RV can handle audio files with any sample rate and can re-sample on the fly to match the output sample rate required by the available audio hardware. Use of mp3 and audio-only AAC files is not supported. Audio settings can be changed using the items on the Audio Menu. Volume, time Offset, and Balance can be edited per source or globally for the session. The RV Preferences Audio tab lets you choose the default audio device and set the initial volume as well as some other technical options that are rarely changed.

For visualizing the audio waveform see Section 4. RV provides audio preferences in the Preferences dialog. The most important audio preference is the choice of the output device from those listed. In practice this will rarely change. Preferences also let you set the initial volume for RV. The option to hold audio open is for use on Linux installations where audio system support is poor see the next section on Linux Audio. The other preferences are there for fine tuning performance in extreme cases of marginal audio hardware or support - they will almost never change.

From RV version 4 onward, audio output is based on Qt audio. Audio Preferences on Mac. On the Mac and Windows there is only a single entry in this menu; on Linux, however, there may be many. See Appendix E for details about Linux audio. Typical output rates are 44.1 or 48 kHz, with 32-bit float or 16-bit integer output formats.

Global Audio Offset is the means by which audio data can be time shifted backwards or forwards in time. The effect of this preference is observable in the audio waveform display.

For example, a nonzero value shifts the entire audio track relative to the video. It is measured in milliseconds and defaults to zero. The audio waveform rendered in RV is not affected by the value of this preference, since it does not offset the audio data that is cached. The default values are recommended. Ideally these numbers are low. There are very few circumstances in which it's a good idea to turn this off.

On some Linux distributions, turning this off will result in no audio at all after the first play. When on, RV will use the audio hardware clock if one is available; otherwise it will use a CPU timer in software. In most cases this should be left on. RV can usually detect when the audio clock is unstable or inaccurate and switch to the CPU timer automatically. However, if playback with audio appears jerky even when caching is on, it might be worth turning it off.

It influences the overall AV sync lag, so expect to see differences in AV sync readings when the feature is enabled versus disabled. In either case, the AV sync lag can be corrected via the Device Latency preference. Note that this feature is Linux-only and available only for the Platform Audio module. It defaults to disabled.

RV audio works well on Linux in many cases, but may be limited in others. RV supports special configuration options so that users can get the best audio functionality possible within the limitations of the vintage and flavor of Linux being used.

See the Appendix E for complete details. This would include six, eight or more channel layouts for surround sound speaker systems like 5.

The list of all possible channel layouts that RV supports is described in Appendix J. Note that for SDI audio, stereo, 5.1, and 7.1 channel layouts are supported. To correct for an AV sync lag, first measure the delay with an AV sync meter.

Then input the number from the meter into the Device Latency preference. The AV sync measurement can be influenced by the following audio preferences or playback settings: To generate a sync flash sequence for use in measuring the AV sync at a particular frame rate, the following RV command line can be used.

RV has a three-state cache: off, look-ahead caching, and region caching. The region cache reads frames starting at the in point and attempts to fill the cache up to the out point.

Timeline Showing Cache Progress.

If there is not enough room in the cache, RV will stop caching. The region cache can be toggled on or off from the Tools menu or by using the shift-C hot key. Look-ahead caching can be activated from the Tools menu or by using the meta-l hot key. The look-ahead cache attempts to smooth out playback by pre-caching frames right before they are played. If RV can read the files from disk at close to the frame rate, this is the best caching mode.

If playback catches up to the look-ahead cache, playback will be paused until the cache is filled or for a length of time specified in the Caching preferences. At that point playback will resume. RV caches frames asynchronously in the background. If you change frames while RV is caching it will attempt to load the requested frame as soon as possible. If the timeline widget is visible, cached regions will appear as a dark green stripe along the length of the widget. The stripe darkens to blue as the cache fills.

The progress of the caching can be monitored using the timeline. On machines with multiple processors or cores, the caching is done in one or more completely separate threads. Note that there is usually no advantage to setting the look-ahead cache size to something large: if playback does not overtake the caching, a small look-ahead cache is sufficient, and if it does, you probably want to use region caching anyway.

RV provides users with fine-grained color management and can support various color management scenarios. Note that there is no CDL slot for the display by default. RV's color transforms are separated into two menus: the Color menu contains transforms that will be applied to an individual source (whichever source is current in the timeline), and the View menu contains transforms that will be applied to all of the sources.

This provides the opportunity to bring diverse sources (say, Cineon log files, QuickTime sRGB movies, and linear-light EXRs) into a common working color space (typically linear) and then to apply a common output transform to get the pixels to the display. RV supports playback of stereoscopic source material.

RV has two methods for handling stereo source material. First, left and right eyes can be supplied as separate layers; the layers do not need to be the same resolution or format, because RV always conforms media to fit the display window. Second, RV supports stereo QuickTime movies (taking the first two video tracks as left and right eyes) and multi-view EXR files.

RVIO can author stereo QuickTime movies and multi-view EXR files as well, so a complete stereo publishing and review pipeline can be built with these tools. See Chapter 12 for more information about how stereo is handled. Key and mouse bindings, as well as menu bar menus, are loaded at run time. Commonly used key bindings include: isolate the red, green, blue, or alpha channel; rotate the image 90 degrees to the right (clockwise); rotate the image 90 degrees to the left (counter-clockwise). Mouse button 1 is normally the left mouse button, and button 3 is normally the right button on two-button mice.

Button 2 is either the middle mouse button or activated by pushing the scroll wheel on mice that have them. RV stores configuration information in a preferences file in the user home directory. Each platform has a different location and possibly a different format for the file. Each viewer window represents an RV session. A session is composed of one or more source movies, frame markers, image transforms, color corrections, and interactive states like caching and playback speed.

The source movies are combined according to the session type. An RV session can be saved as a session file. If you change source material on disk and then load a session file that references it, the session will reflect the changes on disk.

Tools that operate on GTO files can be used on session files. A session is represented internally as a Directed Acyclic Graph (DAG) in which images and audio pass from the leaves to the root, where they are rendered. Each node in the DAG has a number of parameters or state variables which control its behavior. RV's user interface is essentially a controller which simply changes these parameters and state variables.

A description of each of the node types can be found in the Reference Manual. Other GTO tools, like the Python module, can be used to edit session files. The DAG nodes that are visible in the user interface are called Views. In addition to any Sources you've loaded, the three views that all sessions have are the Default Sequence, which shows all your sources in order; the Default Stack, which shows all your sources stacked on top of one another; and the Default Layout, which arranges all the sources in a grid, a column, a row, or any other custom layout of your own design.

Whenever a Source is added to the session, it is automatically added to the inputs of each of the default views, not to user-defined views.

The session manager shows an outline of the session contents from which you can create, modify, and edit new sequences, stacks, layouts, and more. The session manager interface has two parts; by double-clicking an icon in the top portion you can switch to another view.

By default RV will create a default sequence, stack, and layout which include all of the sources in the session. When a new source is added, these are automatically updated to include it. The Add View and Folders Menus. A new view can be created via the Add View menu. Anything selected in the session outline becomes a member (input) of the newly created view. Alternatively, you can create a view and then add to or subtract from it afterwards. The top items in the menu create new views from existing views.

The bottom items create new sources which can be used in other views. Folder views can be created either from the add menu or the folders menu. The folders menu lets you create a folder from existing views or with copies of existing views.

When a view is copied in the session manager, the copy is really just a reference to a single object. You can add to an existing view by first selecting it by double clicking on it, then dragging and dropping items from the session outline into the inputs section of the session manager. Drag and drop of input items makes it possible to rearrange the ordering of a given view.

For example, in a sequence the items are played in the order they appear in the inputs list. By rearranging the items using drag and drop, or the up and down arrows next to the inputs list, you can reorder the sequence.

To remove an item from a view, select the item(s) in the inputs list and hit the delete (trash can) button to the right of the inputs list. Similarly, the trash can button in the upper panel will delete a view from the session. The Edit interface for source views is currently used only to adjust editorial information; in the future it may provide access to other per-source information like color corrections, LUTs, etc.

The Source Edit Interface. In the session manager, a source can be opened, revealing the media it is composed of. When multiple layers or views (subcomponents) are present in the media, the session manager presents a radio-button interface in one of its columns. Each subcomponent in the media has its own selectable toggle button. When a subcomponent is selected, the source shows only that subcomponent; stereo or any other multiple-view effect is disabled.

You can go back to the default by either double-clicking on the media or deselecting the selected subcomponent (toggling it off). In addition to restricting the media to one of its subcomponents, the session manager also allows you to build new views which include more than one subcomponent. When RV does this, it creates new temporary sources dedicated to the subcomponent views, layers, or channels that were selected.

These subcomponent sources are placed in their own folder. (Layers of a single OpenEXR file put into a tiled layout.) It's also possible to drag and drop subcomponents into existing view inputs. A Sequence plays its inputs in order, a Stack layers its aligned inputs on top of each other, and a Layout arranges its inputs in a grid, row, column, or arbitrary user-determined format.

Some interface elements are shared by all group views. The group interface gives you control over the resolution of its output. During interactive use, RV's resolution invariance means that the aspect ratio is the only important part of the size, but during output with RVIO this size is the default output resolution. If 'Size Determined from Inputs' is checked, the group takes its size from the maximum in each dimension of all its inputs.

If the size is not being programmatically determined, you can specify any size output in the provided fields. Similarly, the output frame rate can be specified in the Output FPS field. This is the frame rate that is used as the default for any RVIO output of this group, and is also passed to any view for which this group is an input. The output FPS is initialized from the default frame rate of the first input added to the group.

If Retime Inputs to Output FPS is checked, inputs whose native frame rate differs from the group's output fps will be retimed so that they play correctly at the output fps. This is particularly useful in the case of sequences, but also comes up with stacks and layouts, when, for example, you want to compare a matching region of movies with different overall frame ranges.
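As a rough numerical illustration of what retiming to an output frame rate means (a sketch of the arithmetic only, not RV's implementation; the function name is hypothetical):

```python
def retime_frame_count(native_frames, native_fps, output_fps):
    """How many output frames a clip occupies once retimed so that it
    keeps the same wall-clock duration at the output frame rate."""
    duration_s = native_frames / native_fps
    return round(duration_s * output_fps)

# A 300-frame clip shot at 30 fps lasts 10 seconds; retimed into a
# 24 fps sequence it occupies 240 frames.
assert retime_frame_count(300, 30, 24) == 240
```

Without retiming, the same 300 frames played at 24 fps would last 12.5 seconds, which is why mixed-rate inputs drift out of sync with audio unless they are retimed.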

Sequence, Stack, and Layout. A Sequence view plays back its inputs in the order specified in the Inputs tab of the Sequence interface. The order can be changed by dragging and dropping in the Inputs, or by selecting and using the arrow keys to the right of the list of Inputs.

An input can be removed (dropped) from the sequence by selecting the input and then clicking the trash can button. At the moment, the easiest way to do this is to specify cut information for each source that you want to appear in the sequence with the Source view interface described in Section 5.

In this case the order of the inputs determines the stacking order (first input on top). The compositing operation used to combine the inputs of the stack can be selected in the Edit interface. Because any or all of the inputs to the Stack may have audio, you can select which you want to hear: either mix all the audio together (the default), play only the audio from the topmost input in the stack, or pick a particular input by name. By default, the inputs of a stack are aligned by their frame numbers.

If you don't want this behavior and you want the start frames of the inputs to be aligned regardless of their frame numbers, check Align Start Frames. Also note that the Wipes mode is useful when comparing images in a stack. The use of wipes is explained in Section 4. A Layout is just what it sounds like; the inputs are arranged in a grid, column, row, or arbitrary user-defined pattern.

All the interface actions described in Section 5 apply here as well. To determine the arrangement of your layout, choose one of five modes. There are three procedural modes, which rearrange themselves whenever the inputs are changed or reordered: Packed produces a tightly packed (tiled) pattern, Row arranges all the inputs in a horizontal row, and Column arranges the inputs in a vertical column. If you want to position your inputs by hand, select the Manual mode.

In this mode hovering over a given input image will show you a manipulator that can be used to reposition the image by clicking and dragging near the center or scale the image by clicking and dragging the corners. After you have the inputs arranged to your liking, you may want to switch to the Static mode, which will no longer draw the manipulators, and will leave the images in your designated arrangement.

A Switch is conceptually simpler than the other group views: only one input is active at a time, and both the imagery and audio of that input pass through the switch view.

Otherwise, the switch shares the same output characteristics as the other group nodes (resolution, etc.). The Retime view takes a single input and alters its timing, making it faster or slower or offsetting the native frame numbers. For example, to double the length of an input (i.e., make every frame play twice, which slows the action without changing the frame rate), set the Length Multiplier to 2.

Or, to have frame 1 of the input present itself on the output at a different frame number, set the Offset accordingly. The Length Multiplier and Offset apply to both the video and audio of the input. If you want to apply an additional scale or offset to just the audio, you can use the Audio Offset and Audio Scale fields. Retime View Edit Interface. Folders are a special kind of group view used to manage the contents of the session manager.

Unlike other views in RV, when you create a folder its inputs appear as a hierarchy in the session manager. You can drag and drop to move and copy views in and out of folders to organize them. Folders can be used as an input just like any other view, so they can be nested, placed in a sequence, stack, or layout, and manipulated in the inputs interface in the same way other views are. Folders have no display behavior themselves, but they can display their contents as either a switch or a layout.

When a view becomes a member of a folder, it will no longer appear in one of the other categories of the session manager. If a view is removed as a member of a folder, it will once again appear in one of the other categories.

Folders in the Session Manager. You can drag one or more views into a folder in the session manager to make them member inputs of the folder. To make a copy of the dragged items, hold down the drag-copy modifier while dragging: on the Mac this is the option key; on Windows and Linux use the control key. The session manager will not allow duplication of folder members (multiple copies of the same view in a folder), although this is not strictly illegal in RV.

Drag and drop can also be used to reorder the folder contents, the same way the inputs are reordered. An insertion point will be shown indicating where the item will move to. Presentation mode turns the main RV user interface into a control interface, with output going to both it and a second video device. The secondary video device is always full screen. The primary use for presentation mode is multiple people viewing a session together.

Typically a video device is set up once in the preferences and used repeatedly. It's also possible to pass command line arguments to RV to configure and start presentation mode automatically when it launches.

Video devices are configured from the preferences Video tab. The interface has two parts. Different devices will have different configuration parameters, and some devices may not use all of the available ones. Each video device configuration can have a unique display profile associated with it, or can be made to use a default device or module profile.

If custom nodes have been defined and are used in the display color pipeline, then those will also be stored in the display profile. The display profile manager can be started from the View menu. This is where profiles can be created and deleted. When a profile is created, the values for the profile are taken from the current display device, or you can select another device at creation time.

Most of the view settings for the current device are present under the View menu. Custom or alternate nodes inserted into the RVDisplayPipelineGroup, along with their property values, will also be stored in the profile if they are present. If a profile is already assigned to a device, the device name will appear next to the profile in the manager. By selecting a profile and activating the Apply button, you can set the profile on the current view device.

Applying a profile does not cause it to be remembered between sessions. In order to permanently assign a profile, use the Video tab in the preferences. There are five arguments which control how presentation mode starts up from the command line:

Causes the program to start up in presentation mode. Enables audio output to the presentation device if 1, or disables it if 0. Note that the forward slash character must separate the device and module names. Forces the use of format for the video format. The format is a string or substring of the full description of the video format as it appears in the video module in the preferences.

The first match is used. Forces the use of format for the data format. Like the video format above, the data format string is matched against the full description of the data formats as they appear in the video module in the preferences.

Presentation Mode Command Line Arguments. The command line arguments will override any existing preferences. When the controller display mode is set to Separate Output and Control Rendering you can choose which elements of the user interface are visible on the presentation device.

This includes not only things like the timeline and image info widget, but also whether or not the pointer location should be visible. In addition, you can show the actual video settings as an overlay on the display itself in order to verify that the format is as expected. You can also control the display of feedback messages and remote sync pointers with items on this menu.

The settings are retained in the preferences. These issues apply only when using the desktop video module. If your distribution has the xrandr binary installed, you can use it manually to force the presentation monitor into the proper resolution. When presentation mode starts up, RV will put the control window into a mode that allows tearing of the image, in order to ensure that the presentation window will not tear. Be aware that the control window is no longer synced to a monitor.

RV will warn you if your presentation device is set to a monitor that the NVIDIA driver is not using for vertical sync. In that case you can continue, but tearing will probably occur if the attached monitors are not using identical timings. See the nvidia-settings program to figure out the proper names. If not, RV will attempt to sync using the appropriate GL extension.

On OS X it is not documented, but vertical sync appears to be timed to the primary monitor. This is the monitor on which the menu bar appears. You can change this via the system display settings (Arrangement tab). Ideally, the presentation device will be on the primary monitor. RV will configure the controller display to prevent it from interfering with the playback of the presentation monitor.

The control device may exhibit tearing or other artifacts during playback. On some versions of OS X, once the controller has entered this mode, it cannot be switched back even after presentation mode has been exited.

On Windows, as on OS X, the vertical sync behavior is somewhat of an unknown. However, it appears that, as on OS X, the primary monitor (the one with the Start menu) is the monitor the sync is derived from.

So ideally, use the primary monitor as the presentation device, but your mileage may vary.

The Network sees this and responds by closing contact N. This results in the Tip being grounded, which the Ground Detector in the phone sees. If the Network makes the disconnection by opening N and removing ground from the Tip, then the current stops flowing.

If the phone makes the disconnection, then it opens the loop so that the line appears busy until the network removes ground from the Tip and the line can return to Idle. An OSI is where both Ground and Battery are removed for a maximum period of ms between state changes.

There are never less than ms between OSIs. MF 4 and MF 5 are used on tie trunks between PBXs and use multi-frequency tones on the same wires as the voice signal. This is tone signalling used on the voice pair between switches located in different cities. Turning this frequency on and off can be used for the signalling, or DTMF can be used.

The sequence of events is as follows. Delay Start is used when the switch equipment is mechanically based and therefore very slow to respond. The sequence of events is as follows. It also allows extra lines to be added with minimal cost. In addition, Loop Reverse Battery Supervision is used. DID trunks only allow inbound calls; they also gain their battery from the local switch rather than the CO switch.

The extension numbers that require DID are configured in the CO switch which then directs calls to these numbers on to the DID trunk rather than the normal trunk.

If the DID trunk lines are all busy then the caller will receive a busy tone even if the normal trunks are fine. The DID calls cannot be intercepted by the attendant. Quality is affected by a number of factors.

The level of power at which voice is sent and received is important. The following power levels are good guidelines. The power level needs to be strong enough to ensure that the signal is audible at the remote end, but not so strong that echo results.

The voice provider can adjust power levels to analogue devices. If the signal reaches the switch and there is too much input gain applied, then the signal can be clipped.

The same is true if the output gain at the remote end is too low or the input gain locally is too low; in this situation even DTMF tones can be missed. Another factor that affects quality is echo.

If the delay between the original sound and the echo is greater than 30ms, then this can start to become a problem for most people. The loudness of an echo is also very important. Two wires are used for all signalling and voice in the local loop (voice receive and transmit occur on the same pair); however, this converts to four wires for the voice signal, plus other wires for the signalling, between switches.

When the voice is converted from two wires to four wires, there is a chance of an Electrical Echo (reflection) being created due to an impedance mismatch. Normally, on long cable runs, echo is attenuated; however, when data networks sit between two analogue ends, the analogue runs are much shorter, which gives less chance for the echo to be attenuated.

You can also experience Acoustic Echo when using speakerphones and headsets. This is because the loudspeaker sounds are picked up by the microphone and sent back to the caller. Causes of echo are listed below. Echo Suppression can be implemented by suppressing voice on the return path to prevent the feedback, resulting in half-duplex voice communication where the louder conversant wins. This causes a problem with modem handshaking, so a tone of a specific frequency is sent by the answering modem in order to turn off the voice suppression.

A more sophisticated method of dealing with echo is Echo Cancellation, which works on the receiving end by synthesising a replica of the echo (building its own codebook) and subtracting this from the actual signal. This technique allows full-duplex operation to continue. You may notice initial echo occurring at the beginning of a conversation, but it then dies away.
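The adapt-then-subtract idea behind echo cancellation can be sketched with a normalized LMS adaptive filter. This is an illustrative toy (the function name, filter length, and step size are assumptions for the demo; real cancellers are far more elaborate), but it shows why the echo dies away after a short initial adaptation period:

```python
import random

def nlms_echo_canceller(far_end, mic, taps=8, mu=0.5, eps=1e-8):
    """Adaptively estimate the echo path from the far-end signal and
    subtract the predicted echo from the microphone signal."""
    w = [0.0] * taps           # adaptive filter weights (echo-path estimate)
    buf = [0.0] * taps         # most recent far-end samples, newest first
    residual = []
    for x, d in zip(far_end, mic):
        buf = [x] + buf[:-1]
        y = sum(wi * xi for wi, xi in zip(w, buf))   # predicted echo
        e = d - y                                    # what the far end hears
        norm = sum(xi * xi for xi in buf) + eps
        w = [wi + mu * e * xi / norm for wi, xi in zip(w, buf)]
        residual.append(e)
    return residual

# Simulated far-end speech and a simple two-tap echo path.
random.seed(1)
far = [random.uniform(-1, 1) for _ in range(2000)]
mic = [0.6 * far[n] + (0.3 * far[n - 1] if n else 0.0) for n in range(len(far))]

residual = nlms_echo_canceller(far, mic)
early = sum(e * e for e in residual[:200])    # echo energy while adapting
late = sum(e * e for e in residual[-200:])    # echo energy once converged
```

The residual energy at the end is orders of magnitude below the energy at the start, matching the observation above that initial echo fades once the canceller has learned the echo path.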

If echo is a problem on both ends, then echo cancellation needs to be operating on both ends. CO switches contain D-type Channel Banks which convert from analogue voice and signalling to digital voice and signalling. Newer channel banks have appeared giving higher densities. The D2 Channel Bank supports 96 channels for every 72 channels that the D1 supports.

The D3 and D4 support channels. More recently, the Digital Carrier Trunk has been produced, which is smaller and more manageable. PBXs use different digital signalling systems depending on the manufacturer.

Switch protocols that transport PBX features can be translated when these protocols run across standard signalling systems. A point-to-multipoint topology will require translation. If the trunk is idle then the SF tone is present.

If the trunk is seized, then the SF tone represents the dial pulses in bursts of tone. It is not uncommon for non-standard signalling systems to be used, as manufacturers aim to gain an edge on available features. Examples include the following.

If the proprietary signalling uses one CCS signalling D channel. If the proprietary signalling uses more than one CCS channel. This is where the D channels are put into a TDM group and are not restricted to channel 16.

Voice networks have normally been separated from data networks and have therefore incurred greater liabilities, such as doubling up of wide area links, while the equipment and support costs have been high to cater for the separate networks.

Packetising voice provides opportunities to combine some or all of these elements, resulting in greater efficiency. A number of challenges arise when changing from a circuit-switched voice network to a packet-switched voice network.

These can be summarised as follows. These challenges are dealt with in detail in Quality of Service. Technologies that packetise voice also provide opportunities to expand on the services that are provided by traditional circuit-switched voice systems. Setting up and controlling calls is carried out in a very different way in a packetised voice environment. Call control can be centralised using Call Agents, or distributed using voice gateways that can handle calls and make routing decisions.

IP-based protocols such as H.

Nyquist discovered that when human speech is digitised, it is important to sample the analogue speech signal at more than twice its highest frequency in order for the reproduced sound to be of reasonable quality; that is, when the digitised signal is decoded at the receiving end, the original sound can be reproduced accurately. Take the following simple sine wave: if we sample precisely at twice the frequency of the wave, the samples can fall at points that miss its shape, whereas if we sample at four times the frequency, the wave is captured much more faithfully.

Because human speech occupies a limited frequency range, the CCITT recommendations are to build circuits to cater for this frequency range.
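A small numerical illustration of the sampling theorem, assuming the standard telephony figures (a voice band of roughly 300-3400 Hz, sampled at 8,000 samples per second) — these constants and the helper name are supplied here for the demo, not taken from the text above:

```python
import math

def sample(freq_hz, rate_hz, n_samples):
    """Sample a unit-amplitude sine of freq_hz at rate_hz."""
    return [math.sin(2 * math.pi * freq_hz * n / rate_hz)
            for n in range(n_samples)]

# A 3000 Hz tone sampled at only 4000 Hz (below its Nyquist rate of
# 6000 Hz) produces exactly the same samples as a 1000 Hz tone of
# opposite phase: the tone aliases and cannot be reconstructed.
tone_3k = sample(3000, 4000, 8)
alias_1k = [-s for s in sample(1000, 4000, 8)]
assert all(abs(a - b) < 1e-9 for a, b in zip(tone_3k, alias_1k))

# Sampled at 8000 Hz, above twice the highest voice-band frequency,
# the 3000 Hz tone is unambiguous and can be reproduced accurately.
tone_3k_ok = sample(3000, 8000, 8)
```

This is exactly why a band-pass filter precedes the sampler: any energy above half the sampling rate would fold back into the voice band as an alias.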

A band-pass filter is used to isolate this frequency range. There are problems with the signal-to-noise ratio (SNR) of pure 8-bit encoded signals when the volume (amplitude) is reduced from the original analogue signal, so the PAM signal is then Quantised: an integer code is assigned to the amplitude of each sample.

The integers come from a scale made of 8 divisions called Chords, which are concentrated near the origin, where the low-level tones are, in a logarithmic way. This means there is less distortion of the lower tones (a larger signal-to-noise ratio), which suits the logarithmic nature of the human ear. A linear (uniform) quantisation would result in poorer sound quality at lower amplitudes. Each chord is split into 16 equally spaced voltage divisions, with chords 0 to 7 positive and 0 to 7 negative. These methods apply digital values to analogue signals.

Bell Labs developed the U-law method of logarithmic quantisation used in North America and Japan. U-law (or 'mu-law') tends to have lower idle noise than A-law.

The ITU modified this in G. If one end of a trunk uses U-law and the other end uses A-law, then the U-law end must make the change to A-law. A-law has a slightly better signal-to-noise ratio for low-amplitude signals than U-law.
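The benefit of logarithmic companding for quiet signals can be sketched with the textbook continuous mu-law formula (mu = 255). This is the smooth form of the law, not the segmented chord encoding actually used on the wire, and the helper names are invented for the demo:

```python
import math

MU = 255  # standard mu-law companding constant (North America/Japan)

def mu_compress(x):
    """Map a sample in [-1, 1] through the mu-law curve."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_expand(y):
    """Inverse of mu_compress."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

def quantise(x, bits=8):
    """Uniform quantisation to a signed n-bit code and back."""
    levels = 2 ** (bits - 1) - 1
    return round(x * levels) / levels

def mu_law_8bit(x):
    """Compress, quantise uniformly in the compressed domain, expand."""
    return mu_expand(quantise(mu_compress(x)))

# For a quiet (low-amplitude) sample, companded 8-bit quantisation has
# far smaller error than plain uniform 8-bit quantisation.
quiet = 0.004
err_linear = abs(quantise(quiet) - quiet)
err_mulaw = abs(mu_law_8bit(quiet) - quiet)
assert err_mulaw < err_linear
```

The uniform quantiser spends its codes evenly across the full range, while the companded one concentrates them near zero, which is where speech spends most of its time; this is the "larger signal-to-noise ratio for lower tones" described above.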

Quantisation Error is the difference between the quantised signal and the original analogue signal. If each integer code is given an 8-bit binary value, then 64kbps is the required bandwidth for digitised voice; this is called DS0. Waveform coders produce a non-linear approximation of the waveform. We have seen one form of voice coding, Pulse Code Modulation (PCM), which is a waveform compression algorithm that just looks at the waveform irrespective of the voice patterns.

This is called the Quantisation Granularity. Each transmitted value represents a change from the value of the previous sample, with the assumption that differences are never likely to need more than 4 bits. Every so often a full marker value is sent, rather than just the difference from the previous sample.
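A toy sketch of this differential idea (plain DPCM with a fixed step and a clamped 4-bit difference; the adaptive G-series ADPCM codecs are considerably more sophisticated, and all names here are invented for the demo):

```python
def dpcm_encode(samples, step=1, bits=4):
    """Transmit the clamped, quantised difference from the previous
    reconstructed sample instead of the full sample value."""
    lo, hi = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    prev, codes = 0, []
    for s in samples:
        diff = round((s - prev) / step)
        code = max(lo, min(hi, diff))   # clamp to the 4-bit range
        codes.append(code)
        prev += code * step             # track what the decoder will hold
    return codes

def dpcm_decode(codes, step=1):
    """Rebuild the waveform by accumulating the differences."""
    prev, out = 0, []
    for c in codes:
        prev += c * step
        out.append(prev)
    return out

signal = [0, 2, 5, 7, 8, 8, 6, 3]       # slowly varying waveform
assert dpcm_decode(dpcm_encode(signal)) == signal
```

When the input changes slowly (as voice mostly does), 4 bits per sample suffice, halving the 8 bits per sample of plain PCM; a sudden large jump would be clamped and recovered over the following samples, which is why the periodic full marker value is useful.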

The ITU designates this as compression standard G. Using 3 bits per sample is defined in G. There is also a G. The encoding delay is typically less than 1ms, which makes ADPCM very attractive, particularly in environments where there is tandem switching. A vocoder synthesises the voice. This synthesis results in a voice that lacks emotion, and it is therefore difficult to identify the speaker.

Compression can end up with a stream of only a few kilobits per second. A hybrid compression form uses Source Compression and takes the voice signals into account when compressing. Hybrid coding comes under the broad spectrum of Analysis-by-Synthesis (AbS) coding, where analysis is continually performed on the speech and the algorithm attempts to predict the waveform in the near future (around 5ms).

This occurs via a feedback loop and adds a small (around 5ms) delay to the voice path. This can provide high-quality voice reproduction at low bit rates. With CELP, voice signals are compressed as follows: a codeword is assigned to every block of 5 speech samples.

Four codewords are grouped together into a sub-frame which takes 2.5ms. CS-ACELP performs a 5ms look-ahead to predict the next wave pattern, plus it also reduces noise and does pitch-synthesis filtering. You can combine Annex B with G. The bandwidths used by the algorithms we have talked about are just the actual data bandwidths and do not take into account the packet headers of the protocols being used to carry the data.

For instance, if you are using G. If the payload increases to, say, bytes for a G. You can see that the greater the payload, the less bandwidth is required. The default payload size for G. ATM cells have greater overhead because of their small fixed size of 53 bytes. This takes up a further 3 bytes, leaving 37 bytes for the voice data. If the default G. This table lists the codecs and their respective speeds and bandwidth requirements for given sample sizes.
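The payload-versus-header trade-off can be sketched numerically. This assumes the usual 40 bytes of IP (20) + UDP (8) + RTP (12) headers and ignores link-layer overhead (Frame Relay, Ethernet, or ATM framing would add more); the function name and example figures are supplied for the demo:

```python
def voip_bandwidth_bps(codec_bps, payload_ms, header_bytes=40):
    """Per-call wire bandwidth for a codec at a given payload duration,
    counting IP+UDP+RTP headers on every packet."""
    payload_bytes = codec_bps * (payload_ms / 1000) / 8
    packets_per_sec = 1000 / payload_ms
    return (payload_bytes + header_bytes) * 8 * packets_per_sec

# A nominal 8 kbps codec with a 20 ms payload sends a 20-byte payload
# plus 40 bytes of headers every 20 ms: 24 kbps on the wire, three
# times the codec's nominal rate.
assert voip_bandwidth_bps(8000, 20) == 24000

# Doubling the payload to 40 ms amortises the headers over more voice
# data and drops the figure to 16 kbps.
assert voip_bandwidth_bps(8000, 40) == 16000
```

The cost of the larger payload is latency: each packet now carries 40 ms of speech, so packetisation delay doubles — the usual bandwidth/delay trade-off when choosing a payload size.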

Human speech uses a bandwidth of Hz to Hz if you include harmonics, with most of the speech occurring between Hz and Hz. The more bandwidth that is allocated to cater for human speech, the more faithful the sound is to the original; this is called fidelity.

Human speech quality is also affected by Echo, Delay, and Jitter (delay variation). Jitter is often a symptom of carrying voice over data networks. The MOS is a statistical measurement of voice quality based on human opinion of a certain spoken sentence. In English the sentence used is "Nowadays, a chicken leg is a rare dish".

The ratings are as follows. The following table gives examples of comparative scores for the different types of compression. A score of 4 or above is generally regarded as toll quality. These scores are reassessed regularly and change with time. One thing to bear in mind is that delay is not taken into account by the MOS.

The following table gives examples of comparative MOS scores for G. PSQM uses a rating scale of 0 to 6. The test equipment implements PSQM by comparing the transmitted speech to the original input in real time.

This information can be linked to SNMP-based management systems. BT also developed a voice quality measurement algorithm called the Perceptual Analysis Measurement System (PAMS), which is used to predict the effect on voice quality measurement scores when various waveform codecs, languages, etc. are used.

A packet called a Silence Indicator (SID) is sent to notify the other end that the voice activity power level has dropped below a certain threshold.

VAD requires a 5ms look-ahead buffer, so this adds delay to the voice path. There is an issue with VAD in that pure silence is off-putting to users, so techniques are employed to introduce white or pink noise locally to simulate background noise. Related to silence is the concept of Sidetone, which plays the speaker's voice through the earpiece locally so that the speaker does not think the handset is faulty.
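A minimal sketch of the energy-threshold idea behind VAD (the frame length, threshold, and function name are arbitrary illustrative choices; real detectors add the look-ahead buffer and a hangover period so word endings are not clipped):

```python
import math

def simple_vad(samples, frame_len=160, threshold_db=-40.0):
    """Classify each frame as speech (True) or silence (False) by
    comparing mean frame power, in dB, against a fixed threshold."""
    flags = []
    for i in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[i:i + frame_len]
        power = sum(s * s for s in frame) / frame_len
        db = 10 * math.log10(power) if power > 0 else float("-inf")
        flags.append(db > threshold_db)
    return flags

speech = [0.3] * 160      # a loud frame (about -10 dB)
silence = [0.001] * 160   # a quiet frame (about -60 dB)
assert simple_vad(speech + silence) == [True, False]
```

Frames classified as silence are replaced by the small SID packet described above rather than full voice payloads, which is where the bandwidth saving comes from.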

The Group 3 fax and the modem are designed to run on the analogue network, even though they operate digitally internally. No silence suppression or compression can be applied, and even though the fax typically uses only 9.6kbps, a full 64kbps channel is consumed. This is because the analogue signal is continuous, so no silence suppression can be used, nor compression, as you cannot lose any of the digital information.

Faxes and modems use a Hz tone to identify themselves to the switch. The standard analogue fax protocol is T. Traditionally, fax machines have differed in the facilities offered, and T. In addition, their tolerance of packet delays and receive errors is low, because fax machines use synchronous modems which have no built-in flow control. If a calling fax does not receive a response from a receiving fax within (normally) 3 seconds, the whole message is transmitted again. Proprietary local spoofing techniques can ease this issue of delay that can be incurred between fax machines over great distances.

This DSP then converts the analogue signal coming from the fax machine to a digital bit stream. This bit stream is sent within VoIP packets at 9.6kbps. This saves bandwidth compared with the 64kbps normally taken up by the fax call when it is traditionally converted to PCM.

If the delay is large on a path, rather than lose fax relay packets, it is a good idea to increase the buffer size to several hundred milliseconds, because real-time interaction is not important.

These attachments are TIFF files of the faxes themselves. An On-Ramp gateway performs the conversion to e-mail and attachment. Modem Passthrough operates in the same way as Fax Passthrough, where just a G.711 stream carries the signal with no compression or silence suppression.

The remote gateway converts the signals back to analogue and forwards the signal on to the remote modem. When designing a voice network it is necessary to size trunks and equipment ports to suit. The PSTN can give statistics on the number of calls offered, the number of abandoned calls, and the occasions when all the trunks are busy; these are called Peg counts.

The GoS is a measure of the probability that a call is blocked; for instance, one call out of 100 being blocked is given by P.01. This probability applies to the busiest period of the day. The PSTN can also provide the total amount of traffic carried per trunk. The number of trunks needed for the voice traffic in a particular location is based on peak daily traffic. A carrier will provide the number of calls carried, but will not give the number of calls offered.

Only the local PBX can tell you how many calls were offered, and therefore how many calls failed. If the voice traffic is to run over a data network, you also have to take into account the statistics provided by SNMP management stations, network analysers, and router interfaces. You need to ensure that data delay and throughput are not impaired, as well as maintaining the GoS for the voice traffic.

If the data peak demand occurs at similar times during the day to the voice peak demand, then this has to be taken into account when designing the voice network. The offered traffic load A is the product of the number of calls originated in an hour, C, and the average holding time for a call, T; i.e., A = C x T. The average holding time is not just the average time that a call takes; it includes call set-up and tear-down as well as incomplete calls.

Quite often billing records round up the duration of a call to the next minute rather than the nearest minute. This means that they are overstated by an average of 30 seconds per call. For traffic calculations, if you are using the billing records, you need to factor in a corresponding reduction in the call minutes.

The concept of the Busy Hour is used to represent the number of call attempts during the busiest hour that the organisation experiences on its telephone network. If you have access to the CDR records, then to work out the busiest hour, take the 10 busiest days in a year, sum the traffic on an hourly basis, find the busiest hour, and work out the average amount of time a call takes (the average duration).

The next thing to calculate is the amount of traffic a trunk can handle in an hour; normally we calculate this for the Busy Hour. This traffic volume is measured in Erlangs, a dimensionless unit. For example, if each user in an organisation makes 12 calls in the busy hour with an average duration of 6 minutes per call, then the offered traffic load per user is A = C x T = 12 x 6 = 72 call minutes, i.e. 1.2 Erlangs; multiply by the number of users for the total load. An Erlang is sometimes equated to 60 call minutes, 3,600 call seconds, or 36 centum call seconds (CCS).
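The A = C x T arithmetic, using 12 calls of 6 minutes each, can be sketched as:

```python
def offered_load_erlangs(calls_per_hour, avg_hold_minutes):
    """A = C x T in call minutes, converted to Erlangs (60 call minutes = 1 Erlang)."""
    return calls_per_hour * avg_hold_minutes / 60.0

per_user = offered_load_erlangs(12, 6)  # 12 calls x 6 minutes = 72 call minutes
print(per_user)        # 1.2 Erlangs per user
print(per_user * 36)   # 43.2 CCS (1 Erlang = 36 CCS)
```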

When traffic engineering, your aim is to maintain or exceed the GoS. To do this you need to work out how many trunks you will need, now that you know the Erlangs in a busy hour. The complexity of traffic engineering necessitates the use of Erlang tables or calculators to work out the number of trunks required, given that you know the volume of traffic in Erlangs and you know the target GoS.

The most common table used is Erlang B, which assumes Poisson call arrivals, an infinite number of traffic sources, and Lost Calls Cleared (LCC), i.e. blocked calls disappear rather than retry. When you have multiple sites and multiple trunks between those sites, it is often necessary to create a Call Density Matrix that has branch-to-branch and branch-to-HQ entries for the busy hour call minutes.
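The Erlang B figure can also be computed directly rather than read from tables. The sketch below uses the standard Erlang B recursion; the 10 Erlangs of offered traffic and the P.01 (1% blocking) target are assumed figures for illustration:

```python
def erlang_b(traffic_erlangs, trunks):
    """Blocking probability via the standard Erlang B recursion:
    B(0) = 1, B(n) = A*B(n-1) / (n + A*B(n-1))."""
    b = 1.0
    for n in range(1, trunks + 1):
        b = traffic_erlangs * b / (n + traffic_erlangs * b)
    return b

def trunks_for_gos(traffic_erlangs, target_gos):
    """Smallest trunk count whose blocking probability meets the target GoS."""
    n = 1
    while erlang_b(traffic_erlangs, n) > target_gos:
        n += 1
    return n

# Assumed example: 10 Erlangs of busy-hour traffic, GoS target of P.01.
print(trunks_for_gos(10, 0.01))  # 18 trunks
```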

You can use this matrix to work out the erlangs on a site-to-site basis. When calculating trunk sizes for a VoIP network you need to find out how much data bandwidth each call will take.

This will depend on the codec and sample size being used. The earlier table gives an idea of bandwidth used on a per-call basis. Multiplying the appropriate bandwidth by the number of calls allows you to work out the trunk size. An Erlang represents continuous use of one trunk, and sizing is designed around the busy hour. If we size purely on this basis, however, the system will be over-specified most of the time, so the aim is instead to allow a small percentage of the calls to be blocked.
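A minimal trunk-sizing sketch, assuming a per-call figure of 26.4 kbps (roughly a G.729 call with Frame Relay overhead; substitute the value from the bandwidth table for your codec and sample size):

```python
def trunk_bandwidth_kbps(per_call_kbps, concurrent_calls, headroom=1.0):
    """Trunk size = per-call bandwidth x concurrent calls, with optional headroom."""
    return per_call_kbps * concurrent_calls * headroom

# Assumed figures: 26.4 kbps per call, 24 simultaneous calls, 10% headroom.
print(trunk_bandwidth_kbps(26.4, 24, headroom=1.1))  # about 697 kbps
```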

It is therefore a good idea to add a little extra when sizing bandwidth requirements. VoFR allows you to run voice and data over the same WAN infrastructure, which has management and cost benefits; in addition, the Frame Relay header overhead is low.

In order for voice to run over Frame Relay, fragmentation of the data frames needs to occur to allow a steady flow of voice traffic. This fragmentation can be a proprietary format or end-to-end FRF.12. The fragmentation header is omitted on frames smaller than the fragment size, so only the largest frames (those larger than the fragmentation threshold) are fragmented.
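The fragmentation rule can be sketched as follows (sizes are illustrative; real fragments also carry a small fragmentation header, omitted here for clarity):

```python
def fragment(frame: bytes, threshold: int) -> list:
    """Frames at or below the fragment size pass through whole; only
    larger frames are split into threshold-sized pieces."""
    if len(frame) <= threshold:
        return [frame]
    return [frame[i:i + threshold] for i in range(0, len(frame), threshold)]

print(len(fragment(b"x" * 1500, 80)))  # 19 pieces for a large data frame
print(len(fragment(b"x" * 60, 80)))    # 1 - a small voice frame is untouched
```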

If you want to centrally control billing and administration then you can set up a hub-and-spoke Frame Relay WAN where the central HQ is the hub and tandem switching occurs for calls between spoke sites. When using the WAN links we convert to a more efficient, lower-bit-rate codec, which gains us the benefit of bandwidth savings. There is a problem, however.

Take the example where a call is made from site B to site C via the hub: the voice is compressed and decompressed at each tandem hop. This is called tromboning, where several compressions and decompressions occur within one call, adding delay and degrading the quality of the call. If the routers have the ability to operate dial plans, then routing of calls based on the dialled number could be carried out at the router. Tandem switching could therefore be eliminated altogether, since the Frame Relay cloud ends up acting as a large virtual voice switch.

AAL5 is frequently used for data because all 48 bytes of the cell payload are available. If we took a typical 20 ms sample of voice and encoded it with G.729 (8 kbit/s), it would occupy 20 bytes. Because of the fixed cell size of ATM, the remaining 28 bytes of the payload would be padded out. This would mean that for every 20 ms sample there would be 20 bytes of voice data and 28 bytes of padding.

This could be considered inefficient because of the 28 bytes of padding. Provided that the delay budget allows it, you could increase the sample size to, say, 30 ms or more to reduce the bandwidth wasted on padding bytes. Even so, Frame Relay is more efficient. For good quality voice it is best to stick to 20 ms samples (50 packets per second). There is no internal echo cancellation, so this may have to be added externally. With multiple sites attaching to an HQ you would need to run the hub site PBX as a tandem switch, because there is no opportunity for routers to re-route calls based on dialled digits.
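The padding arithmetic above can be checked with a short sketch (5-byte cell header, 48-byte payload, and a 20-byte sample corresponding to 20 ms at 8 kbit/s):

```python
CELL_HEADER, CELL_PAYLOAD = 5, 48  # bytes in one ATM cell

def cell_padding_and_efficiency(voice_bytes):
    """Padding needed to fill the cell, and the fraction of the whole
    53-byte cell that actually carries voice."""
    padding = CELL_PAYLOAD - voice_bytes
    efficiency = voice_bytes / (CELL_HEADER + CELL_PAYLOAD)
    return padding, efficiency

pad, eff = cell_padding_and_efficiency(20)  # 20 ms sample at 8 kbit/s = 20 bytes
print(pad)             # 28 bytes of padding per cell
print(round(eff, 3))   # 0.377 - under 40% of each cell is voice
```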

A voice channel fills the whole payload of the cell. This is good for equipment that uses proprietary framing, and TDM devices can then be removed. VoIP is fast becoming the preferred method for voice packet transport. IP is more flexible than either ATM or Frame Relay, not only because of its quicker re-routing and resilience, but also because of the extra features that can be bolted on to the IP environment to greatly increase the number of applications that the VoIP environment can utilise.

VoIP has some quality issues that are different from traditional voice; these include jitter, packet loss, and queuing problems when small voice packets compete with large data packets. These issues are dealt with in detail in Quality of Service. TCP is used for the H.323 signalling, while the voice itself is carried in RTP over UDP. This diagram illustrates the RTP header. The RTP header is 12 bytes in length, not including the Contributing Source (CSRC) list, which could add another 60 bytes (4 bytes x 15 entries), and it follows the 8-byte UDP header and the 20-byte IP header.
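The header arithmetic can be sketched as below (20-byte IPv4 header, 8-byte UDP, 12-byte RTP with no CSRC entries; the 20-byte payload is an assumed voice sample size):

```python
IP_HDR, UDP_HDR, RTP_HDR = 20, 8, 12  # bytes; RTP with an empty CSRC list

def voip_packet_size(payload_bytes):
    """Total on-the-wire size of one voice packet at the IP layer."""
    return payload_bytes + RTP_HDR + UDP_HDR + IP_HDR

total = voip_packet_size(20)
print(total)                                           # 60 bytes per packet
print(round((IP_HDR + UDP_HDR + RTP_HDR) / total, 3))  # 0.667 - headers dominate
```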

RTP has the ability to identify the payload and timestamp the packets; it also sequences the packets and monitors packet delivery, re-ordering them if necessary. It also carries a canonical name, which is an identifier of the source of the RTP stream.

This is used by the transport layer at the receiving end in order to synchronise audio with video. The RTCP information includes jitter, delay and packet loss, as well as packet counts. VoFR and VoATM are fine for simple point-to-point topologies, but for voice over data to be a serious contender to traditional voice systems there needs to be a scalable way of building these topologies and communicating within them, and this is where VoIP comes in.

One required element is a Gateway that connects and translates between a traditional analogue telephony system and an IP-based telephony system. A gateway is also required if you are using more than one IP-based call control system, as you need to translate between them. The call control system is a vital element of the VoIP environment and controls how calls are managed within the IP network. The control signalling is handled separately from the actual voice streams. Umbrella call control systems include H.323 and SIP.

There is a need to monitor the resources used by each call and to maintain a database of the call records. This then provides the ability to control who is allowed to call and what resources they are allowed to use. Call control gives you the ability to route a call based on the dialled number; this therefore requires a way of registering and resolving addresses (numbers).

Using the call control system in an IP environment you can decide whether to administer the calls from a centralised point or in a distributed way.

An example of bandwidth management is when a call is renegotiated to a different codec, since the bandwidth requirement then changes. The Gatekeeper gives scalability to a VoIP design that can rival the traditional telephony topology. In a large VoIP telephone network it is impractical to configure dial peers for every single phone, hence the idea of an H.323 Gatekeeper. Gatekeepers translate phone numbers (E.164 addresses) into IP addresses. The following diagram illustrates the sequence of events in H.323 call setup. Each gateway has a dial peer configured to point to its own Gatekeeper, rather than having many dial peers, one for each phone number.

This is analogous to the IP default gateway. Take the worst case scenario where no devices know about each other; using the numbered arrows, the sequence of events when Phone A wants to call Phone B operates as follows:

Databases can be localised to zones rather than having setup traffic cross the whole wide area to a single database. Referencing these zones, or routing to them, is done via the area code. A Super Gatekeeper or Directory Gatekeeper can be configured that only knows the area codes rather than the individual phone numbers.

This hierarchical arrangement is similar in nature to DNS. These gateways set up TCP sessions for H.225 call signalling, and a number of transactions take place within H.323 when a Gateway initiates a call setup with another Gateway. Because of the critical nature of the Gateway and Gatekeeper, there are methods in design that provide resilience.

Only one is active, with the other on standby in case of failure. Flows are momentarily disrupted on a failure, as the failover is not stateful. A Gateway can be set up with multiple Gatekeepers from which it can pick one to use in case one has failed, or it can multicast out in order to find a Gatekeeper. Gatekeepers send each other location requests when trying to find endpoints.

If more than one Gatekeeper is configured for a particular prefix, then any one of these Gatekeepers can respond. Similarly, multiple Gateways can also be configured with the same prefix. An additional element is the prepending of the Technology Prefix to the dialled number.

This may be done by the gateway or the gatekeeper. Either way, the gatekeeper checks the prefix and examines its technology prefix table to see which gateway(s) are registered with that prefix.

The prefix identifies the capabilities of the gateway and therefore those that the call requires. The ITU has defined a set of technology prefix characters for this purpose. Conferences, where more than two users communicate, can take a number of forms. A Centralised Conference is where the endpoints have their data, audio and video channels connected to a central Multipoint Processor (MP). In a Decentralised Conference, the endpoints multicast the data, audio and video streams to each other rather than being connected to a central MP.

This means that the same codecs must be used. An Ad-hoc Conference is where two endpoints in a call decide to convert their point-to-point call into a conference and invite others to join them. They either use an MC (Multipoint Controller) that is nearby, or a Gatekeeper.

In order to do this, the firewall has to keep track of the flows and has to remove the allowed ports from its table when the respective flows have finished. You may have a situation where you wish to provide network security for the IP telephony endpoints such that remote endpoints are unable to see the local endpoints. The Proxy server can not only act on behalf of the Gatekeeper, it can also act on behalf of an endpoint. When a local endpoint wishes to reach a remote endpoint, communication occurs between the local endpoint and its local Gatekeeper.

The local Gatekeeper finds the remote Gatekeeper, who refers the local Gatekeeper to the remote Proxy. The local Gatekeeper tells the local endpoint that it needs to talk to the local Proxy. The local and remote Proxies talk, using their respective Gatekeepers, as they complete the call between the local and remote endpoints. SIP is used to provide signalling and control which establishes, maintains and terminates multimedia sessions.

Being a text-based protocol makes SIP easier to troubleshoot. SIP supports Intelligent Network (IN) telephony subscriber services such as name mapping, redirection and personal mobility.
