PVC's Digital Tools for Digital Tyrants:

Mel Tools


This Maya tool is for creating footprints and related dirt, dust, and foot droppings based on the movement of foot objects relative to a ground object. Emitters and particle objects are created for dirt, dust, and droppings emissions, and footprints are made by duplicating previously modeled objects, placing them at contact points with the ground, and animating their visibility so they appear as the foot strikes the ground. Footprints may optionally be given a "begin" shape so that they animate as they appear via a blend shape operation. This system does not do precise contact calculations, and the results are intended only as a starting point for artists doing these effects.

When the script is first run the following interface window appears:

jceFootprint main window

At the very top of the page is a text entry field for a Base Name, which is a prefix used when creating or deleting objects. A different Base Name could be used for each creature in the scene.

The four tabs in the window below reflect the different stages of footprint creation.

The first page, Make and Bake, is used to create and initialize emitter and particle objects. The Emission Objects text field is used to specify objects in the Maya scene which are to be used as dust and dirt emission sources, either as surface emitters or as the locations for point emitters, depending on the state of the Use Surface Emitters option switch. Pressing the Sel button to the right of the field will load the names of all selected objects. There is also a Create Droppings Emitters option that will cause the creation of emitters and particles for droppings in addition to dirt and dust. "Droppings" are particles which are emitted as feet rise off the ground.

Pressing the Create button will cause the creation of emission objects for the specified objects. If particle objects of the appropriate name with the given Base Name prefix already exist, then the emission objects will be connected to their corresponding particle objects. If these particle objects do not exist, they will be created and initialized complete with standard color and opacity ramps and dynamic expressions. Particle attributes named radiusPP, radiusPPInit, rmanFradiusPP, and rmanFparticleId will be created and initialized in a creation rule. The first two are for controlling the particle size while the latter two are for passing particle attributes on to the Renderman renderer via Pixar's MTOR interface.

Also created, if it does not yet exist, is an object set with the Base Name prefix to which emitter objects are added. In future operations this set will be used to identify created emitter objects.

The next step is to bake in height and velocity curves for the emitters. This is done by entering the name of a ground object into the Ground Object text field and pressing the Bake button. After the button is pressed, the entire scene will be animated from the start to end frames and the position of each emitter object along with its distance from the ground object will be recorded as curves in attributes that are attached to the emitters. Seven such attributes will be created: PosX, PosY, PosZ, Dist_0, Dist_1, Dist_2, and SpeedN.

The attributes with the Pos prefix record the position of the emitter, SpeedN is the speed, and Dist_0 through Dist_2 record the distance of the emitter from the ground plane along with its first and second derivatives (i.e. velocity and acceleration values). As the scene is animated, all objects in the scene will be made invisible to speed the playback process.
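The derivative curves can be sketched outside Maya. The following plain-Python illustration (not the tool's actual MEL code) computes per-frame velocity and acceleration values from a sampled distance curve using simple finite differences:

```python
def bake_dist_curves(dist_0, dt=1.0):
    """Given per-frame distance-from-ground samples (Dist_0), return
    (Dist_1, Dist_2): first and second finite-difference derivatives,
    i.e. velocity and acceleration, padded to the original length."""
    dist_1 = [(b - a) / dt for a, b in zip(dist_0, dist_0[1:])]
    dist_1.append(dist_1[-1])  # repeat last value to keep curve lengths equal
    dist_2 = [(b - a) / dt for a, b in zip(dist_1, dist_1[1:])]
    dist_2.append(dist_2[-1])
    return dist_1, dist_2
```

For a parabolic drop such as [0, 1, 4, 9, 16], the first derivative curve ramps linearly and the second is constant, as expected.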

Now that emitter and particle objects have been created, it is time to move to the calculation phase. This is where emission rate curves will be created using the baked height and velocity curves. Pressing the Calculate tab button brings forward the calculate controls as shown below:

jceFootprint calculate window

Parameters used to calculate emission rates are as follows:

Foot droppings emissions have the associated parameters:

These curves should provide the animator with a good starting point for controlling particle emissions.

The next step is to lay down tracks, which are actual physical footprint objects at ground contact points that become visible as the feet hit the ground. Since these impressions usually extend below ground level, they are usually rendered independently from the rest of the ground and overlaid in the composite. Selecting the tracks tab brings forward the following controls:

jceFootprint tracks window

To start with, one or more foot track objects need to be created and placed under a group node. The name of the group node is entered in the Tracks Template Group field, either by directly typing in the name or by selecting the group node and pressing the Sel button. When the Create button is pressed, the previously baked emitter distance-from-ground curve dist_0 is used to determine ground strike positions, and entries from this template group are randomly selected, duplicated, and placed at the strike point with the visibility attribute animated to make it appear under the foot at the proper time. As with dust and dirt emissions, this is only intended as a starting point for artists, as placement and timing will probably need to be manually adjusted.
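The strike detection described above can be sketched in plain Python (the tool's real implementation is MEL and is not shown here). A strike starts whenever the baked distance curve first crosses below a contact threshold; a template is then chosen at random for each strike:

```python
import random

def find_strikes(dist_0, threshold=0.1):
    """Return the frames where the foot first drops below the contact
    threshold (a new strike begins at each downward crossing)."""
    strikes, in_contact = [], False
    for frame, d in enumerate(dist_0):
        if d <= threshold and not in_contact:
            strikes.append(frame)
            in_contact = True
        elif d > threshold:
            in_contact = False
    return strikes

def place_tracks(dist_0, templates, threshold=0.1, seed=0):
    """For each strike, randomly pick a template name and pair it with
    the frame at which its visibility should be keyed on."""
    rng = random.Random(seed)
    return [(frame, rng.choice(templates))
            for frame in find_strikes(dist_0, threshold)]
```

The visibility keys and exact placement would still be adjusted by hand, as the text notes.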

There is a Blend Animation option that can be used to make the footprint geometry grow for a frame or two after it becomes visible. When this option is selected, the first member of the template group is used as the starting shape while another member of the template group is used as the final shape.

Often it is desirable to have separate track templates for left and right feet since they are shaped differently. If the right and left geometry is identified with a unique suffix at the end of its name, such as _r and _l, then a separate track template group can be used for each foot. To do so, the template group nodes must have identical suffixes in their names and the Use Emitter Suffix to Select Template Group option must be selected. (Emitters created with this tool will always be named such that they share the same underscore-delimited suffix as the geometry that was used to create them.)


This script can turn a polygonal plane that has been split into sections into a goal-driven softbody that will form cracks as forces are applied.


To use, create a polygonal plane with a command such as the following:

 polyPlane -w 50 -h 50 -sx 1 -sy 1 -ax 0 1 0 -tx 1 -ch 0;

Then use the poly split tool (with Snap to Edge and Snap to Magnets turned off) to split up the plane and consequently define cracks. The next step is to run the met_jcePolyCrackUp script. This can be done by picking the menu item:

 JCE > FX > Polygonal Crack Up ...

Running the script with the plane selected causes the creation of two top-level groups, which by default are named polyCrackUp and polyCrackUpSoft. The first group holds two invisible copies of the polygonal plane which act as goal objects for the softbody held under the second group. This second group also holds the particle object which controls the softbody.

Cracking is driven by a run-time-after-dynamics expression which shrinks down the faces of the polygonal object when it detects a downward force. This shrinking of the faces causes cracks to open up. The secret behind this technique is that one of the two invisible goal objects is the same as the original polygonal object while the second is a copy that has had all of its faces shrunken down.
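The blending between the two goal objects can be sketched in plain Python (names and the force-to-weight mapping here are illustrative, not the script's actual expression). Each goal point is interpolated from the intact plane toward the face-shrunken copy in proportion to the downward force:

```python
def crack_blend(orig_pts, shrunk_pts, force_y, strength=1.0):
    """Blend each goal point from the original plane toward the
    face-shrunken copy in proportion to downward force (force_y < 0).
    Returns the blended goal positions that drive the softbody."""
    w = min(1.0, max(0.0, -force_y * strength))  # 0 = intact, 1 = fully cracked
    return [tuple(o + w * (s - o) for o, s in zip(op, sp))
            for op, sp in zip(orig_pts, shrunk_pts)]
```

With no downward force the weight is zero and the plane stays intact; as the force grows the faces pull toward their shrunken positions and the gaps between them open as cracks.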

The particle object has several attached attributes which affect the evaluation of the particle expression. Some of these are:

Maya Plug-ins

metParticleLink Node

The plugin can be used to control Maya object translation and rotation attributes through particles. The translation values will track particle positions, and rotations will be controllable through particle expressions via attributes named twistPP and twistAxisPP. To facilitate the creation of a metParticleLink scene node with attachments for controlling selected objects, the JCE FX menu has the entry Link Objects to Particles .... This brings up the following window:

metParticleLink window

The two buttons on this interface offer two methods of creating a metParticleLink node: the first is for attaching objects under a selected group node to an existing particle object while the second creates a new particle object which will control the positions and rotations of selected objects. The first method preserves current particle positions and will create new instances of the geometry under the selected group node so that all existing particles have geometry to control. The second preserves current object positions by creating a particle at each object's current position. Forces which move the particles will move their connected objects as well.
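The twist mapping can be illustrated outside Maya. This plain-Python sketch (not the plug-in's API) rotates a linked object's local point by twistPP radians about twistAxisPP using the standard Rodrigues formula, then translates it to the particle position:

```python
import math

def rotate_about_axis(p, axis, angle):
    """Rodrigues rotation of point p about a unit axis by angle (radians)."""
    ax, ay, az = axis
    c, s = math.cos(angle), math.sin(angle)
    dot = p[0] * ax + p[1] * ay + p[2] * az
    cross = (ay * p[2] - az * p[1],
             az * p[0] - ax * p[2],
             ax * p[1] - ay * p[0])
    return tuple(p[i] * c + cross[i] * s + axis[i] * dot * (1 - c)
                 for i in range(3))

def link_object(particle_pos, twist, twist_axis, local_pt=(1.0, 0.0, 0.0)):
    """Place a linked object's local point: rotate by twistPP about
    twistAxisPP, then translate to the particle position."""
    r = rotate_about_axis(local_pt, twist_axis, twist)
    return tuple(r[i] + particle_pos[i] for i in range(3))
```

A particle expression would set twistPP and twistAxisPP per particle; here a quarter turn about the world up axis swings the object's local X point onto Y before translating it.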


The footprint tool discussed at the top of this page uses a simple method of footprint generation where copies of foot imprint geometry are placed under the foot contact positions and made visible as feet strike the ground. While this approach is simple and requires little overhead, it is only useful when the prints are not too close to camera. A more realistic looking approach is to have the ground surface deform with actual contact from geometry. To facilitate this, the metImprintNode node plug-in works with a polygonal patch that conforms to match the ground surface shape and uv assignments and then deforms as connected foot geometry presses against it. For increased realism an outer rim can be made to form on the ridge of the print, as would happen with mud as material is squeezed outward. Using patches placed at foot impact locations is much more efficient than modifying the original ground geometry since it limits the number of collision detection calculations required and it allows high resolution geometric detail to be used only where needed.
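The imprint-with-rim idea can be sketched in plain Python (a simplified model, not the plug-in's deformer: the "foot" is a sphere, the patch is a height field, and the rim falloff is linear):

```python
import math

def imprint(grid, centre, radius, rim_width=0.5, rim_scale=0.3):
    """Push grid heights down where a spherical 'foot' penetrates and
    raise a rim just outside the contact, like mud squeezed outward.
    grid: {(x, z): height}; centre: sphere centre (cx, cy, cz)."""
    cx, cy, cz = centre
    out = {}
    for (x, z), h in grid.items():
        d = math.hypot(x - cx, z - cz)
        if d < radius:
            # depth of the sphere's underside directly above this point
            depth = cy - math.sqrt(radius ** 2 - d ** 2)
            out[(x, z)] = min(h, depth)
        elif d < radius + rim_width:
            # raise a rim that tapers off away from the contact edge
            out[(x, z)] = h + rim_scale * (radius + rim_width - d)
        else:
            out[(x, z)] = h
    return out
```

Restricting this evaluation to a small patch at each impact location, rather than the whole ground mesh, is what keeps the collision and detail costs low.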

To use the plug-in, select the impacting foot geometry and then the ground geometry and run the MEL command metImprint. This will create a groupNode that has below it two polygonal mesh subnodes. One is the reference geometry, which is a polyPlane-generated grid, and the other is the node which will hold the generated geometry. Do not worry if the generated geometry does not appear right away, as the node is only active during a specified frame range.

Note that the reference geometry does not have to be a grid and can be replaced by any mesh object, but usually the polyPlane grid works out just fine.


A metImprintNode has the following control parameters:

To reduce the computational load on Maya, set the start and end times to match the actual collision period for each footprint and then bake the impact deformations into the geometry.


The Maya API includes methods for querying the intersection of a ray with a polygonal mesh object or NURBS object. In addition, it will give the UV coordinates of that intersection point for NURBS geometry (but not for meshes). This plugin gives MEL scripts access to that functionality and can optionally compute UV coordinates for mesh intersection points as well. While these functions may be slow, they can be quite useful.
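The mesh UV computation the plug-in adds comes down to finding the barycentric coordinates of the hit on a triangle and using them to interpolate that triangle's UVs. A plain-Python sketch (not the plug-in's code) of ray/triangle intersection via the standard Moller-Trumbore test, which yields those barycentric coordinates directly:

```python
def ray_triangle(orig, direc, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray/triangle test. Returns (t, u, v) where t is
    the hit distance and (u, v) are barycentric coordinates usable to
    interpolate the triangle's UVs; returns None if there is no hit."""
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(direc, e2)
    det = dot(e1, p)
    if abs(det) < eps:
        return None                    # ray parallel to triangle plane
    inv = 1.0 / det
    tvec = sub(orig, v0)
    u = dot(tvec, p) * inv
    if u < 0.0 or u > 1.0:
        return None
    q = cross(tvec, e1)
    v = dot(direc, q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv
    return (t, u, v) if t > eps else None
```

The interpolated UV at the hit is then uv0*(1-u-v) + uv1*u + uv2*v for the triangle's three UV values.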

Shaders and Slim Templates




This for the most part is a classic cloud shader -- very much like the RAT smokeNfire shader. Unlike smokeNfire, however, it offers fall-off with distance from camera -- which is important since dense regions can become brighter due to surface color accumulation -- and it supports a Darken Below parameter to give the impression of self shadowing. It also has an additional grain pattern which is added to the lump noise for additional pattern complexity. To get different patterns on each particle, export the particleId parameter from Maya by creating a particle attribute named rmanFparticleId and use a creation rule to set it equal to the actual particle Id. To get the pattern to scale with the radius of each particle, create a particle attribute in Maya named rmanFradiusPP and use expressions to set it equal to the actual radiusPP attribute. Specular highlight values are output with an AOV variable named _specular which can be used to render a secondary image.
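Inside the shader, the two imported attributes decorrelate and scale the pattern per particle. A minimal plain-Python sketch of the idea (the shader's actual noise and offset constants are not shown; the 123.456 offset here is arbitrary):

```python
def particle_pattern(p, particle_id, radius, noise):
    """Evaluate a pattern for one particle: scale the lookup point by
    the particle's radius (rmanFradiusPP) and offset it by its id
    (rmanFparticleId) so every particle gets a unique pattern."""
    offset = particle_id * 123.456  # arbitrary decorrelating offset per id
    return noise(tuple(c / radius + offset for c in p))
```

Scaling the lookup by radius keeps the pattern's apparent frequency constant as particles grow or shrink, and the id offset stops neighbouring particles from sharing identical noise.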

In addition to the cloud shader there is a similar Slim template float function which has the following UI:


This function can be found under the Slim menu as File> Create Appearance> Meteor> Float> Pattern> metCloud. Since it is a shader template rather than a complete shader, it can be integrated with other Slim templates to achieve a wide range of effects, but its primary usefulness is for rendering spherical particle objects. This is because it is a noise pattern that can read and react to the imported per-particle attributes: rmanFparticleId and rmanFradiusPP.


This Slim template float function can be used as a displacement value to create images of cracked mud. It can be found under the Slim menu:

 Create Appearance> Meteor> Float> Pattern> metMudCracks



Rib Generators



While Maya fluids can be used to create some rather stunning images, the rendering time can be quite long and the lack of support for motion blur can often be a problem. Also, it may not be usable when the scene includes models that utilize Renderman displacements. For these reasons and more, a tool was created for utilizing Maya fluid data as the basis for volumetric renders within Renderman. The visual results differ from Maya's, but sometimes even that can be a good thing.

The transfer of fluid data from Maya to Renderman is done through an MTOR RIB Export plug-in. RIB exporters are invoked by MTOR during the creation of the scene description file used by Renderman (called a RIB file) for each frame that is to be rendered. They query Maya for state information, which in this case means the fluid data for the frame, and use that information either to create RIB geometry which has a special shader assignment or to include a special atmosphere shader. The mtorFluidExport UI looks as follows:


This interface has a lot of controls, most of which have associated help pop-ups which appear when the info icon next to a parameter is clicked. Since help information is available for individual parameters, only general concepts will be discussed here.

Since RIB generators cannot be directly attached to fluid objects, the RIB generator must be named such that it has a world_ prefix, and the object to be rendered must be identified with the Fluid Object parameter.

Two Basic Techniques: 3D Geometry with a Surface Shader vs. Global Atmosphere Shader

Fluid based data can be included in the scene either as 3D geometry with a surface shader or as a global atmosphere shader. The technique to be used is selected with the GeoType parameter. The advantage of using 3D geometry is that there can be multiple fluid objects in a scene whereas there can be only one global atmosphere shader. A big disadvantage to the geometry approach is that the camera's front clipping plane can never cross into the geometry or else the fluid will disappear, since the surface shader will not be evaluated if it is not visible. Also, ray tracing must be turned on when using geometry, which can slow down rendering when the scene is complicated.

Another difference between the two approaches occurs with motion blur and sample rate. Ordinarily, objects which are closer to camera need to be sampled at higher resolution and end up receiving more motion blur due to camera motion. However, Renderman treats volumetric shaders as if they were a surface color immediately in front of or behind an associated surface, so their sample rate and motion blur are the same as the surface's. In the case of 3D geometry, the interior volumetric shader is treated as a color immediately behind the surface, while with an atmosphere shader it is treated as if it were immediately in front of its surface. Using 3D geometry gives the user control over the distance from camera used for these evaluations, whereas an atmosphere shader can only be evaluated on the surface furthest from camera.

Renderman Shading Strategy

The Renderman approach to shader evaluation does not always work well for volumetric effects. To overcome some technical issues, the people at Pixar created an alternative "shading strategy" which they call VPVolumes. To set the default shading strategy for surfaces, create a ribbox named world_Ribbox and place the following line within it:

  Attribute "shade" "string strategy" ["vpvolumes"] "float volumeintersectionpriority" [0]

When using the VPVolumes shading strategy, there is no need to attach the "matte" attribute to surfaces, since the surfaces will automatically be treated as matte surfaces (meaning they are opaque but have no alpha value).

Exporting Fluid Data

Fluid data can be exported either as part of the RIB generation process or prior to it. Either way, the fluid_data_export plug-in must be loaded. Exporting fluid data as part of RIB generation saves the user from doing it as an extra step, but it may not work if RIBs are to be generated remotely, due to limitations with Maya fluid license availability or plug-in availability on remote machines. The export parameters in the shaders are shown below:


ExportMode can be set to Do Not Use, Non-Animated, or Animated. If Do Not Use is selected, then fluid data will not be used and the resulting volumetric pattern rendered will be the result of the fractal settings alone. This is equivalent to having fluid with a density of "one" everywhere. If Non-Animated is selected, then the fluid data exported for a single frame will be used for all frames. The frame used is specified with the Export Frame parameter. This data must be exported prior to the RIB generation of any other frame or else a file-not-found error message will be generated. If the Animated mode is selected, then a separate fluid data file will be used on each frame.
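The three modes amount to a simple rule for which fluid data file (if any) a given frame reads. A plain-Python sketch of that rule (the file-name pattern here is hypothetical, not the exporter's actual naming):

```python
def fluid_data_file(mode, frame, export_frame, base="fluid"):
    """Resolve which fluid-data file a frame should read under each
    ExportMode. Returns None when no fluid data is to be used."""
    if mode == "Do Not Use":
        return None  # fractal settings alone; density treated as one everywhere
    if mode == "Non-Animated":
        # one exported frame is reused for every rendered frame
        return "%s.%04d.dat" % (base, export_frame)
    if mode == "Animated":
        # a separate data file per rendered frame
        return "%s.%04d.dat" % (base, frame)
    raise ValueError("unknown ExportMode: %s" % mode)
```

This also makes the failure mode clear: in Non-Animated mode, the Export Frame's file must exist before any other frame's RIB is generated, or the lookup fails.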

The No Export parameter specifies whether fluid data is to be exported along with RIB generation or whether it must have been exported previously. There is an export tool that can be invoked with the Maya menu: JCE> FX> Export Fluid Data .... It has the following interface:


Adding Fractal Based Texture Detail


Fractal based 3D texture patterns can provide aesthetically interesting detail with relatively low processing requirements as compared to fluid simulations. For scenes with limited or no fluid movement, this texture addition can radically reduce the amount of computer memory and processing power needed. In fact, it can even replace fluid data altogether for general atmosphere.

Choosing Step Size and Data Cache Settings


Volumetric patterns are generated by stepping through space along the line of sight from the camera and collecting density and color information. Shadow information is acquired by stepping through space from each volumetric sample toward each light source and collecting density information along that path as well. (For expediency's sake, only density information is collected for shadow calculations and not color absorption values.) Choosing a proper VolumeStep parameter value is critical to efficient rendering. A value that is too large will result in flickering fluctuations as the camera moves, and values that are too small will require excessively long render times. In theory, step size should be relative to pixel size, which means that it can increase with distance from camera; as a result there is a VolumeStepDepthAdj parameter which, when set to one, increases step size in proportion to the distance from the camera. In practice, only testing and more testing has proven effective in finding the ideal step size, and the VolumeStepDepthAdj parameter tends to cause artifacts when the camera is moving quickly, even when it has a small value.

The VolumeStep parameter is in units of the original fluid grid object.
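The marching behaviour can be sketched in plain Python (an illustration of the sampling pattern, not the shader's code; the linear growth formula for the depth adjustment is an assumption):

```python
def step_sizes(t_near, t_far, base_step, depth_adj=0.0):
    """March from t_near to t_far along the view ray, collecting sample
    positions. With depth_adj > 0 the step grows in proportion to the
    distance from camera (the VolumeStepDepthAdj idea); depth_adj = 0
    keeps a constant VolumeStep-sized step."""
    t, samples = t_near, []
    while t < t_far:
        samples.append(t)
        t += base_step * (1.0 + depth_adj * t)
    return samples
```

With the depth adjustment on, distant regions receive far fewer samples, which is where the render-time savings (and, per the text, the potential artifacts under fast camera motion) come from.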

Value caching is a method of speeding up rendering times and eliminating flickering due to aliasing as the camera moves. The CacheRatio is in units of the original fluid grid, so sampled density and shadow values for a given point in space are independent of camera position, which can help in the elimination of aliasing. If flickering occurs, either reduce the CacheRatio to filter out high frequency information or decrease the step size to increase the sampling frequency.

Depth Cue Parameters


Depth cueing distance from camera can do a lot to add realism to an image. The DepthCueRegions parameter sets front, middle, and rear depth-cue regions. When fade mode is selected, opacity will be ramped up in the front region and brightness ramped down in the rear region. If BGR mode is selected, then the default color for the regions will become Blue, Green, and Red respectively. This will enable a compositor to separate near, middle, and far regions. A third choice, fade up + BGR, is the same as BGR except that the first region has its opacity ramped up in addition to being colored Blue.

Units are in World Space.
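The three modes can be sketched in plain Python (an illustration of the region logic with linear ramps assumed; the shader's actual ramp shapes are not shown):

```python
def depth_cue(depth, regions, mode="fade"):
    """Classify a sample by camera depth into the front/middle/rear
    DepthCueRegions and return (opacity_scale, brightness_scale, tint).
    regions = (front_end, rear_start, rear_end) in world-space units."""
    front_end, rear_start, rear_end = regions
    region = 0 if depth < front_end else (1 if depth < rear_start else 2)
    opacity = brightness = 1.0
    tint = None
    if mode in ("fade", "fade up + BGR") and region == 0:
        opacity = depth / front_end  # ramp opacity up through the front region
    if mode == "fade" and region == 2:
        # ramp brightness down through the rear region
        brightness = max(0.0, 1.0 - (depth - rear_start) / (rear_end - rear_start))
    if mode in ("BGR", "fade up + BGR"):
        tint = ("blue", "green", "red")[region]  # per-region compositing tint
    return opacity, brightness, tint
```

The BGR tints let a compositor key out the near, middle, and far regions independently, as the text describes.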

Color Parameters