VDS Setup

Wouldn’t it be cool to create a setup for your own Data Standard customizations? A setup with a version number, secure to install, easy to deploy, and simple to upgrade? It’s easier than you think. For this purpose, we use the open-source WiX toolset. We need a batch file like this:

"c:\Program Files (x86)\WiX Toolset v3.11\bin\heat.exe" dir ..\VDS -cg VDSGroup -out VDSGroup.wxs -gg -srd -dr DataStandardFolder
"c:\Program Files (x86)\WiX Toolset v3.11\bin\candle.exe" VDSGroup.wxs
"c:\Program Files (x86)\WiX Toolset v3.11\bin\candle.exe" Customer1.wxs -dPVersion=1.0.1
"c:\Program Files (x86)\WiX Toolset v3.11\bin\light.exe" Customer1.wixobj VDSGroup.wixobj -out Customer1.msi -b ..\VDS
del VDSGroup.wxs
del *.wix*
pause

and a master XML instruction file like this:

<Wix xmlns='http://schemas.microsoft.com/wix/2006/wi'>
  <?define UpgradeCode="041e09fb-6cde-431f-8955-d9a019cb8a35"?>
  <?define ComponentGUID="041e09fb-6cde-431f-8955-d9a019cb8a35"?>
  <Product Name='Customer1 VDS Setup' Manufacturer='coolOrange' Id='*' UpgradeCode='$(var.UpgradeCode)' Version='$(var.PVersion)' Language='1033' Codepage='1252'>
    <Package Id='*' Description="Customer1 VDS Installer" Manufacturer='coolOrange' Keywords='Installer' InstallerVersion='100' Languages='1033' Compressed='yes' SummaryCodepage='1252'/>
    <Upgrade Id='$(var.UpgradeCode)'>
      <UpgradeVersion OnlyDetect='no' Property='PREVIOUSFOUND' Minimum='1.0.0' IncludeMinimum='yes' Maximum='$(var.PVersion)' IncludeMaximum='no'/>
    </Upgrade>
    <Media Id='1' Cabinet='Sample.cab' EmbedCab='yes' DiskPrompt='CD-ROM #1'/>
    <Property Id='DiskPrompt' Value="Customer VDS Installation [1]"/>
    <Directory Id='TARGETDIR' Name='SourceDir'>
      <Directory Id='CommonAppDataFolder' Name='ProgramData'>
        <Directory Id='AutodeskFolder' Name='Autodesk'>
          <Directory Id='VaultFolder' Name='Vault 2017'>
            <Directory Id='ExtensionFolder' Name='Extensions'>
              <Directory Id='DataStandardFolder' Name='DataStandard'>
                <Component Id='DataStandardFolderComponent' Guid='$(var.ComponentGUID)'>
                  <CreateFolder/>
                </Component>
              </Directory>
            </Directory>
          </Directory>
        </Directory>
      </Directory>
    </Directory>
    <Feature Id='Complete' Level='1'>
      <ComponentGroupRef Id='VDSGroup'/>
      <ComponentRef Id='DataStandardFolderComponent'/>
    </Feature>
    <InstallExecuteSequence>
      <RemoveExistingProducts Before="InstallInitialize"/>
    </InstallExecuteSequence>
  </Product>
</Wix>

OK, let’s take a step back. You diligently store your Data Standard customizations in a dedicated folder, right? That folder has the same structure as the Data Standard folder itself.

Therefore, you could just take what you have in your clean customization folder and compile it into a simple setup. For this purpose, we use three tools from the WiX toolset: heat.exe for gathering the folder/file information, candle.exe for compiling that information into an object (binary) file, and light.exe for linking the object files and the actual files into an MSI setup.

In my example, under Customer1 I have a Setup folder and a VDS folder. The VDS folder contains the sub-folders with the corresponding VDS files; the Setup folder contains the setup files.

As you can see, we have the make.bat (see the first code block above) and the Customer1.wxs (see the XML code above). Now, if you install the WiX toolset and create the same structure as in this example, you can run make.bat and you will get a Customer1.msi that you can install.

OK, let’s add some more detail. We start with the batch file, which we call make.bat. As you can see in the code, it first collects all the folders and files within the VDS folder via heat.exe:

heat.exe dir ..\VDS -cg VDSGroup -out VDSGroup.wxs -gg -srd -dr DataStandardFolder

The first argument, dir, should be clear: it tells heat.exe where to start collecting information. In our case, it’s one folder up (..\) and then into the VDS folder. The -cg argument gives a kind of title, a component group, to the collected information; we call it VDSGroup. The result of the harvest is saved into the file VDSGroup.wxs. The -gg switch generates GUIDs for the harvested components, and -srd suppresses the root directory so that only its content is harvested. The last argument, -dr DataStandardFolder, is a placeholder directory reference we will use later.

The next line is

candle.exe VDSGroup.wxs

which compiles the information generated by heat.exe in the previous line. The next one is again candle.exe, but this time with our master file, which we will talk about in a minute.

candle.exe Customer1.wxs -dPVersion=1.0.1

What you see here is that we pass the version for our setup as an argument (-dPVersion). So, just increase the version and you can update your previous setup: the previous version will be removed and replaced with the new one.

Finally, we have

light.exe Customer1.wixobj VDSGroup.wixobj -out Customer1.msi -b ..\VDS

which takes the compiled Customer1.wixobj and VDSGroup.wixobj files and links them into an MSI setup. light.exe also packs the original files into the setup, therefore we need to tell it where those files are (-b ..\VDS), just like we did for heat.exe in the first step.

Let’s now have a look at Customer1.wxs; you have the XML code above. In this file we have information about the setup name and the author, so just go ahead and replace Customer1 with your customer or project name, and replace coolOrange with your company name. In the very first two lines, you’ll see

<?define UpgradeCode="041e09fb-6cde-431f-8955-d9a019cb8a35"?>
<?define ComponentGUID="041e09fb-6cde-431f-8955-d9a019cb8a35"?>

The sequence of numbers here is called a GUID and uniquely identifies this setup for future updates. You need your own GUIDs, so please go to http://www.guidgenerator.com and generate new GUIDs for your setup. DO NOT USE THESE GUIDs!!! Have you seen all the exclamation marks? DON’T USE THESE GUIDs! MAKE YOUR OWN, and replace the ones in the file with yours. Generate new GUIDs for each new project setup you make.
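
By the way, if you have a PowerShell console at hand, you don’t even need the website. The snippet below is just a small convenience sketch that prints two fresh GUIDs as ready-to-paste define lines:

# Generate two fresh GUIDs, one for the UpgradeCode and one for the ComponentGUID,
# and print them as ready-to-paste <?define?> lines
$upgradeCode   = [guid]::NewGuid().ToString()
$componentGuid = [guid]::NewGuid().ToString()
"<?define UpgradeCode=""$upgradeCode""?>"
"<?define ComponentGUID=""$componentGuid""?>"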

I’m skipping most of the elements, as they are fine as they are, so let’s jump to the Directory elements.

<Directory Id='TARGETDIR' Name='SourceDir'>
  <Directory Id='CommonAppDataFolder' Name='ProgramData'>
    <Directory Id='AutodeskFolder' Name='Autodesk'>
      <Directory Id='VaultFolder' Name='Vault 2017'>
        <Directory Id='ExtensionFolder' Name='Extensions'>
          <Directory Id='DataStandardFolder' Name='DataStandard'>
            ......

This section defines the folder structure where the setup will install the files. Each Directory entry must have its own unique Id. The second Directory element points to the ProgramData folder, and all the following Directory entries are the physical sub-folders. Notice the innermost Directory entry? It’s called DataStandardFolder. If you remember, in the batch file the last argument of heat.exe (-dr) is also DataStandardFolder. This way, the data collected by heat will be attached to our folder structure at exactly this point, so the sub-folders harvested by heat.exe become sub-folders of the structure defined in Customer1.wxs.

So, this Customer1.wxs already covers what is needed to create a VDS setup. Just check that your folder structure points to the appropriate Vault version (2016, 2017, 2018, etc.), set your GUIDs, change the description, and you are good to go!

Every time you want to create a new version of the setup, just increase the version in the batch file and run make.bat. You will get a new setup with the corresponding version number. Give it a few tries: you will see how, by installing a new version of the setup, old files are removed and new files installed.

WiX is highly capable, so if you want to create fancier setups, you can walk through the tutorial posted here. This blog post covers just the basics, but you can go further. For example, we create client-side setups that contain VDS files, powerEvents files, and other client-side stuff, and we also create setups for powerJobs, applying the same logic. If you have different sources that you want to combine into one setup, you can run heat.exe several times, compile the resulting wxs files via candle, and then combine them all with light.exe by passing all the object files and adding all the source folders via the -b switch (-b source1 -b source2 -b source3); a rough sketch follows below. Good luck!
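
Here is that multi-source sketch, written in PowerShell instead of batch. The powerEvents source folder, the EventsGroup component group and the PowerEventsFolder directory reference are placeholder names for whatever your own structure uses; your master wxs would also need a matching Directory element and a second ComponentGroupRef.

$wix = "c:\Program Files (x86)\WiX Toolset v3.11\bin"
# Harvest each source folder into its own component group
& "$wix\heat.exe" dir ..\VDS -cg VDSGroup -out VDSGroup.wxs -gg -srd -dr DataStandardFolder
& "$wix\heat.exe" dir ..\powerEvents -cg EventsGroup -out EventsGroup.wxs -gg -srd -dr PowerEventsFolder
# Compile the harvested fragments and the master file
& "$wix\candle.exe" VDSGroup.wxs
& "$wix\candle.exe" EventsGroup.wxs
& "$wix\candle.exe" Customer1.wxs -dPVersion=1.0.2
# Link everything into one MSI, passing every source folder via -b
& "$wix\light.exe" Customer1.wixobj VDSGroup.wixobj EventsGroup.wixobj -out Customer1.msi -b ..\VDS -b ..\powerEvents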

 


Tracing the Vault API

Picasso once said, "Good artists copy; great artists steal." You want to write some Vault API code, in .NET or PowerShell, and are fighting with the right sequence of calls and the appropriate arguments? Why not click through the sequence in the Vault client and copy the calls the Vault client makes?

The Vault client communicates with the Vault server via HTTP (SOAP web services). Tools like Fiddler can monitor the HTTP traffic, so you can spy on what the Vault client says to the Vault server and learn from it. Doug Redmond wrote about this years ago in this blog post. However, the SOAP communication is quite verbose, and because it is not plain XML (it’s multipart), Fiddler cannot show you the content in a nice, readable way.

Good news: Fiddler can be extended. We did this and created vapiTrace (Vault API trace).

Just copy the vapiTrace extension to the C:\Program Files (x86)\Fiddler2\Inspectors folder. Start Fiddler, and you should see the vapiTrace tabs. Now, when you log in to Vault (don’t use localhost), you’ll start seeing the client-server traffic, and by clicking on the individual communication lines, the vapiTrace tabs will show the request and the response in a simpler, readable way.

Let’s say you want to know how to create a new Vault folder via the API. Just go into Vault, create the folder manually, pick the corresponding command in Fiddler, and see which Vault API functions have been called and with which arguments. The best part is that you can now delete your folder in Vault and try to create it via the API by calling the same function with the same arguments. Just keep in mind that only arguments with values are transferred, so it might be that your Vault API function requires 5 arguments while in Fiddler you see only 3. That means the two missing arguments were null.
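
To give an idea, creating a folder could then look roughly like the following PowerShell sketch, run in a Data Standard or powerVault context where the $vault connection object is available. The folder name is just an example, and the AddFolder argument order shown here (name, parent folder id, library-folder flag) is an assumption; always double-check the function name and arguments against what you actually see in your vapiTrace capture.

# Get the root folder ($) to use as the parent
$rootFolder = $vault.DocumentService.GetFolderRoot()
# Create a new, non-library folder under the root.
# The argument order (name, parent folder id, isLibraryFolder) is an assumption -
# verify it against the request you recorded in Fiddler/vapiTrace.
$newFolder = $vault.DocumentService.AddFolder("MyNewFolder", $rootFolder.Id, $false)
$newFolder.FullName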

Now, Fiddler traces every HTTP connection, so you will have a lot of noise. Activate filtering by going to the Filters tab and setting a filter that shows only URLs containing “AutodeskDM”.

So now you can experiment with the Vault API by performing your action in Vault by hand first, looking in the Fiddler trace at how Vault does it, and then simply doing the same.


CAD BOM via powerVault

Some versions ago, the CAD BOM tab was added to Vault Data Standard. Great stuff, as it gives you the ability to view the CAD BOM without the need for items, so it’s great for Vault Workgroup customers as well as for Vault Professional. The tab reads its data from the so-called file BOM blob, a property on the file that contains all the CAD BOM information. This property is automatically filled when you check in a CAD file, and you can read its content via the Vault API DocumentService.GetBOMByFileId. The object provided by this API is quite complex: it contains the structured BOM, the parts-only view, information about purchased parts, phantom assemblies, external references, virtual components, and all the other beauty of a CAD BOM.

The implementation of the CAD BOM tab handles only the basic cases. If the BOM contains data beyond simple components, the result is either wrong or empty. We have spent quite some time over the past years interpreting the BOM blob object in Vault and implemented that logic within powerVault in the Get-VaultFileBOM cmdlet. This cmdlet gives you the correct, structured CAD BOM in a super simple way. As powerVault is free, why not use it for showing a correct and complete CAD BOM in Vault?

With the Data Standard CAD BOM you get two files: FileBOM.ps1 (addinVault folder) and CAD BOM.xaml (Configuration/File folder). To keep it simple, we will replace the content of these files. To be safe, make a copy first, and preferably copy the backup to a separate folder in order to prevent conflicts with other PS1 files.

Let’s start with the PowerShell script FileBOM.ps1. The new content looks like this:

function GetFileBOM($fileID)
{
  $file = Get-VaultFile -FileId $fileID
  $bom = Get-VaultFileBOM -File $file._FullPath -GetChildrenBy ExactVersion #ExactVersion,LatestVersion,LatestReleasedVersion,LatestReleasedVersionOfRevision
  $dsWindow.FindName("bomList").DataContext = $bom
  return $bom
}

Pretty simple. With Get-VaultFile we get a complete file object; with Get-VaultFileBOM we get the BOM. The interesting thing about Get-VaultFileBOM is the parameter GetChildrenBy, which lets you define whether you want to see the BOM as it was last checked in, or how the BOM would look using the latest version, the latest released version, or the latest released version of the current revision. More details about these options can be found here.

The XAML file CAD BOM.xaml is also pretty simple:

<?xml version="1.0" encoding="utf-8"?>
<UserControl xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" x:Name="MainWindow" xmlns:WPF="clr-namespace:CreateObject.WPF;assembly=CreateObject">

<DataGrid Name="bomList" AutoGenerateColumns="False" IsReadOnly="True" ColumnWidth="Auto" HorizontalGridLinesBrush="WhiteSmoke" VerticalGridLinesBrush="WhiteSmoke">
  <DataGrid.Columns>
    <DataGridTemplateColumn>
      <DataGridTemplateColumn.CellTemplate>
        <DataTemplate>
          <Image Source="{Binding Thumbnail.Image}" Height="35"/>
        </DataTemplate>
      </DataGridTemplateColumn.CellTemplate>
    </DataGridTemplateColumn>
    <DataGridTextColumn Header="Position" Binding="{Binding Bom_PositionNumber}"/>
    <DataGridTextColumn Header="Part Number" Binding="{Binding _PartNumber}"/>
    <DataGridTextColumn Header="Quantity" Binding="{Binding Bom_Quantity}"/>
    <DataGridTextColumn Header="Unit" Binding="{Binding Bom_Unit}"/>
    <DataGridTextColumn Header="Material" Binding="{Binding Bom_Material}"/>
    <DataGridTextColumn Header="Type" Binding="{Binding Bom_Structure}"/>
    <DataGridTextColumn Header="State" Binding="{Binding _State}"/>
    <DataGridTextColumn Header="Revision" Binding="{Binding _Revision}"/>
    <DataGridTextColumn Header="Name" Binding="{Binding _Name}"/>
    <DataGridTextColumn Header="Title" Binding="{Binding _Title}"/>
    <DataGridTextColumn Header="Description" Binding="{Binding _Description}"/>
  </DataGrid.Columns>
</DataGrid>
</UserControl>

You can take the code above and replace the content of the corresponding files with it. Restart Vault, and enjoy!

I know, you want to sort the BOM. Well, this is a bit tricky, but doable. The code in FileBOM.ps1 must be tweaked a bit. These are the rows responsible for sorting:

$dsWindow.FindName("bomList").Items.SortDescriptions.Clear()
$sort = New-Object System.ComponentModel.SortDescription("BOM_PositionNumber","Ascending")
$dsWindow.FindName("bomList").Items.SortDescriptions.Add($sort)
$dsWindow.FindName("bomList").Items.Refresh()

As you can see, in the second row you define the property you want to sort by and the sort direction.

Oops, the BOM is sorted, but the sort order is alphabetical, not numeric. Well, BOM_PositionNumber is a string, so let’s transform it into a number:

$bom | ForEach-Object {
    $_.BOM_PositionNumber = [int]$($_.BOM_PositionNumber)
}

Here is the complete code:

function GetFileBOM($fileID)
{
  $file = Get-VaultFile -FileId $fileID
  $bom = Get-VaultFileBOM -File $file._FullPath -GetChildrenBy ExactVersion #ExactVersion,LatestVersion,LatestReleasedVersion,LatestReleasedVersionOfRevision
  $bom | ForEach-Object {
    $_.BOM_PositionNumber = [int]$($_.BOM_PositionNumber)
  }
  $dsWindow.FindName("bomList").DataContext = $bom
  $dsWindow.FindName("bomList").Items.SortDescriptions.Clear()
  $sort = New-Object System.ComponentModel.SortDescription("BOM_PositionNumber","Ascending")
  $dsWindow.FindName("bomList").Items.SortDescriptions.Add($sort)
  $dsWindow.FindName("bomList").Items.Refresh()
  return $bom
}

As you can see, we get the BOM, cycle through it and make the position numeric, pass the BOM to the XAML dialog, and then sort by the property we like. That’s it.

If you want to add additional properties to the grid, just go ahead and change the XAML. If you want to see which other properties are available, set AutoGenerateColumns to True; all the properties provided by Get-VaultFileBOM will become visible. You can then pick the ones you like, add them to your column list, and set AutoGenerateColumns back to False.

I hope you enjoy your new CAD BOM tab!


powerEvents

 

Happy New Year!!! We start the new year with a gift for you, a new product: powerEvents. The short story is that all the Vault client events, such as lifecycle change, check-in, check-out, etc., can now be easily consumed via a PowerShell script.

Why should you care? Let’s suppose you want to fill or empty some properties automatically on a lifecycle transition. Or set some properties, maybe taken from the parent folder, when a file is checked in. Or queue a job only for a specific type of file or category on a lifecycle transition. Or prevent a lifecycle transition when a combination of property values does not match. Or save item and BOM information into a file every time the item changes. Or perform some custom actions when a folder gets created. Or… I think you get it.

The Vault client provides events for a lot of actions, such as check-in, check-out, lifecycle change, etc. So far, in order to take advantage of these events, you had to write .NET code with Visual Studio. With powerEvents, you can now just put some simple logic in a PowerShell script file and powerEvents will do the rest.

powerEvents is a Vault client extension, so the setup needs to be installed on each Vault client. After the installation, you’ll find the PowerShell scripts under c:\ProgramData\coolOrange\powerEvents. We already exposed all available events in the scripts and grouped them by object type; so, for instance, there is a file.ps1 which contains all the events related to files. If you have logic that might be useful across several actions, you can place it in the common.psm1 file.

By default, powerEvents listens to all the events. In the script files you can see all the available events and start doing some tests. For each event, you have three functions: a GetRestriction, a Pre and a Post function. GetRestriction is the first one called and allows you to prevent the execution of an event. Let’s say you don’t want to allow lifecycle transitions at all: you can go into file.ps1, scroll down to the GetFileStateChangeRestriction function, and set a restriction there, like this:

function GetFileStateChangeRestriction($FromFiles, $ToFiles, $Restrictions)
{
    $Restrictions.AddRestriction("my object","my custom restriction")
}

In this case, every attempt to perform any lifecycle transition with any file will be blocked and you’ll get the usual Vault dialog telling you which object (file, item, etc.) is affected by which restriction.

If there are no restrictions, the Pre function is executed; in the case of the file lifecycle state change, that function is PreFileStateChange. Here you can perform actions before the lifecycle transition is carried out. At this point you cannot stop the event any more, but you can do something with the object or with other Vault objects.

At the end, the Post function is called, in our case PostFileStateChange. At this stage the event is completed, and you can do something with the affected object afterwards. To be more specific: if you want to clean up some properties when you move from Released back to Work In Progress, you have to put your code in the Post function, as during the Pre the file is still released and cannot be changed. If you want to set some properties on releasing a file, you’ll have to do it during the Pre, while the file is still Work In Progress, and not during the Post, as at that stage the file is already released and blocked by the permissions.
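
As a rough sketch of such a clean-up: the property name, the state name and the parameter list below are only examples and assumptions; keep the function signature that powerEvents generated in your file.ps1, and check the Update-VaultFile parameters against the powerVault documentation.

# Parameter names follow the pattern of the other file event functions and are an assumption
function PostFileStateChange($FromFiles, $ToFiles)
{
    foreach ($file in $ToFiles)
    {
        # after moving back from Released to Work In Progress, clear an approval property
        if ($file._State -eq "Work In Progress")
        {
            # "Approved By" is just an example user-defined property
            Update-VaultFile -File $file._FullPath -Properties @{"Approved By" = ""}
        }
    }
}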

Within these functions you have access to the Vault API via the usual $Vault variable, and you can make use of powerVault, which simplifies dealing with Vault. You have access to the complete Vault, so for instance, while you react to a file event, you can access folders or items and do something with them, like creating a link, adding, modifying, etc. With a bit more work, you can also interact with the user via dialogs and the like.

By leveraging powerGate, you can also interact with your ERP system. As an example, you could check during a lifecycle change whether the part number is filled, and whether a corresponding item exists in the ERP and is in the correct state. If not, you can interrupt the lifecycle transition and inform the user that the expected prerequisites are not met.
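
A rough sketch of such a check, placed in the restriction function, could look like this; the “Items” entity set, its Number key and the property names are pure assumptions about your powerGate service and Vault configuration:

function GetFileStateChangeRestriction($FromFiles, $ToFiles, $Restrictions)
{
    foreach ($file in $ToFiles)
    {
        # no part number -> block the transition right away
        if ([string]::IsNullOrEmpty($file._PartNumber))
        {
            $Restrictions.AddRestriction($file._Name, "Part number is empty")
            continue
        }
        # look up the item in the ERP; entity set and key name are assumptions
        $erpItem = Get-ERPObject -EntitySet "Items" -Keys @{ Number = $file._PartNumber }
        if ($erpItem -eq $null)
        {
            $Restrictions.AddRestriction($file._Name, "No ERP item found for part number $($file._PartNumber)")
        }
    }
}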

While you are playing with powerEvents, you can turn on the so-called DebugMode, a setting in the C:\ProgramData\Autodesk\Vault 2017\Extensions\powerEvents\powerEvents.dll.config file. If DebugMode is set to True, every change made in the PowerShell scripts takes effect immediately; you don’t need to restart Vault. This allows you to test and tune the code without constantly restarting Vault. Once you are done with testing and ready for production, set DebugMode back to False. This brings significant performance improvements, as the scripts are not read on every event and the PowerShell runspace is reused rather than restarted each time.

Also, every line of code has its cost. Therefore, you can decide which events you want to use and which not: those that are not relevant to you should be either commented out or removed. At the next start of Vault, powerEvents will only register the events present in the scripts. By default, all the events are active, so you will have to manually comment out or remove the unnecessary ones.

powerEvents is now available as a Beta version and can be downloaded from here: http://www.coolorange.com/en/beta.php

Being a Beta version, it requires no license and has a limited lifetime that expires on August 1st. If you like the product and need pricing information, reach out to sales@coolorange.com.

We already have powerEvents in action at some customers and have collected great feedback. However, before we officially launch this product in April with the other 2018 products, we would like to get some more feedback from you. So, whatever comes to your mind, feel free to send an email to support@coolorange.com with your questions, comments and suggestions.

We already love powerEvents, and so do the customers and resellers that supported us in the development of this tool. We are sure you will love it too and find many ways to improve your Vault workflows via powerEvents.


A “simple” Vault import tool


The Vault Autoloader is the usual tool for the initial bulk import of data into Vault. Simple, efficient, but not flexible. Here we present an alternative: welcome the bcpMaker.

Since Vault 2011, VaultBCP (aka Data Transfer Utility, DTU for short) lets you import (and export) data to and from Vault. VaultBCP is a command-line tool developed by the Autodesk Vault team which exports the Vault database to a set of XML files and allows reimporting those XML files into another Vault. We use this technique a lot for migrating from potentially any other system to Vault, and also for “cleaning up” Vaults or merging Vaults together. Unfortunately, there is no documentation and the tool is not officially supported, although widely used, so we had to figure out ourselves how it works. Over the past years we developed the bcpToolkit, which contains the bcpDevkit, needed for creating custom VaultBCP packages, and the bcpChecker, which lets you preview, check and manipulate VaultBCP packages.

The bcpDevkit is a .NET assembly for developers that makes it simple to create custom VaultBCP packages. We use it ourselves for custom migration projects from SmarTeam, TeamCenter, Enovia, Meridian, BlueCielo, and the like. Migrating from all these systems is possible with reasonable effort, including the complete history and without data loss! The documentation of the bcpDevkit is on our wiki.

Over the past year, we have been asked for a simple alternative to the Vault Autoloader: something that lets you pick a folder and import everything into Vault, with some custom logic. The requirements are to set each file to the appropriate category, lifecycle, state and properties; to give the folder structure some more content and details; and maybe to combine the files from the source folder with some metadata from the ERP, an Excel file or the like.

With the bcpDevkit, anyone with some development skills could create such an import tool. The result is a VaultBCP package which can be previewed with the bcpChecker and then imported into Vault. However, it does require some .NET development skills.

Based on these basic requirements, we’ve created the bcpMaker. It’s a simple .NET command-line application which takes all the content from a given folder and transforms it into a VaultBCP package that can be imported into Vault. For the basic configuration, there is a small XML config file that lets you define file types you want to exclude (.bak, .tmp, etc.), folders that should be ignored (OldVersions, _V, etc.), which category shall be applied to which file extension, and more. The bcpMaker starts by collecting all files and adding them to a VaultBCP package, and for all the Inventor files the references are recreated, so the resulting package can be imported into Vault with all the Inventor references intact. Where Inventor references could not be resolved during the creation of the package, a log file reports the problems and you can fix them, or not. Yes, you can also ignore the problems and just import the files in the given quality, according to the motto “shit in, shit out”. In other words, if an assembly has some references that can be resolved and some that cannot, the good references will be in Vault after the import and the bad references will be reported in the log file. When you open the assembly from within Vault, you will be prompted to fix the problems, but at least you have the good part imported.

With the bcpMaker we also deliver the source code, which is simply a sample implementation of the bcpDevkit. So, you are free to tweak the code, smarten the logic and extend the capabilities. If you don’t feel comfortable doing that, reach out to us and we can create a custom version for you, based on your specific requirements. Otherwise, just prepare a folder with all the content you want to import, structured the way you think is appropriate, run the bcpMaker, check the outcome with the bcpChecker and then run the import against your Vault.

We strongly recommend preparing the Vault beforehand with all the required behaviors and running the VaultBCP import against that configured Vault. This way, if the VaultBCP package contains some unexpected settings, the import will fail instead of creating the behaviors. As you know, once behaviors such as categories, lifecycles, properties, etc. are created in Vault and used, it’s not possible to remove them. So, in order to prevent misconfigurations due to an import, we suggest configuring the Vault first, disabling the automatic behavior creation, and then running the import.

Anyway, we hope that with the bcpMaker, importing even huge amounts of data (1,000,000+ records) becomes simple and smart. And in case you need something custom, just reach out to us. If you want to try the bcpMaker, here is the download, and its source code. For VaultBCP itself, you need to contact Autodesk.


Cloudy times… part 3


The third API we investigated is the Viewer. It allows you to display 2D and 3D models within your web browser in a very simple way. In order to show CAD data, you need to convert your file into a compatible format (SVF), which can be done via the Model Derivative API. The Model Derivative API is quite simple and allows you to translate a long list of different file types into different formats. A complete overview can be found here.

While the conversion into a viewable format has a cost, viewing the translated model does not. The translated model usually resides in the Autodesk cloud, in a so-called bucket, and you can define how long the file shall stay there (24 hours, 30 days, or forever). For details about the retention period, check this link. However, the SVF file does not necessarily have to stay in the Autodesk cloud. You could also keep it in your own location, such as Amazon S3, Dropbox, etc., or even locally, and make it available through your firewall.

Technically, the viewer could also be used on a local intranet, but internet access on each client is mandatory, as all the components are loaded from the web and the viewing file must also be accessible to the viewer engine.

The steps to embed the viewer and get your file visible are very simple. There are excellent tutorials that bring you step by step through the process.

While getting your file viewed is very simple, creating more advanced viewing features requires more effort. There are standard menus, functions and dialogs that can be activated quickly, such as exploding the model, showing the component structure, properties and the like. That works well and quickly. After that, it’s up to you. The viewer API provides a ton of functions and events, so you can really create very cool stuff.

We did not have the time to go down the whole rabbit hole, so what we post here is surely just a subset of what the viewer can do for you. It is possible to change the color, highlight, or set the transparency of components programmatically, or to select components in the viewing space or the component tree. You also get events on the selected object, so you can react to user actions by emphasizing components or adding additional capabilities.

One thing we noticed is that for Inventor files you get access to all the file properties and also the display name from the model browser, like Screw:1. This way, it’s possible to understand which component or instance is meant and, for example, use the Design Automation API for executing custom code, as soon as Inventor is also supported there.

All the functions like zoom, rotate, pan, etc. are accessible via the API, so it’s possible to think of scenarios where the displayed model is positioned programmatically and elements are highlighted, hidden, colored, etc. in order to create a very custom experience. There are examples for creating virtual reality scenarios with Google Cardboard and much more.

Our impression is that the possibilities are endless, as the core API provides access to all the relevant data, so it’s just about you creating the web page with the according JavaScript to do the cool stuff.

In the short time, we found just one limitation. We wanted to access the features of a component: let’s say you have an Inventor part and want to figure out whether there are holes in it. This information is not available in the viewer, as during the conversion into SVF the geometry is translated into meshes and loses the underlying intelligence. It is possible to access the edges and so identify circular edges and the like, but with curved surfaces this becomes a mess. We don’t yet have a real customer scenario where this limitation might become an issue, so at the moment it’s just a note.

Our takeaway is that the API feels mature, very capable, very powerful, with lots of functions. There are many examples, so there is probably already a piece of code for the thing you want to do. We can see many opportunities where the viewer can be used to create an interactive experience for customers, helping them better understand the product, provide feedback, fix problems, and more.


Cloudy times… part 2


In our previous post we spoke about our findings on the Data Management API during our week in Munich at the Autodesk Forge Accelerator event. Here we’d like to share our findings on the Design Automation API, formerly known as AutoCAD I/O.

The Design Automation API allows you to process your CAD files with custom code, without needing the corresponding CAD application. At the moment it’s just AutoCAD, but as the API has been renamed from AutoCAD I/O to Design Automation, one can presume that further applications such as Inventor, Revit and probably more will be supported soon. As an example, if you want to transform your CAD file into a PDF, replace the title block, or even do extensive geometric operations, you can do this with the Design Automation API. To put it simply, it’s an AutoCAD (actually the AutoCAD Core Console) in the cloud, without a user interface.

In order to use the Design Automation API, you obviously need either an Autodesk developer account to create your own application, or you allow a third-party application to run under your Autodesk account. Either way, the way the Design Automation API works is pretty simple.

You define a so-called activity, where you declare the input arguments, for instance the DWG file that should be processed, and maybe some additional arguments like a template or supporting files. You also define the output arguments, like the resulting DWG file. Additionally, you define your AppPackage, which contains the source code, in .NET, ARX or Lisp, that will do the actual operation.

Let’s suppose you have a ton of legacy AutoCAD files and you want to update them all by replacing the title block with a new one and maybe adjusting the layers, line styles, etc. Your input arguments might be the given legacy DWG file and an additional DWG or DWT/DWS as template. Your output argument will be the processed DWG file, and your AppPackage will be the special code for doing the clean-up.

So, the activity is something you define once, and then, for each DWG file, you run a so-called work item against the activity. The work item is the actual job that gets executed: based on the selected activity, it requires the input arguments and sends the result back to the output argument.

Now, both the input and output arguments are URLs, therefore the files must be in the cloud, either on A360, Dropbox, OneDrive, Amazon S3 (Simple Storage Service), etc., or you might open a connection to your files through your firewall. The files are not saved by the Design Automation API; they are just processed, and neither stored nor cached nor the like. Therefore, the operation is quite secure.

We played for a while with the Design Automation API and wanted to see how stable and reliable it is. We tried to process an empty file, a corrupted DWG file (we deliberately removed some bytes from it), a picture renamed to DWG, a real DWG named with a different extension, a DWG with X-Refs without uploading the X-Refs, provoked timeouts and tried some other dirty things. The Design Automation API always came back with a solid and descriptive error message. Well done!!!

We wanted to know: in the case where we need to process thousands and thousands of files, can we rely on this thing? Do we get back meaningful error messages so that we can point the customer to the problematic files with a clear problem description? So far the answer is yes! This is impressive, as the same type of task on the desktop would be a mess: we would need to deal with crashes, dialogs, and the like, and processing more than one file at once would mean dealing with parallelisation. With Design Automation, all this comes for free.

By the way, if you wonder whether only single DWG files or also DWG files with X-Refs and OLE references can be processed, the answer is of course yes! You can either send eTransmit DWG packages to the Design Automation API, where all the referenced objects are zipped or embedded, or the input arguments of the activity can accept a sort of list of files.

In terms of processing lots of files, there is a so-called quota that limits the number of files you can process (5 per minute) and the execution time (5 min.). While the number of files processed per minute can be increased on demand, the execution time cannot, which makes sense, as any complex operation should be doable in less than 5 minutes. To better understand the files-per-minute limit: it is per user. So, let’s take the earlier example where we want to process thousands of files. If this happens under one user account, then 5/min. is the limit. But if we have, say, 10 users under which the app runs, then each user has the 5/min. limit, which results in processing 50 files per minute. If you want to know more about the quota, you can check out this link: https://developer.autodesk.com/en/docs/design-automation/v2/overview/quotas/
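
If you drive the work items from a script, a crude way to respect that limit is to throttle your own submissions. In the sketch below, Submit-WorkItem is a hypothetical placeholder for whatever code you use to post a work item, and $dwgFiles stands for your list of files to process:

$perMinute = 5   # quota: work items per minute and user
$counter = 0
foreach ($dwgFile in $dwgFiles)
{
    Submit-WorkItem -InputUrl $dwgFile.Url   # hypothetical helper, not part of the Forge SDK
    $counter++
    if ($counter % $perMinute -eq 0)
    {
        # wait out the minute before submitting the next batch
        Start-Sleep -Seconds 60
    }
}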

The documentation says something about “no rollover”, which means that if you process just 3 files in a minute, you cannot process the spared 2 files in the next minute; or, if you purchased a certain amount of credits for processing a certain number of files and you don’t consume them, those credits will not be refunded or carried over to the next month or payment period. The pricing is still under evaluation, so this might change over time, or maybe not; let’s see. However, if a file is being processed when you reach the quota limit, the file will still be processed.

Besides the limitations in volume and time, there are other, security-related limitations to ensure that the application runs smoothly and does not affect or infect other files. The Design Automation API runs “sandboxed”, meaning each session is independent from the others and limited in its capabilities. For example, the code you submit via the AppPackage cannot access the underlying operating system, start additional applications, or access the internet. Therefore, your AppPackage must contain everything needed to execute the code and cannot download additional stuff or run code outside of the Design Automation API. We did not have too much time, but of course we tried to jailbreak it, without success. Actually, we had the pleasure to meet a very cool Autodesk guy, Tekno Tandean, who is responsible for security for AutoCAD I/O, and we had a very interesting conversation about how security works with the Design Automation API. We can just say that Autodesk is taking this very seriously and so far they have done a great job!!!

We also had the pleasure to meet Krithika Prabhu, senior product manager for AutoCAD I/O. She is very interested in enhancing the AutoCAD I/O feature set. At the moment there is a standard AppPackage for creating a PDF, so creating an activity for that is super simple. However, Krithika is looking into further opportunities to provide simple functions for typical scenarios in order to make it very easy for us developers to get started with this API and get more done with less. I’m sure we are just scratching the surface and more cool stuff will come quite soon.

Our takeaway is that, compared to the Data Management API, the Design Automation API looks way more mature, which is no surprise as this API has been around for more than two years. It’s well documented with a lot of samples, and it’s really fun to work with; we were able to get the API working in a very short time. We can already see a lot of scenarios where this API can be used: mass file operations, simplified workflows such as one-click configurations, custom conversions into other formats, and much more. While we are investigating which products we can create for you, if you have any need in the area of AutoCAD and its verticals, let’s have a chat, as this stuff is pretty cool!
