powerEvents

 

Happy New Year!!! We start the new year with a gift for you: a new product, powerEvents. The short story is that all the Vault client events, such as lifecycle changes, check-in, check-out, etc., can now be easily consumed via a PowerShell script.

Why should you care? Let's suppose you'd like to fill or empty some properties automatically on a lifecycle transition. Or set some properties, maybe taken from the parent folder, when a file is checked in. Or queue a job for just a specific type of file or category on a lifecycle transition. Or prevent a lifecycle transition when a combination of property values does not match. Or save item and BOM information into a file every time the item changes. Or perform some custom actions when a folder gets created. Or… I think you get it.

The Vault client provides events for a lot of actions, such as check-in, check-out, lifecycle change, etc. So far, in order to take advantage of such events, you had to write .Net code with Visual Studio. With powerEvents, you can now just put some simple logic into a PowerShell script file and powerEvents will do the rest.

powerEvents is a Vault client extension, so the setup needs to be installed on each Vault client. After the installation, you'll find the PowerShell scripts under C:\ProgramData\coolOrange\powerEvents. We already exposed all available events in the scripts and grouped them by object type. So, for instance, you have a file.ps1 which contains all the events related to files. If you have logic that might be useful across several actions, you can place it into the common.psm1 file.

By default, powerEvents listens to all the events. In the script files you can see all the available events and start doing some tests. For each event, you have three functions: a GetRestriction, a Pre and a Post. The GetRestriction is the first function called and allows you to prevent the execution of an event. Let's say you don't want to allow lifecycle transitions: you can go into the file.ps1, scroll down to the GetFileStateChangeRestriction function, and set a restriction there, like this:

function GetFileStateChangeRestriction($FromFiles, $ToFiles, $Restrictions)
{
    $Restrictions.AddRestriction("my object","my custom restriction")
}

In this case, every attempt to perform any lifecycle transition with any file will be blocked and you’ll get the usual Vault dialog telling you which object (file, item, etc.) is affected by which restriction.
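Of course, the restriction can also be conditional. Here is a minimal sketch that blocks the transition only when a property is empty; it assumes the file objects expose their properties the way powerVault file objects do (e.g. $file._Name for the file name, $file.'Part Number' for a user-defined property), and the property name is just an example:

function GetFileStateChangeRestriction($FromFiles, $ToFiles, $Restrictions)
{
    foreach ($file in $FromFiles)
    {
        # 'Part Number' is a placeholder UDP; _Name is assumed to be the file name member
        if ([string]::IsNullOrWhiteSpace($file.'Part Number'))
        {
            $Restrictions.AddRestriction($file._Name, "Part Number must be filled before a state change")
        }
    }
}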

If there are no restrictions, the Pre function is executed; in the case of the file lifecycle state change, that is PreFileStateChange. Here, you can perform actions before the lifecycle transition takes place. At this point you can no longer stop the event, but you can do something with the object or with other Vault objects.

At the end, the Post function is called, in our case PostFileStateChange. At this stage the event is completed, and you can do something with the affected object afterwards. To be more specific, if you'd like to clean up some properties when you move from Released back to Work In Progress, you have to put your code in the Post function, as during the Pre the file is still released and cannot be changed. If you'd like to set some properties on releasing a file, you'll have to do it during the Pre, while the file is still Work In Progress, and not during the Post, as at that stage the file is already released and blocked by the permissions.
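To make the Pre/Post split concrete, here is a minimal sketch of both handlers for exactly this scenario. The parameter lists simply mirror the GetRestriction signature, and the property names and the Update-VaultFile usage assume powerVault conventions (full Vault path plus a hashtable of properties); check the stubs in file.ps1 for the exact signatures powerEvents generates:

function PreFileStateChange($FromFiles, $ToFiles)
{
    # Runs before the transition, so the files can still be modified in their old state.
    # Example: stamp a (placeholder) 'Release Date' property while the file is still Work In Progress.
    foreach ($file in $FromFiles)
    {
        Update-VaultFile -File $file._FullPath -Properties @{ 'Release Date' = (Get-Date).ToShortDateString() }
    }
}

function PostFileStateChange($FromFiles, $ToFiles)
{
    # Runs after the transition, so files that were locked before (e.g. Released) can now be touched.
    # Example: clear the property again when a file went from Released back to Work In Progress.
    foreach ($file in $ToFiles)
    {
        if ($file._State -eq 'Work In Progress')
        {
            Update-VaultFile -File $file._FullPath -Properties @{ 'Release Date' = '' }
        }
    }
}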

Within these functions you have access to the Vault API via the usual $Vault variable, and you can make use of powerVault, which simplifies dealing with Vault. You have access to the complete Vault, so for instance, while you react to a file event, you can access folders or items and do something with them, like creating a link, adding, modifying, etc. With a bit more work, you can also interact with the user via dialogs and the like.
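For example, while handling a file event you could look up the parent folder through the Vault API and copy something from it onto the file. A minimal sketch, assuming a powerVault file object ($file) and the Update-VaultFile cmdlet, with 'Project' as a placeholder property; a helper like this could live in common.psm1 and be called from the according Post handler:

function Copy-FolderNameToFile($file)
{
    # Derive the parent folder path from the file's full Vault path
    $folderPath = $file._FullPath.Substring(0, $file._FullPath.LastIndexOf('/'))

    # $vault gives access to the raw Vault API; GetFolderByPath resolves the parent folder
    $folder = $vault.DocumentService.GetFolderByPath($folderPath)
    if ($folder)
    {
        # 'Project' is a placeholder user-defined property on the file
        Update-VaultFile -File $file._FullPath -Properties @{ 'Project' = $folder.Name }
    }
}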

By leveraging powerGate, you can also interact with your ERP system. As an example, you could check during a lifecycle change whether the part number is filled in, and whether a matching item exists in the ERP and is in the correct state. If not, you can interrupt the lifecycle change and inform the user that the expected prerequisites are not met.
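Extending the restriction function from above, such a check could look roughly like this. It assumes powerGate is loaded in the event script and that the ERP exposes its items through an entity set reachable with Get-ERPObject; the entity set name 'Items', the key field 'Number' and the status value 'Approved' are pure placeholders for your ERP configuration:

function GetFileStateChangeRestriction($FromFiles, $ToFiles, $Restrictions)
{
    foreach ($file in $FromFiles)
    {
        $partNumber = $file.'Part Number'
        if ([string]::IsNullOrWhiteSpace($partNumber))
        {
            $Restrictions.AddRestriction($file._Name, "Part Number is empty")
            continue
        }
        # 'Items', 'Number' and 'Approved' are placeholders for your ERP service
        $erpItem = Get-ERPObject -EntitySet 'Items' -Keys @{ Number = $partNumber }
        if (-not $erpItem -or $erpItem.Status -ne 'Approved')
        {
            $Restrictions.AddRestriction($file._Name, "No approved ERP item found for '$partNumber'")
        }
    }
}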

While you are playing with powerEvents, you can switch on the so-called DebugMode. This is a setting in the C:\ProgramData\Autodesk\Vault 2017\Extensions\powerEvents\powerEvents.dll.config file. If DebugMode is set to True, then each change made in the PowerShell scripts has immediate effect; you don't need to restart Vault. This allows you to test and tune the code without constantly restarting Vault. Once you are done with testing and ready for production, set DebugMode back to False. This brings a significant performance improvement, as the scripts will no longer be read on every event and the PowerShell runspace will be reused instead of being restarted with every event.

Also, every line of code has its cost. Therefore, you can decide which events you'd like to use and which not. Those that are not relevant to you should be either commented out or removed. At the next start of Vault, powerEvents will register only the events present in the scripts. By default, all the events are active, so you will have to manually comment out or remove the unnecessary ones.

powerEvents is now available as a Beta version and can be downloaded from here: http://www.coolorange.com/en/beta.php

Being a Beta version, it requires no license and has a limited lifetime that expires on August 1st. If you like the product and need pricing information, reach out to sales@coolorange.com.

We already have powerEvents in action at some customers and have collected great feedback. However, before we officially launch this product in April with the other 2018 products, we'd like to get some more feedback from you. So, whatever comes to your mind, feel free to send an email to support@coolorange.com with your questions, comments and suggestions.

We already love powerEvents, and so do the customers and resellers that supported us in the development of this tool. We are sure you will love it too and find many ways to improve your Vault workflows via powerEvents.


A “simple” Vault import tool


The Vault Autoloader is the usual tool used for the initial bulk import of data into Vault. Simple, efficient, but not flexible. Here we present an alternative: welcome the bcpMaker.

Since Vault 2011, VaultBCP (aka Data Transfer Utility, DTU for short) lets you import (and export) data into and from Vault. VaultBCP is a command-line tool developed by the Autodesk Vault team, which exports the Vault database to a set of XML files and allows such XML files to be re-imported into another Vault. We use this technique a lot for migrating from potentially any other system to Vault. We also use it for “cleaning up” Vaults or merging Vaults together. Unfortunately, there is no documentation and the tool is not officially supported, although widely used, so we had to figure out ourselves how it works. Over the past years we developed the bcpToolkit, which contains the bcpDevkit, necessary for creating custom VaultBCP packages, and the bcpChecker, which lets you preview, check and manipulate VaultBCP packages.

The bcpDevkit is a .Net assembly for developers that makes it simple to create custom VaultBCP packages. We use it ourselves for custom migration projects from SmarTeam, TeamCenter, Enovia, Meridian, BlueCielo, and the like. Migrating from all these systems is possible with a reasonable effort, including all the history and without data loss! The documentation of the bcpDevkit is on our wiki.

Over the past year, we have been asked for a simple alternative to the Vault Autoloader: something that lets you pick a folder and import everything into Vault, with some custom logic. The requirements are to set the files to the appropriate category, lifecycle, state and properties. Also, the folder structure should get some more content and details. And maybe the files from the source folder should be combined with some metadata from the ERP, an Excel file or the like.

With the bcpDevkit, anyone with some development skills could create such an import tool. Actually, the result is a VaultBCP package which can be previewed with the bcpChecker and then imported into Vault. However, it requires some .Net development skills.

Based on these basic requirements, we've created the bcpMaker. It's a simple .Net command-line application which takes all the content from a given folder and transforms it into a VaultBCP package that can be imported into Vault. For the basic configuration, there is a small XML config file that allows you to define which file types you'd like to exclude (.bak, .tmp, etc.), which folders should be ignored (OldVersions, _V, etc.), which category shall be applied to which file extension, and more. The bcpMaker starts by collecting all the files and adding them to a VaultBCP package, and for all the Inventor files the references are recreated. So, the resulting package can be imported into Vault and all the Inventor references will be fine. In cases where the Inventor references could not be resolved during the creation of the package, a log file reports the problems and you can fix them, or not. Yes, you can also ignore the problems and just import the files in the given quality, according to the motto “shit in, shit out”. In other words, if an assembly has some references that can be resolved and some that cannot, then the good references will be in Vault after the import and the bad references will be reported in the log file. When you open the assembly from within Vault, you will be prompted to fix the problems, but at least you have the good part imported.

With the bcpMaker we also deliver the source code, which is just a sample implementation of the bcpDevkit. So, you are free to tweak the code, smarten the logic and also extend the capabilities. If you don't feel comfortable doing that, then reach out to us and we can create a custom version for you, based on your specific requirements. In any case, if you prepare a folder with all the content you'd like to import and structure it the way you think is appropriate, then just run the bcpMaker, check the outcome with the bcpChecker and then run the import against your Vault.

We strongly recommend preparing the Vault beforehand with all the required behaviors and running the VaultBCP import against a configured Vault. This way, if the VaultBCP package contains some unexpected settings, the import will fail rather than create the behaviors. Keep in mind that once behaviors such as categories, lifecycles, properties, etc. are created in Vault and used, it's not possible to remove them. In order to prevent misconfigurations caused by an import, we suggest configuring the Vault first, disabling the automatic behavior creation and then running the import.

Anyway, we hope that with the bcpMaker, importing even huge amounts of data (1,000,000+ records) becomes simple and smart. And in case you need something custom, just reach out to us. If you'd like to try the bcpMaker, here is the download, and its source code. For VaultBCP itself, you need to contact Autodesk.


Cloudy times… part 3


The third API we investigated is the Viewer. It allows you to display 2D and 3D models within your web browser in a very simple way. In order to show the CAD data, you need to convert your file into a compatible format (SVF), which can be done via the Model Derivative API. The Model Derivative API is quite simple and allows you to translate a long list of different file types into various formats. A complete overview can be found here.

While the conversion into a viewable format has a cost, viewing the translated model is free. The translated model usually resides in the Autodesk cloud, in a so-called bucket, and you can define how long the file shall stay there (24 hours, 30 days, or forever). For details about the retention period, check this link. However, the SVF file does not necessarily have to stay in the Autodesk cloud. You could also keep it in your own location, such as Amazon S3, DropBox, etc., or even locally, and make it available through your firewall.
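To make the pipeline concrete, here is a rough PowerShell sketch that creates a bucket with a retention policy, uploads a file and requests the SVF translation via the Model Derivative API. The endpoints follow the Forge documentation at the time of writing; the client credentials, scopes, bucket key and file names are examples, so treat it as a sketch rather than production code:

$baseUrl = 'https://developer.api.autodesk.com'

# 2-legged OAuth token; client ID and secret come from your Forge app
$token = Invoke-RestMethod -Method Post -Uri "$baseUrl/authentication/v1/authenticate" -Body @{
    client_id     = $env:FORGE_CLIENT_ID
    client_secret = $env:FORGE_CLIENT_SECRET
    grant_type    = 'client_credentials'
    scope         = 'bucket:create data:write data:read'
}
$headers = @{ Authorization = "Bearer $($token.access_token)" }

# Create a bucket; policyKey 'transient' = 24 hours, 'temporary' = 30 days, 'persistent' = forever
$bucket = @{ bucketKey = 'my-sample-bucket'; policyKey = 'transient' } | ConvertTo-Json
Invoke-RestMethod -Method Post -Uri "$baseUrl/oss/v2/buckets" -Headers $headers -ContentType 'application/json' -Body $bucket

# Upload the CAD file and request the SVF translation used by the viewer
$object = Invoke-RestMethod -Method Put -Uri "$baseUrl/oss/v2/buckets/my-sample-bucket/objects/sample.ipt" -Headers $headers -ContentType 'application/octet-stream' -InFile 'C:\Temp\sample.ipt'
$urn = [Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes($object.objectId)).TrimEnd('=')
$job = @{ input = @{ urn = $urn }; output = @{ formats = @(@{ type = 'svf'; views = @('2d','3d') }) } } | ConvertTo-Json -Depth 5
Invoke-RestMethod -Method Post -Uri "$baseUrl/modelderivative/v2/designdata/job" -Headers $headers -ContentType 'application/json' -Body $job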

Technically, the viewer could also be used on a local intranet, but internet access on each client is mandatory, as all the components are loaded from the web and the viewable file must be accessible to the viewer engine.

Embedding the viewer and getting your file visible takes only a few very simple steps. There are excellent tutorials that bring you step by step through the process.

While getting your file viewed is very simple, creating more advanced viewing features requires more effort. There are standard menus, functions and dialogs that can be quickly activated, such as exploding the model, or showing the component structure, properties and the like. That works well and quickly. After that, it's up to you. The viewer API provides a ton of functions and events, so you can really create very cool stuff.

We didn't have the time to go down the whole rabbit hole, so what we post here is surely just a subset of what the viewer can do for you. It is possible to change the color, highlighting and transparency of components programmatically, or to select components in the viewing space or in the component tree. You also get events for the selected objects, so you can react to user actions by emphasizing components or adding additional capabilities.

One thing we noticed is that for Inventor files, you get access to all the file properties and also the display name from the browser, like Screw:1. This way, it's possible to understand which component or instance is meant and, for instance, use the Design Automation API to execute custom code, as soon as Inventor is also supported there.

All the functions like zoom, rotate, pan, etc. are accessible via the API, so it's possible to think of scenarios where the displayed model is positioned programmatically and elements are highlighted, hidden, colored, etc. in order to create a very custom experience. There are examples for creating virtual reality scenarios with Google Cardboard and much more.

Our impression is that the possibilities are endless, as the core API provides access to all the relevant data, so it's just a matter of creating the web page with the according JavaScript for doing the cool stuff.

In the short time available, we found just one limitation. We wanted to access the features of a component. Let's say you have an Inventor part and want to figure out whether there are holes in it. This information is not there in the viewer, as during the conversion into SVF the geometry is translated into meshes and loses the underlying intelligence. It is possible to access the edges and so identify circular edges and the like, but with curved surfaces this becomes a mess. We don't yet have a real customer scenario where this limitation might become an issue, so at the moment it's just a note.

Our takeaway is that the API feels mature, very capable and very powerful, with lots of functions. There are many examples, so there is probably already a piece of code for the thing you'd like to do. We can see many opportunities where the viewer can be used to create an interactive experience for customers, so they can better understand the product, provide feedback, fix problems, and more.


Cloudy times… part 2


In our previous post we spoke about our findings on the Data Management API, gathered during our week in Munich at the Autodesk Forge Accelerator event. Here we'd like to share our findings on the Design Automation API, formerly known as AutoCAD I/O.

The Design Automation API allows you to process your CAD files with custom code, without needing the according CAD application. At the moment it covers just AutoCAD, but as the API has been renamed from AutoCAD I/O to Design Automation, one can presume that further applications such as Inventor, Revit and probably more will be supported soon. As an example, if you'd like to transform your CAD file into a PDF, replace the title block, or even perform extensive geometric operations, you can do this with the Design Automation API. To put it simply, it's an AutoCAD (actually the AutoCAD Core Console) in the cloud, without a user interface.

In order to use the Design Automation API, you obviously need either an Autodesk developer account in order to create your own application, or you allow a 3rd-party application to run under your Autodesk account. Either way, the way the Design Automation API works is pretty simple.

You define a so-called activity, where you specify the input arguments, for instance the DWG file that should be processed, and maybe also some additional arguments like a template or supporting files. You also define the output arguments, like the resulting DWG file. Additionally, you define your AppPackage, which contains your source code, written in .Net, ARX or Lisp, that will do the actual operation.

Let's suppose you have a ton of legacy AutoCAD files and you want to update them all by replacing the title block with a new one, and maybe adjusting the layers, line styles, etc. Your input arguments might be the given legacy DWG file and an additional DWG or DWT/DWS as a template. Your output argument will be the processed DWG file, and your AppPackage will contain the special code for doing the clean-up.

So, the activity is something you define once; then, for each DWG file, you submit a so-called work item against the activity. The work item is the actual job that will be executed: based on the selected activity, it requires the input arguments and sends the result back to the output argument.

Now, both the input and output arguments are URLs, therefore the files must be in the cloud, either on A360, DropBox, OneDrive, Amazon S3 (Simple Storage Service), etc., or you might open a connection to your files through your firewall. The files are not saved by the Design Automation API; they are just processed and neither stored nor cached. Therefore, the operation is quite secure.
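As a sketch of what such a call looks like, here is how a work item against the stock PlotToPDF activity could be submitted from PowerShell. The endpoint and the argument names follow the public AutoCAD I/O v2 samples available at the time of writing, so treat them as assumptions; $headers is assumed to hold a valid 2-legged Forge token and the input URL is a placeholder:

$workItem = @{
    ActivityId = 'PlotToPDF'     # stock activity that plots a DWG to PDF
    Arguments  = @{
        InputArguments  = @(
            @{ Name = 'HostDwg'; Resource = 'https://my-storage.example.com/input.dwg' }
        )
        OutputArguments = @(
            # 'Generic' storage lets the service hand back a download URL for the result
            @{ Name = 'Result'; StorageProvider = 'Generic'; HttpVerb = 'POST' }
        )
    }
} | ConvertTo-Json -Depth 5

$response = Invoke-RestMethod -Method Post -Headers $headers -ContentType 'application/json' `
    -Uri 'https://developer.api.autodesk.com/autocad.io/us-east/v2/WorkItems' -Body $workItem

# The returned work item can then be polled until its status reports success or failure
$response.Id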

We played a while with the Design Automation API and wanted to see how stable and reliable it works. We tried to process an empty file, a corrupted DWG file (we deliberately removed some bytes from the DWG file), a picture renamed to DWG, a real DWG named with a different extension, and a DWG with X-Refs without uploading the X-Refs; we tried timeouts and some other dirty things. The Design Automation API always came back with a solid and descriptive error message. Well done!!!

We wanted to know: in case we need to process thousands and thousands of files, can we rely on this thing? Do we get back meaningful error messages, so that we can point the customer to the problematic files with a clear problem description? So far the answer is yes! This is impressive, as the same type of task on the desktop would be a mess. We would need to deal with crashes, dialogs, and the like, and processing more than one file at once would mean dealing with parallelization. With Design Automation, all this comes for free.

Btw, if you wonder whether only single DWG files can be processed, or also DWG files with X-Refs and OLE references, the answer is of course yes! You can either send eTransmit DWG files to the Design Automation API, where all the referenced objects are either zipped or embedded, or the input arguments of the activity can accept a list of files.

In terms of processing lots of files, there is a so-called quota that limits the number of files you can process (5 per minute) and the execution time (5 min.). While the number of files processed per minute can be increased on demand, the execution time cannot, which makes sense, as any complex operation should be doable in less than 5 min. To better understand the number of files per minute: it is related to a user. So, let's take the example from before, where we want to process thousands of files. If this happens under one user account, then 5/min. is the limit. But if we have, let's say, 10 users under which we let the app run, then each user has the 5/min. limit, which results in processing 50 files per minute. If you want to know more about the quota, you can check out this link: https://developer.autodesk.com/en/docs/design-automation/v2/overview/quotas/

The documentation states something about “no rollover”, which means that if you process just 3 files in a minute, you cannot process the spare 2 files in the next minute; or, if you purchased a certain amount of credits for processing a certain number of files and you don't consume the credits, they will not be refunded or carried over to the next month or payment period. The pricing is still under evaluation, so this might change over time, or maybe not, let's see. However, if a file is being processed while you reach the quota limit, the file will still be processed.

Besides the limitations on amount and time, there are other security limitations to ensure that the application runs smoothly and does not affect or infect other files. The Design Automation API runs “sandboxed”, which means that each session is independent from the others and is limited in its capabilities. For example, the code you've submitted via the AppPackage cannot access the underlying operating system, start additional applications, or access the internet. Therefore, your AppPackage must contain everything you need for executing the code and cannot download additional stuff or run code outside of the Design Automation API. We did not have too much time, but of course we tried to jailbreak it, without success. Actually, we had the pleasure to meet a very cool Autodesk guy, Tekno Tandean, who is responsible for security for AutoCAD I/O, and we had a very interesting conversation about how security works with the Design Automation API. We can just say that Autodesk is taking this very seriously, and so far they have done a great job!!!

We also had the pleasure to meet Krithika Prabhu, senior product manager for AutoCAD I/O. She is very interested in enhancing the AutoCAD I/O feature set. At the moment there is a standard AppPackage for creating a PDF, so creating an activity for that is super simple. However, Krithika is looking at further opportunities to provide simple functions for typical scenarios, in order to make it very simple for us developers to get started with this API and get more done with less. I'm sure we are just scratching the surface and more cool stuff will come quite soon.

Our takeaway is that compared to the Data Management API, the Design Automation API looks way more mature, which is not surprising, as this API has been around for more than 2 years. It's well documented with a lot of samples, and it's really fun to work with. We were able to get the API working in a very short time. We can already see a lot of scenarios where this API can be used: mass file operations, simplified workflows such as one-click configurations, custom conversions into other formats, and much more. While we are investigating which products we can create for you, in case you have any need in the area of AutoCAD and its verticals, let's have a chat, as this stuff is pretty cool!


Cloudy times…


This week we are attending the Forge Accelerator event in Munich, together with a lot of great Autodesk people and other Forge fans like us. Btw., Forge is the name under which Autodesk combines all the new cloud APIs. On https://developer.autodesk.com you can find all the available APIs and documentation, and on https://forge.autodesk.com you can find more details about Forge.

This week we took a closer look at the Data Management API, the Design Automation API and the Viewer API. Today I'll share the outcome of our findings about the Data Management API.

The Data Management API is responsible for dealing with file objects: upload, download, setting references and the like. Surely it will soon be extended with more capabilities.

From a user perspective there is a bit of confusion, as there is A360 Drive, which is a pure file collaboration service similar to DropBox, OneDrive, GoogleDrive, etc., then there is A360, and then there are BIM 360 Team and Fusion Team. For more clarity about the naming, have a look at https://blog.a360.autodesk.com/important-news-regarding-a360-team/.

A360 Drive has a desktop sync tool that syncs your files with it; however, there is no API. A360 is accessible through the Forge Data Management API, which is what we will talk about here, and it now has a desktop sync tool called Autodesk Drive, which you can download through Fusion Lifecycle as seen in this video. Actually, A360, Fusion Team and BIM 360 Team are just the names of the consumer applications; the technology underneath is all based on the Forge APIs. So, at the moment it's all in evolution, but Autodesk is moving very fast, so if you read this blog post in a few months it might no longer be valid, and the products, names and technologies may be well streamlined by then.

Anyway, let's talk about the Data Management API. It allows you to create hubs, projects, folders, items (files), versions, and references. So, you can create your custom upload tool for bringing all your data into the cloud, the way you want. As an example, we thought about how we could bring the files from Vault into the cloud. The sync tools (now called Autodesk Drive) will just upload the latest version of your files, but what if you want to take over the whole history? A simple approach could be to export the data from Vault via BCP, which generates a dump of all the data on the local disk as XML files, process the BCP package and upload the files, including their versions, to A360. This way, you would not only get the latest version of your files, but also the whole history. This is pretty cool!

We played with file versions and looked at whether the files must be uploaded in the right sequence, or whether it's possible to upload older versions at a later point in time. We could not make it work in the short time, but it seems to be possible; we will investigate further. We also played with references. We could recreate the references of an Inventor assembly and they were visible in the web user interface; however, the references were not resolved, as the viewer was not able to display the assembly. It turned out that at the moment just the Fusion 360 references are supported. However, thanks to one of the Autodesk fellows, we got a sneak peek at an upcoming API extension where Inventor references are also supported. So, I guess that very soon the scenario we are looking for will be possible. Exciting!!!

Our takeaway is that the Data Management API already offers all the basic functions needed to create projects, upload files and versions, set references, etc. The current documentation is pretty good, there are already lots of examples, and from what we've seen, there is more to come, very soon.


powerTree

Have you ever had the need to display, check and process structured data in a custom way? Let's say you want to release a complete assembly and want to perform a series of custom checks and actions on that structure. Please welcome powerTree!

I know, we need to find a new technology so that we can come up with new product names. But meanwhile, our love (and obsession) for PowerShell should be well known. And it will only grow, now that PowerShell has gone open source and is available on many platforms, and debugging capabilities have been introduced with PowerShell version 5.

Anyway, back to our problem. In recent projects, we faced the need to navigate through a structure and perform custom checks and operations. In our particular case, we had to check a quite large assembly, including all drawings, and test whether it complies with all the company's business rules. We actually had to release the complete assembly, including drawings, but in cases where for some reason the drawings could not be released, the parent assemblies could not be released either. The default Vault dialog cannot do this, so we had to invent something new.

Instead of doing a one-off customization, we decided to create a little tool that gives us the means to do this kind of customization in a flexible and repeatable way. So, we developed a dialog that shows data as a structure, with configurable actions. All the logic is once again in a PowerShell script, so that we can define and tweak the logic as we need, without affecting the dialog's source code. This way, the dialog becomes useful in many different situations.

Here is the sample implementation from our project. First, we had to collect the structure of the assembly, including all the drawings. Second, we had to check the compliance of the components from the bottom up. Third, we had to perform cleanup actions on the problematic components.

For collecting the components with drawings, we had to integrate the drawings into the structure, in order to ensure that the upper-level assembly can only be OK if all the children and related drawings are OK too. As drawings are parents of the model, we switched the positions of drawing and component, so that the drawing becomes the child of the parent assembly and the model becomes the child of the drawing. OK, have a look at this picture, it makes it easier:

[Screenshot: assembly structure in which the drawing 10018.idw sits between the parent assembly 10001.iam and the component 10018.iam]

In the example above, the assembly 10001.iam contains a child 10018.iam. However, the child 10018.iam has one or more drawings. Therefore, we switched the positions of model and drawing, so that 10018.idw becomes the child of 10001.iam and 10018.iam becomes the child of 10018.idw. This way, if we now perform our check and the drawing 10018.idw has a problem, the top assembly 10001.iam cannot be released.
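The restructuring itself is plain tree manipulation and easy to prototype in PowerShell. The following is only a conceptual sketch with hypothetical objects, not the actual powerTree object model: it takes a component that carries its children and its drawings and returns a node where the drawing is inserted between the parent and the model.

# Conceptual sketch: 'Name', 'Children' and 'Drawings' are hypothetical members
function Convert-ToCheckStructure($component)
{
    $modelNode = [PSCustomObject]@{
        Name     = $component.Name
        Children = @($component.Children | Where-Object { $_ } | ForEach-Object { Convert-ToCheckStructure $_ })
    }
    if ($component.Drawings)
    {
        # The drawing becomes the node handed back to the parent, and the model
        # becomes the child of the drawing (only the first drawing is handled here)
        $drawing = $component.Drawings | Select-Object -First 1
        return [PSCustomObject]@{
            Name     = $drawing.Name
            Children = @($modelNode)
        }
    }
    return $modelNode
}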

Another requirement was to be able to work in Vault while the dialog is open. So, in case the check finds errors, the user can fix the problems in Vault while keeping an eye on the list of problematic files.

The dialog can be extended with custom actions, for instance “Go To Folder”, which makes it simple to jump to the problematic file in Vault and fix the problem. However, any sort of action can be configured, and the same goes for Check and Process. Both buttons run a PowerShell function, so what happens during the check and the processing is up to you.

As mentioned earlier, you can load into the dialog whatever you want, the way you want. It just has to be a structure (a parent-child relation). The object types can be mixed. So, here for instance we load the folder structure and the according files in one dialog:

[Screenshot: powerTree dialog showing the folder structure together with the according files]

Here is another example, where we load the file structure, as in the first case, but also load the according items:

[Screenshot: powerTree dialog showing the file structure together with the according items]

Technically, it's also possible to combine external data sources. So, for instance in the case above, the items might come from the ERP in order to check whether every relevant component has a valid item.

So, the possibilities are endless. If you have the need to perform custom checks or custom operations over a custom data structure, then powerTree might be the cost-effective way to go. Talking about costs: this is not a product you can download or purchase. Given the complexity of the topic, we provide access to this tool in the context of a project. So, if you have the need for powerTree, just get in touch with us via sales@coolorange.com and ask about powerTree. We will then discuss your requirements with you, ensure that powerTree is the right tool for you and define how the configuration will be done.

I hope I was able to give you an overview of what powerTree can do for you, and I look forward to your feedback.

 


Let’s talk…


Over the past years, we have had the pleasure to provide you with information, tips, samples, code snippets, and first and foremost ideas on what can be done with Autodesk Data Management and the other topics we are working with. This was and still is our primary goal: to light up your imagination about what the products and technologies you already have at your fingertips could do for you, with minimal or reasonable customization effort.

So far your response has been great, in terms of page views, subscriptions to this blog and comments. Thank you very much for following us and giving us the energy to continue on this path!

We recognized that, especially in reacting to comments, we could do better. Most comments are contributions, corrections or just appreciation. Thanks again for all this support! Some comments are requests for help, because some code does not work as expected, there have been problems with the interpretation or implementation, or maybe the code is outdated. Reacting to those comments via the comment section of the blog is a bit cumbersome. Therefore, we have activated a forum (http://www.coolorange.com/forum), where topics like this can be discussed more effectively.

Thus, from now on, if you have a comment, contribution, correction or just want to share your appreciation, please continue to use the comment section of this blog. That is the right place for such comments.
In case you have issues with a posted idea or code sample, or you want to get into a discussion with us and others, or want to leave a comment or suggestion that is off topic and does not fit well with any existing blog post, then the forum is the better place. There we can have a relaxed and more extensive conversation. The conversation will still remain public, and so accessible to everyone, and therefore extends the content of the blog.

The forum can be reached via http://www.coolorange.com/forum. While the content is visible to guests, you will have to sign up in order to contribute. I do encourage every one of you to take advantage of this additional communication platform.

When using the forum, please keep in mind that the blog posts are deliberately kept short and simple, with the purpose of showing the way and creating interest and excitement. Therefore, the code published on this blog, although it does work, may not always be immediately applicable in a production environment. Also, we pursue new topics and ideas, and we seldom update old topics to new versions. However, the forum could be a good place for discussing such things.

Also, keep in mind that both the forum and the blog are free (no costs), although the effort for writing the posts and following up on the forum is tangible. Therefore, in case you need personal and timely assistance, please take advantage of our paid support. They can help you by reviewing your code, doing a remote session to take a look at your machine, writing a custom code snippet, and more. If you need this kind of professional support, just reach out via email to support@coolorange.com and they will provide more detailed information. However, use the forum to start the conversation.

Also, don't forget to use the Autodesk Vault Customization forum for Data Standard related topics and other Vault customization issues. We are quite active there as well. In case you're not sure which forum to use for Data Standard questions, the Autodesk forum is always a good choice: there is already a large VDS community there, it's visible to Autodesk, and questions and discussions around VDS are quite interesting for a larger audience.

With our new forum, we really want to provide a better way to interact on coolOrange blog-related topics. We hope you will enjoy the new communication tool. We definitely will enjoy the conversation with you.
