Cloudy times… part 3


The third API we investigated is the Viewer. It allows you to display 2D and 3D models within your web browser in a very simple way. In order to show your CAD data, you need to convert the file into a compatible format (SVF), which can be done via the Model Derivative API. The Model Derivative API is quite simple and allows you to translate a long list of file types into different formats. A complete overview can be found here.
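To make the translation step concrete, here is a hedged sketch of what such a job could look like. The helper function, bucket and file names are our own invention for illustration; the payload shape follows the Model Derivative v2 job endpoint as we understood it, where the URN is simply the base64-encoded objectId of the uploaded file.

```javascript
// Sketch: building a Model Derivative translation job with SVF output.
// Bucket and object names below are illustrative only.
function buildTranslationJob(objectId) {
  // The Model Derivative API identifies the source file by its
  // base64-encoded objectId (the "URN").
  const urn = Buffer.from(objectId).toString('base64');
  return {
    input: { urn: urn },
    output: { formats: [{ type: 'svf', views: ['2d', '3d'] }] }
  };
}

// The job would then be POSTed, with an OAuth token, to something like:
//   https://developer.api.autodesk.com/modelderivative/v2/designdata/job
const job = buildTranslationJob('urn:adsk.objects:os.object:mybucket/part.ipt');
console.log(JSON.stringify(job));
```

Once the translation completes, the resulting SVF is what the Viewer loads.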

While the conversion into a viewable format has a cost, viewing the translated model is free. The translated model usually resides on the Autodesk cloud, in a so-called bucket, and you can define how long the file should stay there (24 hours, 30 days, or forever). For details about the retention period, check this link. However, the SVF file does not necessarily have to stay on the Autodesk cloud. You could also keep it in your own location, such as Amazon S3, Dropbox, etc., or even locally, and make it available through your firewall.

Technically, the viewer could also be used on a local intranet, but internet access on each client is mandatory, as all the components are loaded from the web, and the viewable file must also be accessible to the viewer engine.

In order to embed the viewer and get your file visible, the steps are very simple. There are excellent tutorials that bring you step by step through the process.
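As a hedged illustration of what those tutorials walk you through, the embedding essentially boils down to an options object and a couple of viewer calls. The helper below is our own invention with placeholder values; the actual initialization calls only run in a browser with the viewer script loaded, so they are shown as comments.

```javascript
// Sketch: the minimal data the viewer needs. buildViewerOptions is a
// hypothetical helper; the token and URN values are placeholders.
function buildViewerOptions(accessToken, base64Urn) {
  return {
    env: 'AutodeskProduction',
    // The viewer calls this back whenever it needs a fresh token.
    getAccessToken: function (onSuccess) {
      onSuccess(accessToken, 3600); // token plus expiry in seconds
    },
    documentId: 'urn:' + base64Urn
  };
}

// In the browser, roughly:
//   Autodesk.Viewing.Initializer(options, function () {
//     var viewer = new Autodesk.Viewing.Private.GuiViewer3D(htmlContainer);
//     viewer.start();
//     Autodesk.Viewing.Document.load(options.documentId, onSuccess, onError);
//   });
const options = buildViewerOptions('<your-token>', 'dXJuOmFkc2sub2JqZWN0cy4uLg');
```

In a real page, the token would of course come from your server rather than being hard-coded.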

While getting your file viewed is very simple, creating more advanced viewing features requires more effort. There are standard menus, functions and dialogs that can be quickly activated, such as for exploding the model, or getting the component structure, properties and the like. That works well and quickly. After that, it’s up to you. The viewer API provides a ton of functions and events, so you can really create very cool stuff.

We did not have the time to go down the whole rabbit hole, so what we post here is surely just a subset of what the viewer can do for you. It is possible to change the color, highlighting and transparency of components programmatically, or to select components in the viewing space or the component tree. You also get events on the selected object, so you can react to user actions by emphasizing components or adding additional capabilities.
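As a hedged sketch of that selection round-trip: the pure helper below is our own invention and just formats the event, while the actual viewer calls (addEventListener, select, setThemingColor) require a running viewer instance in the browser and are therefore shown as comments.

```javascript
// Sketch: reacting to a viewer selection event. describeSelection is a
// hypothetical pure helper; the viewer calls in the comments assume an
// initialized `viewer` in the browser.
function describeSelection(event) {
  const ids = event.dbIdArray || [];
  return ids.length === 0
    ? 'nothing selected'
    : 'selected dbIds: ' + ids.join(', ');
}

// In the browser, roughly:
//   viewer.addEventListener(Autodesk.Viewing.SELECTION_CHANGED_EVENT, function (e) {
//     console.log(describeSelection(e));
//     // emphasize the picked components, e.g. tint them red:
//     // e.dbIdArray.forEach(function (id) {
//     //   viewer.setThemingColor(id, new THREE.Vector4(1, 0, 0, 1));
//     // });
//   });
console.log(describeSelection({ dbIdArray: [4, 7] })); // → selected dbIds: 4, 7
```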

One thing we noticed is that for Inventor files, you get access to all the file properties and also the display name from the browser tree, like Screw:1. This way, it’s possible to understand which component or instance is meant and, for instance, use the Design Automation API for executing custom code, as soon as Inventor is also supported there.

All the functions like zoom, rotate, pan, etc. are accessible via the API, so it’s possible to think of scenarios where the displayed model is positioned programmatically, and elements are highlighted, hidden, colored, etc. in order to create a very custom experience. There are examples for creating virtual reality scenarios with Google Cardboard and much more.

Our impression is that the possibilities are endless, as the core API provides access to all relevant data, so it’s just about you creating the web page with the according JavaScript for doing the cool stuff.

In the short time available, we could find just one limitation. We wanted to access the features of a component. Let’s say you have an Inventor part and want to figure out whether there are holes in it. This information is not there in the viewer, as during the conversion into SVF the geometry is translated into meshes and loses the underlying intelligence. It is possible to access the edges and so identify circular edges and the like, but with curved surfaces this becomes a mess. We don’t yet have a real customer scenario where this limitation might become an issue, so at the moment it’s just a note.

Our takeaway is that the API feels mature, very capable and very powerful, with lots of functions. There are many examples, so there is probably already a piece of code for the thing you’d like to do. We can see many opportunities where the viewer can be used to create an interactive experience that helps customers better understand the product, provide feedback, fix problems, and more.

Posted in Forge | Leave a comment

Cloudy times… part 2


In our previous post we spoke about our findings on the Data Management API during our week in Munich at the Autodesk Forge Accelerator event. Here we’d like to share our findings on the Design Automation API, formerly known as AutoCAD I/O.

The Design Automation API allows you to process your CAD files with custom code, without needing the corresponding CAD application. At the moment just AutoCAD is supported, but as the API has been renamed from AutoCAD I/O to Design Automation, it suggests that further applications such as Inventor, Revit and probably more will be supported soon. As an example, if you’d like to transform your CAD file into a PDF, replace the title block, or even perform extensive geometrical operations, you can do this with the Design Automation API. To put it simply, it’s an AutoCAD (actually the AutoCAD Core Console) in the cloud, without a user interface.

In order to use the Design Automation API, you obviously need either an Autodesk developer account in order to create your own application, or you allow a 3rd party application to run under your Autodesk account. Either way, the way the Design Automation API works is pretty simple.

You define a so-called activity, which specifies the input arguments, for instance the DWG file that should be processed, and maybe also some additional arguments like an extra template or supporting files. You also define the output arguments, like the resulting DWG file. Additionally, you define your AppPackage, which contains your source code (in .Net, ARX or Lisp) that will do the actual operation.

Let’s suppose you have a ton of legacy AutoCAD files and you want to update them all by replacing the title block with a new one, and maybe adjust the layers, line styles, etc. Your input arguments might be the given legacy DWG file and additional DWG or DWT/DWS files as templates. Your output argument will be the processed DWG file, and your AppPackage will contain the special code for doing the clean-up.

So, the activity is something you define once, and then, for each DWG file, you submit a so-called work item against the activity. The work item is the actual job that will be executed: based on the selected activity, it takes the input arguments and sends the result back to the output argument.
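To make the activity/work-item split concrete, here is a hedged sketch of the two payloads. Every identifier in it (ReplaceTitleBlock, HostDwg, TitleBlockTools, the script command) is invented for illustration, and the exact JSON shape may differ from the current Design Automation (AutoCAD I/O) endpoints.

```javascript
// Sketch: a one-time activity definition and a per-file work item.
// All names below are hypothetical.
function buildActivity() {
  return {
    Id: 'ReplaceTitleBlock',
    AppPackages: ['TitleBlockTools'],              // your custom code package
    Instruction: { Script: '_replacetitleblock\n' }, // command provided by the package
    Parameters: {
      InputParameters: [{ Name: 'HostDwg' }, { Name: 'Template' }],
      OutputParameters: [{ Name: 'Result', LocalFileName: 'result.dwg' }]
    }
  };
}

function buildWorkItem(dwgUrl, templateUrl) {
  return {
    ActivityId: 'ReplaceTitleBlock',
    Arguments: {
      InputArguments: [
        { Name: 'HostDwg', Resource: dwgUrl },
        { Name: 'Template', Resource: templateUrl }
      ],
      // Leaving the output resource empty asks the service to
      // provide a download URL for the result.
      OutputArguments: [{ Name: 'Result' }]
    }
  };
}

const workItem = buildWorkItem('https://example.com/legacy.dwg',
                               'https://example.com/titleblock.dwt');
```

The activity is created once; one such work item is then submitted per DWG file.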

Now, both the input and output arguments are URLs, therefore the files must be in the cloud, either on A360, Dropbox, OneDrive, Amazon S3 (Simple Storage Service), etc., or you might open a connection to your files through your firewall. The files will not be saved by the Design Automation API; they are just processed and neither stored nor cached. Therefore, the operation is quite secure.

We played a while with the Design Automation API and wanted to see how stable and reliable it is. We tried to process an empty file, a corrupted DWG file (we deliberately removed some bytes from the DWG file), a picture renamed as DWG, a real DWG named with a different extension, and a DWG with X-Refs without uploading the X-Refs; we also tried timeouts and some other dirty things. The Design Automation API always came back with a solid and descriptive error message. Well done!!!

We wanted to know: in case we need to process thousands and thousands of files, can we rely on this thing? Do we get back meaningful error messages, so that we can point the customer to the problematic files with a clear problem description? So far the answer is yes! This is impressive, as the same type of task on the desktop would be a mess. We would need to deal with crashes, dialogs, and the like, and processing more than one file at once would mean dealing with parallelisation. With Design Automation, all this comes for free.

Btw, if you wonder whether just single DWG files or also DWG files with X-Refs and OLE references can be processed, the answer is of course yes! You can either send eTransmit DWG packages to the Design Automation API, where all the referenced objects are zipped or embedded, or the input arguments of the activity can accept a list of files.

In terms of processing lots of files, there is a so-called quota that limits the number of files you can process (5 per minute) and the execution time (5 min.). While the number of files processed per minute can be increased on demand, the execution time cannot, which makes sense, as any complex operation should be doable in less than 5 minutes. To better understand the number of files per minute: this limit is per user. So, let’s take the example from before, where we want to process thousands of files. If this happens under one user account, then 5/min. is the limit. But if we have, let’s say, 10 users under which the app runs, then each user has the 5/min. limitation, which results in processing 50 files per minute. If you want to know more about the quota, you can check out this link: https://developer.autodesk.com/en/docs/design-automation/v2/overview/quotas/
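The per-user arithmetic above can be sketched as a toy function:

```javascript
// Toy illustration of the per-user quota math described above:
// each user account gets its own work-item budget per minute.
function maxWorkItemsPerMinute(userCount, perUserLimit) {
  return userCount * perUserLimit;
}

console.log(maxWorkItemsPerMinute(1, 5));  // → 5  (single account)
console.log(maxWorkItemsPerMinute(10, 5)); // → 50 (ten accounts)
```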

The documentation states something about “no rollover”, which means that if you process just 3 files in a minute, you cannot process the spared 2 files in the next minute; and if you purchased credits for processing a certain number of files and don’t consume them, those will not be refunded or carried over to the next month or payment period. The pricing is still under evaluation, so this might change over time, or maybe not; let’s see. However, if a file is still processing when you reach the quota limit, it will still be processed.

Besides the limitations on volume and time, there are other security limitations to ensure that the application runs smoothly and does not affect or infect other files. The Design Automation API runs “sandboxed”: each session is independent from the others and limited in its capabilities. For example, the code you’ve submitted via the AppPackage cannot access the underlying operating system, start additional applications, or access the internet. Therefore, your AppPackage must contain everything you need for executing the code and cannot download additional stuff or run code outside of the Design Automation API. We did not have too much time, but of course we tried to jailbreak it, without success. Actually, we had the pleasure to meet a very cool Autodesk guy, Tekno Tandean, who is responsible for security for AutoCAD I/O, and we had a very interesting conversation about how security works with the Design Automation API. We can just say that Autodesk is taking this very seriously, and so far they have done a great job!!!

We also had the pleasure to meet Krithika Prabhu, senior product manager for AutoCAD I/O. She is very interested in enhancing the AutoCAD I/O feature set. At the moment there is a standard AppPackage for creating a PDF, so creating an activity for that is super simple. However, Krithika is looking at further opportunities to provide simple functions for typical scenarios, in order to make it very simple for us developers to get started with this API and get more done with less. I’m sure we are just scratching the surface and more cool stuff will come quite soon.

Our takeaway is that compared to the Data Management API, the Design Automation API looks way more mature, which is no surprise, as this API has been around for more than 2 years. It’s well documented with a lot of samples, and it’s really fun to work with. We were able to get the API working in a very short time. We can already see a lot of scenarios where this API can be used, be it for mass file operations, creating simplified workflows such as one-click configurations, custom conversion into other formats, and much more. While we are investigating which products we can create for you, in case you have any need in the area of AutoCAD and its verticals, let’s have a chat, as this stuff is pretty cool!

Posted in Forge | 1 Comment

Cloudy times…


This week we are attending the Forge Accelerator event in Munich, together with a lot of great Autodesk people and other Forge fans like us. Btw, Forge is the name under which Autodesk combines all the new cloud APIs. On https://developer.autodesk.com you can find all the available APIs and documentation, and on https://forge.autodesk.com you can find more details about Forge.

This week we took a closer look at the Data Management API, the Design Automation API and the Viewer API. Today I’ll share the outcome of our findings on the Data Management API.

The Data Management API deals with file objects: upload, download, setting references and the like. Surely it will soon be extended with more capabilities.

From a user perspective there is a bit of confusion, as there is A360 Drive, which is a pure file collaboration service similar to Dropbox, OneDrive, Google Drive, etc., then there is A360, and then there are BIM 360 Team and Fusion Team. For more clarity about the naming, have a look at https://blog.a360.autodesk.com/important-news-regarding-a360-team/.

A360 Drive has a desktop sync tool that syncs your files with it, however there is no API. A360 is accessible through the Forge Data Management API (that is what we will talk about here), and it now has a desktop sync tool called Autodesk Drive, which you can download through Fusion Lifecycle, as seen in this video. Actually, A360, Fusion Team and BIM 360 Team are just the names of the consumer applications; the technology underneath is all based on the Forge API. So, at the moment it’s all in evolution, but Autodesk is moving very fast, so if you read this blog post in a few months, it might no longer be valid, and the products, names and technologies may be well streamlined.

Anyway, let’s talk about the Data Management API. It allows you to create hubs, projects, folders, items (files), versions, and references. So, you can create your custom upload tool for bringing all your data into the cloud, the way you want. As an example, we thought about how we could bring the files from Vault into the cloud. The sync tool (now called Autodesk Drive) will just upload the latest version of your files, but what if you want to take over the whole history? A simple approach could be to export the data from Vault via BCP, which generates a dump of all the data on local disk as XML files, then process the BCP package and upload the files including their versions into A360. This way, you would get not only the latest version of your files, but also the whole history. This is pretty cool!
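A hedged sketch of the ordering step in that idea: the versions have to go up oldest-first, so that each upload becomes the next version of the same item. The field names below (name, version, path) are illustrative only and do not reflect the real BCP schema.

```javascript
// Sketch: order the versions of a file from a BCP export so they can be
// uploaded oldest-first via the Data Management API. Field names are
// illustrative, not the real BCP schema.
function orderVersionsForUpload(bcpEntries) {
  return bcpEntries
    .slice() // don't mutate the caller's array
    .sort((a, b) => a.version - b.version);
}

const plan = orderVersionsForUpload([
  { name: 'bracket.ipt', version: 3, path: 'C:/bcp/files/bracket_3.ipt' },
  { name: 'bracket.ipt', version: 1, path: 'C:/bcp/files/bracket_1.ipt' },
  { name: 'bracket.ipt', version: 2, path: 'C:/bcp/files/bracket_2.ipt' }
]);
// Each entry would then be posted in order: the first as a new item,
// each subsequent file as a new version of that item.
```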

We played with file versions and checked whether the files must be uploaded in the right sequence, or whether it’s possible to upload older versions at a later point in time. We could not make it work in the short time, but it seems to be possible – we will investigate further. We also played with references. We could recreate the references of an Inventor assembly, and they were visible in the web user interface; however, the references were not resolved, as the viewer was not able to display the assembly. It turned out that at the moment just the Fusion 360 references are supported. However, thanks to one of the Autodesk fellows, we got a sneak peek at an upcoming API extension where Inventor references are also supported. So, I guess that very soon the scenario we are looking for will be possible. Exciting!!!

Our takeaway is that the Data Management API already offers all the basic functions to create projects, upload files and versions, set references, etc. The current documentation is pretty good, there are already lots of examples, and from what we’ve seen, there is more to come – very soon.

Posted in Forge | 1 Comment

powerTree

Have you ever had the need to display, check and process structured data in a custom way? Let’s say you want to release a complete assembly and perform a series of custom checks and actions on that structure. Please welcome powerTree!

I know, we need to find a new technology so that we can define new product names. But meanwhile, our love (and obsession) for PowerShell should be well known. And it will grow, now that PowerShell has gone open source and is available on many platforms, and with PowerShell version 5, debugging capabilities have been introduced.

Anyway, back to our problem. In recent projects, we faced the need to navigate through a structure and perform custom checks and operations. In our particular case, we had to check a quite large assembly, including all drawings, and test whether it complies with all the company’s business rules. We actually had to release the complete assembly, including drawings, but in cases where for some reason the drawings could not be released, the parent assemblies could also not be released. The default Vault dialog cannot do this, so we had to invent something new.

Instead of doing something custom, we decided to create a little tool that gives us the flexibility to do this kind of customization in a flexible and repeatable way. So, we developed a dialog that shows data as a structure, with configurable actions. All the logic is once again in a PowerShell script, so that we can define and tweak the logic as we need, without affecting the dialog’s source code. This way, the dialog becomes useful in many different situations.

Here is the sample implementation from our project. First, we had to collect the structure of the assembly, including all the drawings. Second, we had to check the compliance of the components from the bottom up. Third, we had to perform cleaning actions on the problematic components.

For collecting the components with drawings, we had to integrate the drawings into the structure, in order to ensure that the upper-level assembly can only be OK if all the children and related drawings are OK too. As drawings are parents of the model, we switched the positions of drawing and component, so that the drawing becomes the child of the parent assembly and the model becomes the child of the drawing. OK, have a look at this picture, it makes it easier:

[screenshot: 2016-10-01_08-49-16]

In the example above, the assembly 10001.iam contains a child 10018.iam. However, the child 10018.iam has one or more drawings. Therefore, we switched the positions of model and drawing, so that 10018.idw becomes the child of 10001.iam. This way, if we now perform our check and the drawing 10018.idw has a problem, the top assembly 10001.iam cannot be released.

Another requirement was the ability to keep working in Vault while the dialog is open. So, in case the check finds errors, the user can fix the problems in Vault while keeping an eye on the list of problematic files.

The dialog can be extended with custom actions, for instance “Go To Folder”, which makes it simple to jump to the problematic file in Vault and fix the problem. However, any sort of action can be configured. The same applies to Check and Process: both buttons run a PowerShell function, so what should happen during the check and the process is up to you.

As mentioned earlier, you can load into the dialog whatever you want, the way you want. It just has to be a structure (a parent-child relation). The object types can be mixed. So, here for instance we load the folder structure and the according files in one dialog:

[screenshot: 2016-10-01_09-20-47]

Here is another example, where we load the file structure, as in the first case, but also load the according items:

[screenshot: 2016-10-01_12-31-49]

Technically, it’s also possible to combine external data sources. So, for instance in the case above, the items might come from the ERP system, in order to check whether every relevant component has a valid item.

So, the possibilities are endless. If you have the need to perform custom checks or custom operations over a custom data structure, then powerTree might be the cost-effective way. Talking about costs: this is not a product you can download or purchase. Given the complexity of the topic, we provide access to this tool in the context of a project. So, if you have the need for powerTree, just get in touch with us via sales@coolorange.com and ask about powerTree. We will then discuss your requirements with you, ensure that powerTree is the right tool for you, and define how the configuration will be done.

I hope I was able to give you an overview of what powerTree can do for you, and I look forward to your feedback.

 

Posted in Data Standard | 1 Comment

Let’s talk…


Over the past years, we have had the pleasure to provide you with information, tips, samples, code snippets, and first and foremost ideas on what can be done with Autodesk Data Management and other topics we are working with. This was and still is our primary goal – to light up your imagination on what the products and technologies you already have at your fingertips could do for you, with minimal or reasonable customization effort.

So far your response has been great, in terms of page views, subscriptions to this blog and comments. Thank you very much for following us and giving us the energy to continue on this path!

We recognized that we could do better, especially in reacting to comments. Most comments are contributions, corrections or just appreciations. Thanks again for all this support! Some comments are requests for help, as some code does not work as expected, or there have been problems in the interpretation or implementation, or maybe the code is outdated. Reacting to those comments via the comment section of the blog is a bit cumbersome. Therefore, we have activated a forum (http://www.coolorange.com/forum), where topics like this can be better discussed.

Thus, from now on, if you have a comment, contribution, correction or just want to share your appreciation, please continue to use the comment section of this blog. That is the right place for such comments.
In case you have issues with a posted idea or code sample, want to get into a discussion with us and others, or want to leave a comment, suggestion or idea that is off topic and does not fit any existing blog post, then the forum is the better place. There, we can have a relaxed and more extensive conversation. The conversation will still remain public, and so accessible to everyone, and thereby extends the content of the blog.

The forum can be reached via http://www.coolorange.com/forum. While the content is visible as a guest, you will have to sign up in order to contribute. I encourage every one of you to take advantage of this additional communication platform.

When using the forum, please keep in mind that the blog posts are deliberately kept short and simple, with the purpose of showing the way and creating interest and excitement. Therefore, the code published on this blog, although it does work, may not always be immediately applicable in a productive environment. Also, we pursue new topics and ideas, and we seldom update old topics to new versions. However, the forum could be a good place for discussing such things.

Also, keep in mind that both the forum and the blog are free (no costs), although the effort for writing the posts and following up on the forum is tangible. Therefore, in case you need personal and timely assistance, please take advantage of our paid support. They can help you by reviewing your code, doing a remote session to have a look at your machine, writing a custom code snippet, and more. If you need this type of professional support, just reach out via email to support@coolorange.com and they will provide more detailed information. Otherwise, use the forum to start the conversation.

Also, don’t forget to use the Autodesk Vault Customization forum for Data Standard related topics and other Vault customization issues. We are also quite active there. In case you’re not sure which forum to use for Data Standard questions, the Autodesk forum is always a good choice, as there is already a large VDS community there, it’s visible to Autodesk, and questions and discussions around VDS are quite interesting for a larger audience.

With our new forum, we really want to provide a better way to interact on coolOrange blog-related topics. We hope you will enjoy the new communication tool. We definitely will enjoy the conversation with you.

Posted in Uncategorized | Leave a comment

Debugging VDS


Are you running Windows 10? Then I have good news for you! You already have PowerShell 5 installed, and with it the capability of debugging Vault Data Standard step by step. You are not running Windows 10? Then continue reading, as PowerShell 5 can also be installed on earlier Windows versions.

With PowerShell 5, Microsoft introduced new cmdlets that let you connect to a running application, such as Vault, Inventor or AutoCAD, and debug the PowerShell runspace hosted by that application. By doing so, the according PowerShell script will be opened in your PowerShell editor, so that you can execute the code line by line, see the current state of variables, and see how the code behaves. This is brilliant if you have a nasty problem in your code and want to figure it out.

Let’s see how this works. Let’s suppose you’d like to debug the Vault Data Standard New File dialog. Start the PowerShell ISE and let’s connect to the Vault application. For this purpose, there is a new cmdlet Enter-PSHostProcess -Name <ProcessName>, where <ProcessName> in this case is Connectivity.VaultPro. You can enter Enter-PSHostProcess -Name Connectivity.VaultPro either in the command line or write the code in an empty script, select the line and execute it. You will notice that your command prompt changes from PS C:\> to something like [Process:9436]: PS… . This means that you are now connected to the PowerShell engine of the hosting application. The hosting application, in our case Vault, might work with several runspaces, which is the case for VDS for Vault. With Get-Runspace you can list the available runspaces. You may get something like this:

[screenshot: 2016-08-25_08-14-37]

As you can see, here we have several runspaces. One of them is called RemoteHost – this is our PowerShell editor, so we can ignore it. More interesting are the other runspaces. The question is: which runspace is the one we’d like to debug? In order to find the right one, we can use a trick. We bring up a message box from within the VDS code that tells us the runspace ID and also holds the code at that position, so that we have time to start the debug mode for that runspace. For this purpose, I wrote a little function called ShowRunspaceID, which looks like this:

function ShowRunspaceID
{
    # Show the name of the hosting process and the current runspace ID.
    # The message box also blocks execution, giving us time to attach.
    $id = [runspace]::DefaultRunspace.Id
    $app = [System.Diagnostics.Process]::GetCurrentProcess()
    [System.Windows.Forms.MessageBox]::Show("application: $($app.Name)" + [Environment]::NewLine + "runspace ID: $id")
}

Just create a new file, for instance ShowRunspaceID.ps1, copy-paste the code into it and save the file next to the Default.ps1 file. Restart Vault, so that the new file is loaded by VDS. Now you can call ShowRunspaceID at any place in your code. As an example, you can add it at the beginning of the InitializeWindow function, like this:

[screenshot: 2016-08-25_09-23-06]

Now create a new file or folder in Vault via VDS, and you should see the message box with the application name and the runspace ID. Now go back to your PowerShell editor and enter the command Debug-Runspace -Id <ID>, where <ID> is the ID provided by the message box. A short message should “welcome” you into debug mode. Now just close the message box, so that the code can continue. You will notice that the editor opens the PowerShell script file and highlights the line that is currently being executed. Now you can move on step by step with either F10 (step over) or F11 (step into). In order to exit debug mode, you can either stop debugging in the PowerShell editor or just type detach in the console.

In summary, these are the two cmdlets you need in order to start debugging your VDS code:

Enter-PSHostProcess -Name Connectivity.VaultPro
Debug-Runspace -id 50

Just set the process name and the runspace ID according to the text in the message box.

We hope this new capability will help you to quicker identify issues in your code or help you understand the behavior of your VDS environment. Enjoy!

 

Posted in Data Standard, PowerShell | 1 Comment

VDS 2017

Data Standard 2017 was released several weeks ago, and there have been some interesting improvements that prompted us to update the VDS quick reference guide. VDS 2017 brings a lot of little internal improvements, which will make customizing simpler. So far, there were quite some differences between VDS for CAD and VDS for Vault, and these differences have been streamlined.

Aside from the internal improvements, there is one new feature around the Save Copy As function in VDS for Inventor. In the early days, all save functions were mapped to the VDS dialog. So, regardless of whether you just saved, saved as, or saved a copy as, in all three cases the VDS dialog showed up. Early customers immediately demanded the ability to perform a Save Copy As without VDS, in order to save the files in different formats, and this is the behaviour as you know it today. Save and Save As are caught by the VDS dialog, while Save Copy As is still the original Inventor dialog where you can save the file in a different format, wherever you want.
With VDS 2017 you now have a new Save Copy As function within the VDS ribbon. It allows you to save, actually to export, your file in some other formats directly into Vault. So, the VDS dialog still shows up and asks you to complete the form, but it also allows you to save the file in other formats and stores the file directly in Vault. So, the original Save Copy As is still the same, and a new Save Copy As has been added. There is a very nice post on the cadline community page with a nice video that gives a good overview. If you’d like to know more about this feature, leave a comment, and we can dig into it in a separate post.

Let’s get back to the internal changes in 2017. You will notice that the _Category and the _NumSchm properties are now exposed in VDS for CAD as custom properties called VDS_category and VDS_NumSchm. This makes it simple to align the category chosen in CAD with the one assigned by Vault. Now you can map a user-defined property in Vault to the VDS_category property of Inventor and AutoCAD and use the new property for defining a simple rule.

A very important change that will reduce a lot of customization is the logic between category and numbering scheme. In case your numbering scheme has the same name as the category, it will be automatically preselected. Just try creating a category “drawing” with a numbering scheme “drawing” and a category “models” with a numbering scheme “models”. You will see that as soon as you select drawing as the category, the numbering scheme is set accordingly, and as soon as you switch to models, the numbering scheme changes too. You can still choose another numbering scheme, but in case both have the same name, the pre-selection is automatic. That simple!

A new internal property called _OriginalFileName has been added. It’s useful when a copy operation is performed and you want to know what the original file name was. So, via $Prop[“_OriginalFileName”].Value, you can now get the file name of the original file.

The internal properties _CreateMode and _EditMode are now available in VDS for Vault as well. So, now it’s simple to know which state the dialog is in.

Also, a lot of logic that was so far inside the XAML file has now been moved to the Default.ps1. This way, the logic is in PowerShell and the graphics are in XAML, and it should be easier to influence the behaviour. So, you’ll now find new PowerShell functions such as IsVisibleNumSchems, ShouldEnableFileName and ShouldEnableNumSchms, which define the standard behaviour of the numbering scheme. So far, the logic was inside the XAML with triggers and the like. Now the XAML points to these functions, and here you can easily define the behaviour.

One big change in VDS 2017 is the move from the mymenu.mnu to the MenuDefinitions.xml. This change had been a must for a while. By moving to an XML file, a lot of bugs with special characters and the like have been fixed, and the logic should now be simpler. You can find more details on the Autodesk knowledge page.

Working with VDS and custom objects should now be much simpler. You get a VDS dialog template (CustomObject.xml, CreateCustomObject.ps1 and EditCustomObjects.ps1) which can be used with any custom object. Just add the according menu item for your custom object in the MenuDefinitions.xml. In the MenuDefinitions.xml you’ll find a sample for a custom object called CustomObject :-). So, just duplicate the CustomObject element in the CommandSite section and rename the Location property to the name of your custom object. Of course, it would be better to also duplicate the menu items in order to have custom menu labelling.
For custom objects, a new automatic logic has been introduced. Just create a category with the same name as the custom object and you are done. The next time you create a custom object, that category is automatically set. In case you want to enforce that category, just set the combo box to read-only or inactive in the XAML file, and the user will not be able to change the category. In case you also have a numbering scheme with the same name, then that too is automatically assigned. Done!

There is one more thing! With VDS 2017 you now have the application and document objects available, for Inventor and AutoCAD. So, within your PowerShell script, you can access the native API via $Application and $Document. This allows you to interact more deeply with the hosting application. Of course, you will have to know the according application’s API, but if you do, then you have way more options now.

In my view, with this version of VDS, a lot of typical scenarios are now covered by the standard or can be accomplished in a very simple way.

Here is the updated Data Standard Quick Reference 2017. Have fun with VDS 2017! We already have…

 

Posted in Data Standard | 2 Comments