In a recent migration from another data management system (SmartTeam) to Vault 2013, we ran into some strange behavior. Opening the Inventor assembly from Vault worked fine. However, an "Open from Vault" from within Inventor downloaded only the assembly file into the local workspace, without its components, and Inventor showed the following message: "Inventor allows you to work in assemblies with unresolved components. However, updates that depend upon unresolved components will not solve correctly until all unresolved component(s) are resolved."

We use BCP for populating the Vault, and so far we had never experienced such behavior. It turned out that the redirection information was missing in the BCP file. For this migration we thought it was unnecessary, as the file names remained the same even though the folder locations changed. We expected Inventor to resolve the references, which it would do if Vault downloaded all the references. It was the first time we did a migration without redirection info. We thought we could take the shortcut – we were wrong!

In the Vault.xml file of the BCP package, you'll find for each Association element an attribute called RefId (see the two Association elements in the listing below).

<?xml version="1.0" encoding="utf-8"?>
<Vault sourceId="317c3bd0-f91e-4d5e-82b2-e79c4530ebca" xmlns="http://schemas.autodesk.com/pseb/dm/DataImport/2012-01-12">
  <Statistics TotalFiles="3" TotalFolders="3" />
  <Security></Security>
  <Behaviors></Behaviors>
  <Root>
    <Folder Name="Designs" Category="Folder">
      <Folder Name="test" Category="Folder">
        <File Name="Part2.ipt" Classification="None" Category="Base">
          <Revision>
            <Iteration Comment="" Modified="2013-05-06T06:27:27.213Z" LocalPath=".\redir-files-0\1-1.ipt" Id="_412" cspid="4">
            </Iteration>
          </Revision>
        </File>
        <File Name="Part1.ipt" Classification="None" Category="Base">
          <Revision>
            <Iteration Comment="" Modified="2013-05-06T06:27:27.2Z" LocalPath=".\redir-files-0\3-1.ipt" Id="_416" cspid="4">
            </Iteration>
          </Revision>
        </File>
        <File Name="Assembly1.iam" Classification="None" Category="Base">
          <Revision>
            <Iteration Comment="" Modified="2013-05-06T06:29:12.57Z" LocalPath=".\redir-files-0\5-1.iam" Id="_420" cspid="4">
              <Association ChildId="_412" Source="INVENTOR" RefId="2" NeedsResolution="false" Type="Dependency" />
              <Association ChildId="_416" Source="INVENTOR" RefId="1" NeedsResolution="false" Type="Dependency" />
            </Iteration>
          </Revision>
        </File>
      </Folder>
    </Folder>
  </Root>
</Vault>

The RefId describes the index of the corresponding internal Inventor reference (the referenceInfo). We already covered this topic in an earlier post. As long as neither the folder structure nor the file names of the Inventor files change, redirection is of course not needed. But if the folder structure changes, even though the file names remain the same, redirection apparently is needed. We ran some tests and could not find a difference in the association object within Vault; however, with redirection info everything works fine. Here are the tests we made:

1. We created an empty Vault, created an assembly with 2 parts, and uploaded the assembly via Inventor. We exported this package via BCP, deleted the files in Vault, and re-imported the package. Opening worked both from Vault and from Inventor.
2. We manipulated the Vault.xml file and removed the RefId attributes, deleted the files in Vault, and re-imported the package. Opening still worked both from Vault and from Inventor.
3. We manipulated the Vault.xml file again and renamed the folder in which the files are located, deleted the files in Vault, and re-imported the package. Bingo! Opening the assembly from Vault works, but opening the assembly from Inventor via "Open from Vault" brings up the "Inventor allows you to work…" message. The work folder contains only the assembly – no components!
4. As soon as we add the RefId back into the BCP package, everything works fine, even though the NeedsResolution attribute is set to false.

During these steps, we checked the FileAssoc object in Vault, and it looked almost the same in every test, except for the RefId attribute, which was either NULL (if omitted in the BCP package) or set to the according value. The other attributes remained the same.
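If you want to sanity-check a package before importing it, a few lines of PowerShell can do it. This is just a sketch, assuming the Vault.xml structure shown above with its default namespace; it lists every Association element that lacks a RefId attribute:

```powershell
# Sketch: find Association elements in Vault.xml without a RefId attribute.
# The namespace URI is the one from the listing above; adapt the path as needed.
[xml]$vault = Get-Content -LiteralPath .\Vault.xml
$ns = @{ d = "http://schemas.autodesk.com/pseb/dm/DataImport/2012-01-12" }
Select-Xml -Xml $vault -Namespace $ns -XPath "//d:Association[not(@RefId)]" |
    ForEach-Object { "Missing RefId for ChildId " + $_.Node.ChildId }
```

If the script prints anything, the package will likely show the resolution problem described above after a folder rename.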

The bottom line is: it takes a bit more effort to include the RefId in the BCP package, but it's worth the investment!

## Invoke Win32 API via PowerShell

.NET provides pretty good support for calling Win32 APIs, and there's a free Visual Studio add-in from Red Gate that helps you deal with the Win32 API: http://www.red-gate.com/products/dotnet-development/pinvoke/

But what do you do if you want to call Win32 APIs or your own .NET functionality from PowerShell?

As always, there is more than one way. You can write your .NET code in Visual Studio, compile it into a class library, and make it available in PowerShell using reflection:

[System.Reflection.Assembly]::LoadFrom("path\to\your.dll")

Another – cool – way is to compile your code directly into memory, without generating a DLL in the file system, and consume it from within PowerShell!

The following code sample shows how to accomplish this: we create a compiler function, embed the C# source code as a string, compile the code, and invoke the C# function from PowerShell:

##################################################################
# Compiler
##################################################################
function Compile-Csharp ([string] $code, $FrameworkVersion = "v4.0.30319")
{
    $provider = New-Object Microsoft.CSharp.CSharpCodeProvider
    $framework = [System.IO.Path]::Combine($env:windir, "Microsoft.NET\Framework\$FrameworkVersion")

    $references = New-Object System.Collections.ArrayList
    $references.AddRange( @("${framework}\System.dll", "${framework}\System.Core.dll"))

    $parameters = New-Object System.CodeDom.Compiler.CompilerParameters
    $parameters.GenerateInMemory = $true
    $parameters.GenerateExecutable = $false
    $parameters.ReferencedAssemblies.AddRange($references)

    $result = $provider.CompileAssemblyFromSource($parameters, $code)
    if ($result.Errors.Count)
    {
        $codeLines = $code.Split("`n")
        foreach ($ce in $result.Errors)
        {
            Write-Host "Error: $($codeLines[$($ce.Line - 1)])"
            $ce | Out-Default
        }
        Throw "Compilation of C# code failed"
    }
}

##################################################################
# C# Code
##################################################################
$code = @'
using System;
using System.Runtime.InteropServices;
using System.ComponentModel;

namespace CompileTest
{
public class Sound
{
[DllImport("User32.dll", SetLastError = true)]
static extern Boolean MessageBeep(UInt32 beepType);

public static void Beep(BeepTypes type)
{
if (!MessageBeep((UInt32)type))
{
Int32 err = Marshal.GetLastWin32Error();
throw new Win32Exception(err);
}
}
}

public enum BeepTypes
{
Simple = -1,
Ok = 0x00000000,
IconHand = 0x00000010,
IconQuestion = 0x00000020,
IconExclamation = 0x00000030,
IconAsterisk = 0x00000040
}
}
'@

##################################################################
# Compile the code and access the .NET object within PowerShell
##################################################################
Compile-Csharp $code
[CompileTest.Sound]::Beep([CompileTest.BeepTypes]::IconAsterisk)

If you want to use references other than "System" in your C# code, make sure to introduce them to the compiler by adding the DLL to the $references ArrayList. Also make sure to pass the appropriate framework version to the Compile-Csharp function. A list of available framework versions and the System*.dll files can be found here: C:\Windows\Microsoft.NET\Framework or C:\Windows\Microsoft.NET\Framework64

All in all, this is a quite simple and very cool way to provide .NET functionality within PowerShell. Happy coding!

By the way: make sure your speakers are turned on when running this example, since it produces a beep sound on your machine.
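As a side note: since PowerShell 2.0, the built-in Add-Type cmdlet performs the same in-memory compilation for you. A minimal sketch (the Greeter class is just an illustration, not part of the sample above):

```powershell
# Add-Type compiles the C# source in memory and loads the resulting assembly,
# so no DLL ever touches the file system.
Add-Type -TypeDefinition @'
namespace CompileTest2
{
    public static class Greeter
    {
        public static string Greet(string name) { return "Hello " + name; }
    }
}
'@

# The compiled type is immediately available:
[CompileTest2.Greeter]::Greet("PowerShell")
```

The hand-rolled Compile-Csharp function above still gives you more control over references and framework versions, but for quick experiments Add-Type is usually enough.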

## Let the system make your setup

It has already happened to me several times: I develop a nice tool or program, and later I start to create a small setup for it. I use WiX 3.5 for this and create a Setup.wxs where I add my files to the right folders. The setup builds perfectly – for now.

Later, when I have to make some changes in my code, I may have to add new third-party references to one of my projects. They have to be added to the Setup.wxs file too! When this scenario happens several times (and it will happen several times!), the day will come when you forget your setup. When debugging on your machine everything will be OK, but the customer who installs your setup will immediately see an error like this:

Unhandled Exception: System.IO.FileNotFoundException: Could not load file or assembly 'file://C:\....\log4net.dll' or one of its dependencies.
The system cannot find the file specified

To avoid this, and to save the time of bringing the setup back to the same level as your projects, you can use the harvesting technique and let the system generate your setup code.

The important thing here is that you have to keep your output directory as clean as possible! The way I wanted my setup to work is that all files and subfolders in my "bin\Release" directory end up in the installation directory after running the setup. This has the advantage that I can immediately see what the setup output will look like.

1) Create a global “bin\Release” directory that is the output for all your projects:

Change the output directory of all your projects to a global "bin\Release" directory. I would suggest not building .pdb files in Release mode for your assemblies (set Project Settings/Build/Advanced/Debug Info to None). You can now try to run your application from this folder. If DLLs are missing, you will notice quickly.

2) Create a Wix project and create all the directories you want in the setup-file

Create a WiX project with a Setup.wxs file where you define your install directory as usual. It could look like this:

<Directory Id="TARGETDIR" Name="SourceDir">
  <Directory Id="ProgramFilesFolder" Name="PFiles">
    <Directory Id="coolOrange" Name="coolOrange">
      <Directory Id="INSTALLDIR" Name="myApplication">
...

This is now very easy. Go to the project settings of your WiX project, to "Build". Define a preprocessor variable for the directory you want to harvest (it's the global bin\Release directory):

HarvestPath=..\bin\Release
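For orientation: heat will emit its Source attributes relative to this variable, so the generated authoring will contain fragments along these lines (illustrative only, the file name is invented):

```xml
<Component Id="cmpExample" Guid="*">
  <File Id="filExample" KeyPath="yes" Source="$(var.HarvestPath)\myApplication.exe" />
</Component>
```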

Now unload the WiX project and click "Edit …wixproj". Go to the end of the file and insert these lines:

<Target Name="BeforeBuild">
<HeatDirectory
Directory="..\bin\Release"
PreprocessorVariable="var.HarvestPath"
OutputFile="HeatGeneratedFileList.wxs"
ComponentGroupName="HeatGenerated"
AutogenerateGuids="true"
DirectoryRefId="INSTALLDIR"
ToolPath="$(WixToolPath)"
SuppressFragments="true"
SuppressRegistry="true"
SuppressRootDirectory="true" />
</Target>
</Project>

This will call heat.exe for you. Heat will look into your directory and create a .wxs file with all the files and subfolders in it. Now reload the WiX project and build it. The build will fail, but heat will be called and will create a file named HeatGeneratedFileList.wxs.

Include this WiX file in your WiX project and open your Setup.wxs file. Go to the "Feature" section and insert the name of the component group that you passed to heat. It should look like this:

<Feature Id="Complete" Level="1">
  <ComponentGroupRef Id="HeatGenerated"/>
  ...
</Feature>

Now, because you have added the component group to the setup, your build will run and create a nice setup for you. Don't care about renaming assemblies or adding and removing references in your projects: the setup will take the whole Release directory and harvest it for you. The installation directory will look exactly like your project output folder. The system does the work for you now!

Posted in Visual Studio | Leave a comment

## Handle SoapException 300 „BadAuthenticationToken“ in Vault Extensions

Did you or your customers ever run into a SoapException 300 when using a custom command of a Vault add-in you have developed? Usually, you should not see this exception unless you call an "iisreset" on the server. By default, the authentication ticket that is used for initializing the WebServiceManager or IExplorerUtil becomes invalid after 29 days. That's a rather long vacation, and not everybody keeps Vault Explorer running for such a long time. But on the customer site, the "Recycling Regular Time Interval" is probably set to a much smaller value. For more details see: http://crackingthevault.typepad.com/crackingthevault/2009/04/badauthenticationtoken-300.html

Anyway, a custom command should behave correctly when this exception occurs.
First of all, can we perhaps avoid this exception? Typically, we use the WebServiceManager class for web service calls, or sometimes the IExplorerUtil interface for other useful functions. This is how instances of these classes can be initialized:

private WebServiceManager _webSvcMgr;
private IExplorerUtil _explorerUtil;

void IExtension.OnLogOn(IApplication application)
{
    InitializeWebServiceManager(application);
}

void InitializeWebServiceManager(IApplication application)
{
    if (_webSvcMgr != null)
        _webSvcMgr.Dispose();

    // Use the IApplication object to create an IExplorerUtil object.
    _explorerUtil = ExplorerLoader.GetExplorerUtil(application);

    // Use the IApplication object to create a credentials object.
    var cred = new UserIdTicketCredentials(
        application.VaultContext.RemoteBaseUrl.ToString(),
        application.VaultContext.VaultName,
        application.VaultContext.UserId,
        application.VaultContext.Ticket);

    // Use the credentials to create a new WebServiceManager object.
    _webSvcMgr = new WebServiceManager(cred);
}

The WebServiceManager allows automatic re-sign-in, which takes care of the SoapException 300 – but only if the credentials provided support re-sign-in. UserIdTicketCredentials does not, and IExplorerUtil does not support re-sign-in either. So, initializing the WebServiceManager only once in the OnLogOn() function is probably not the best idea. When the custom command is invoked, a CommandItemEventArgs object is passed in. It provides the IApplication object, which can be used to initialize the WebServiceManager every time a user runs your custom command:

void cOCmdItemExecute(object sender, CommandItemEventArgs e)
{
    try
    {
        IApplication application = e.Context.Application;
        InitializeWebServiceManager(application);
        ...

Unfortunately, this doesn't avoid SoapException 300 in all cases. If the user calls the custom command (e.g. from a context menu) after the timeout, on an already selected item in a list view, the IApplication object that gets passed in does not have a valid authentication ticket. That means we cannot avoid the exception, and there is also no way to do the re-sign-in within our custom command. Therefore, we need to catch the exception and tell the user to perform the operation again. To make sure it works the next time, we request a refresh by setting the ForceRefresh property of the context to true. This causes Vault Explorer to re-sign-in and to call the OnLogOn() function of each Vault add-in again:

void cOCmdItemExecute(object sender, CommandItemEventArgs e)
{
    try
    {
        IApplication application = e.Context.Application;
        InitializeWebServiceManager(application);

        Folder rootFolder = _webSvcMgr.DocumentService.GetFolderRoot();
    }
    catch (SoapException se)
    {
        if (se.Detail["sl:sldetail"]["sl:errorcode"].InnerText.Trim() == "300")
        {
            MessageBox.Show("Please refresh and try performing the operation again!");
            e.Context.ForceRefresh = true;
        }
    }
}

Posted in Vault API | 1 Comment

## Eventhandler and Replication – Best practice

Hey, some time ago I faced some problems with the Vault API in an event handler. As always, the problems were discovered only on the customer side. After some analysis, we found the reason for the issues: the problems occurred only in a replicated environment. Here is my event handler class for the Vault 2013 server:

[assembly: ApiVersion("5.0")]
[assembly: ExtensionId("some guid here")]

public class Eventhandler : IWebServiceExtension
{
    internal EventHandler<CheckinFileCommandEventArgs> CheckInEventhandl;
    internal EventHandler<AddFileCommandEventArgs> AddEventhandl;
    internal EventHandler<DeleteFileCommandEventArgs> DelEventhandl;
    internal EventHandler<MoveFileCommandEventArgs> MoveEventhandl;
    private WebServiceManager _webSvcMgr;

    public void OnLoad()
    {
        AddEventhandlers();
    }
    ...
    void AddFileEventPost(object sender, AddFileCommandEventArgs e) { ... }
    void CheckinFileEventPost(object sender, CheckinFileCommandEventArgs e) { ... }
    void MoveFileEventPost(object sender, MoveFileCommandEventArgs e) { ... }
    void DelFileEventPre(object sender, long fMid, DeleteFileCommandEventArgs e) { ... }

    void DelFilesEventPre(object sender, DeleteFileCommandEventArgs e)
    {
        foreach (var fileMasterId in e.FileMasterIds)
        {
            DelFileEventPre(sender, fileMasterId, e);
        }
    }
    ...

    private void AddEventhandlers()
    {
        CheckInEventhandl = CheckinFileEventPost;
        AddEventhandl = AddFileEventPost;
        DelEventhandl = DelFilesEventPre;
        MoveEventhandl = MoveFileEventPost;

        DocumentService.CheckinFileEvents.Post += CheckInEventhandl;
        DocumentService.AddFileEvents.Post += AddEventhandl;
        DocumentService.DeleteFileEvents.Pre += DelEventhandl;
        DocumentService.MoveFileEvents.Post += MoveEventhandl;
    }

    private void RemoveEventhandlers()
    {
        DocumentService.CheckinFileEvents.Post -= CheckInEventhandl;
        DocumentService.AddFileEvents.Post -= AddEventhandl;
        DocumentService.DeleteFileEvents.Pre -= DelEventhandl;
        DocumentService.MoveFileEvents.Post -= MoveEventhandl;
    }
}

(And I have installed the latest version of the SDK assemblies: v 17.0.62.0.)

Problem 1)

When I work with my Vault Explorer client in the replicated environment (it makes no difference whether I work on the publisher or on the subscriber), I realized that some of the events get fired twice! This is the result of my analysis:

- AddFileEvent (POST): fired 2 times
- CheckinFileEvent (POST): fired 1 time
- MoveFileEvent (POST): fired 2 times
- DeleteFileEvent (PRE): fired 1 time

I also noticed that the two firings always have different event IDs (GUIDs). This problem is reproducible and happens only in a replicated environment!

Problem 2)

Scenario: the PUBLISHER adds a file and takes ownership of it. The SUBSCRIBER changes the ownership of a folder to the subscriber. The PUBLISHER then moves his file to that folder. In this scenario the move is blocked with an error message, but the MoveFile event was fired anyway! This means we have a problem: the file does not get moved, but the event was fired!

Solutions:

Problem 1)

In fact, we are talking about an API issue here, for which we have a workaround: you have to subscribe to the pre-event AND the post-event. In the pre-event you remember that it was called; later you can check whether you are running in the first post-event, and the second post-event can be terminated. Do it like this:

1) Create an internal variable where you cache the pre-events:

public class Eventhandler : IWebServiceExtension
{
    internal Dictionary<long, FileInfos> DeleteFileCache;
    ...

2) Subscribe to the pre-event as well and create a cache functionality for it:

void DelFilesEventPre(object sender, DeleteFileCommandEventArgs e)
{
    foreach (var fileMasterId in e.FileMasterIds)
    {
        DelFileEventCache(sender, fileMasterId, e);
    }
}

void DelFileEventCache(object sender, long fMid, DeleteFileCommandEventArgs e)
{
    SetWebSrvMngr(sender);
    if (!DeleteFileCache.ContainsKey(fMid))
    {
        DeleteFileCache.Add(fMid, FileInfo);
    }
    else
    {
        Logger.Log.WarnFormat("A file with this MasterId was already found in the cache (Mid: {0})! Stopping ...", fMid);
    }
}

3) Now create the post functionality and check whether the pre-function has already been fired. If we are in the second post-event, just return:

void DelFilesEventPost(object sender, DeleteFileCommandEventArgs e)
{
    foreach (var fileMasterId in e.FileMasterIds)
    {
        DelFileEventPost(sender, fileMasterId, e);
    }
}

void DelFileEventPost(object sender, long fMid, DeleteFileCommandEventArgs e)
{
    if (!DeleteFileCache.ContainsKey(fMid))
    {
        Logger.Log.ErrorFormat("Failed to find the file in the cache (Mid: {0})! Stopping ...", fMid);
        return;
    }
    DeleteFileCache.Remove(fMid);

    // do your stuff here
    ...
}

Problem 2)

Always check the Status property on the EventArgs during a post-event.
See here:

void CheckinFileEventPost(object sender, CheckinFileCommandEventArgs e)
{
    if (!CheckEventStatus(sender, e))
        return;
    ...
}

private bool CheckEventStatus(object sender, WebServiceCommandEventArgs eventArgs)
{
    if (eventArgs.Status != EventStatus.SUCCESS)
    {
        return false;
    }
    return true;
}

I hope this was helpful for you as a Vault event handler developer.

Posted in Vault API, Visual Studio | Leave a comment

## Vault webservice trace

In many applications, like MS Word and the like, you have the ability to record macros. This simplifies the development of your own code a lot, as you can see the code generated by the application and derive from it the code you would like to write. Wouldn't it be nice to trace the Vault API calls in order to understand which functions and which arguments are required, so that you can write your own code without starting from scratch? Well, Vault does not offer macro recording, but you can activate the webservice trace, which records all the API calls from the client to the server. This article explains how to activate the trace and how to filter the huge amount of data with a few lines of PowerShell.

The trace can be activated on the client side or on the server side. The difference is obvious: the client-side trace will only record the calls from that specific Vault client, while the server-side trace will record the calls from every client. The server trace might be interesting if you like to trace non-Vault clients, like CAD applications or custom applications. However, the log file will become big, and it slows down the system. In our case we activate the client trace. The file we will modify is called Connectivity.VaultPro.exe.config, where "VaultPro" changes based on your Vault flavor. The file is usually located under c:\Program Files\Autodesk\Vault <Flavor> <Version>\Explorer. A simple Notepad will be fine for editing, but as this is an XML file, an XML editor would be better.
Look for this section:

<microsoft.web.services3>
  <messaging>
    <maxMessageLength value="51200"></maxMessageLength>
    <mtom clientMode="On" />
  </messaging>
  <security>
    <!-- Specifies the time buffer used by WSE to determine when a SOAP message is valid.
         Set to the max of 24hr in seconds -->
    <timeToleranceInSeconds value="86400" />
  </security>
  <diagnostics>
    <!-- this only works if the AutodeskVault user has write permission to the "Web\Services" directory.
         After an install, the AutodeskVault user only has read access. -->
    <trace enabled="true" input="c:\temp\traceServerIn.log" output="c:\temp\traceServerOut.log" />
  </diagnostics>
</microsoft.web.services3>

The interesting element is <trace … />. It is probably missing in your file, so just add it as shown above. Set the enabled attribute to true, and define with the input and output attributes the location and names of the log files. Save the file and restart your Vault client. You will then see these files being created and growing. Click around in your Vault and you will trace the API calls. The output file describes the API calls that go out from Vault with their arguments, while the server responses with the according objects are in the input file. If you open the output file, you will notice a lot of inputMessage elements, each of which has processingStep elements. The last processingStep contains a soap:Envelope with a soap:Body, whose child element is the API call with the arguments as children. Every inputMessage has a messageId, which is the unique ID of this message. In the input file you'll find a similar structure, and the response carries in its soap:Header a wsa:RelatesTo element with the same message ID as the command that was sent. So, via this ID you can match the response to the according call.
The following PowerShell script reads the two trace files. It searches the output file for each processingStep that has the attribute description='Processed message' and picks the child element of soap:Body. As we are working with XML files, we use a quite powerful technique called XPath, which allows us to filter the XML for specific elements with certain criteria. A good introduction to XPath can be found here. For each API call we find, we take the message ID and look up the according response in the input file. The script prints out all the API calls with their arguments and the response from the server.

[xml]$xmlOut = Get-Content -LiteralPath c:\Temp\traceServerOut.log
[xml]$xmlIn = Get-Content -LiteralPath c:\Temp\traceServerIn.log

$bodies = Select-Xml -XPath "//processingStep[@description='Processed message']//soap:Body/*" -Xml $xmlOut -Namespace @{"soap" = "http://schemas.xmlsoap.org/soap/envelope/"}
foreach ($body in $bodies)
{
    $messageId = $body.Node.ParentNode.ParentNode.ParentNode.ParentNode.messageId
    $body.Node.Name
    foreach ($child in $body.Node.ChildNodes) { "  " + $child.Name + "=" + $child.InnerText }

    $response = Select-Xml -XPath "//soap:Header[wsa:RelatesTo='$messageId']/../soap:Body/*" -Xml $xmlIn -Namespace @{"soap" = "http://schemas.xmlsoap.org/soap/envelope/"; "wsa" = "http://schemas.xmlsoap.org/ws/2004/08/addressing"}
    "  Response: " + $response.Node.Name
    foreach ($child in $response.Node.FirstChild.ChildNodes) { "    " + $child.Name + "=" + $child.InnerText }
}

Some functions have simple arguments like a string or a number, but others have arrays or objects, which in the XML notation generate a structure with several levels. The script above only reads the first level of that structure, so you may see that the arguments of some functions are incomplete. But the script is short enough, and still powerful enough, to give you a fast overview of the functions called.
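If the full dump is still too noisy, you can also filter the collected calls by name. The following sketch (the name pattern is just an example) prints only the matching calls:

```powershell
# Show only the API calls whose name matches a pattern;
# $bodies comes from the script above.
$bodies |
    Where-Object { $_.Node.Name -like "*FindFiles*" } |
    ForEach-Object { $_.Node.Name }
```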

Just adapt the paths to your trace files and let the script run. You will see in the output window a summary of the API calls, their arguments, and the server responses. With this information you can now create your code, taking advantage of the calls made by Vault. For instance, if you like to figure out how to find the files associated to an item, you could start the trace, navigate to your item, and look at the item tab with the linked files. In order to show this information, Vault called several commands, which are in the trace, and you can pick the commands of interest.

I hope this helps you when you create your next Vault extension.

Posted in PowerShell, Vault API | 2 Comments

## Approaching a myView project

myView is an extension for Inventor, AutoCAD and Vault. It's a customizable dialog that helps users capture relevant information based on their company rules. Here is a short video. It is a quite popular add-on for companies that like to bring more structure into their Vault. In order to simplify the implementation of myView, we created a specification template that might help you in your future myView projects. The template can be downloaded here. The template is split into 4 parts:

1. The configuration: before you start designing the UI, the set of required fields must be defined. For every piece of information the user will enter, view or edit, a corresponding myView field must be defined. By completing the according table in the template, you describe the field type, which fields are mandatory, the according label, and so on.
2. The user interface: I know, I know. Usually you would start by drawing some lines and placing some controls around, as this gives you the feeling of seeing progress. Please complete the configuration part first, and take the time to specify the user interface. Let's define how the fields belong together. In order to structure the myView dialog, the fields will be grouped into logical boxes – group boxes. Additionally, fields like drop-down boxes need a data source that defines which values shall be shown. There might be some fields that depend on each other: for instance, a computed field that generates a value based on other fields' values, or drop-down boxes that show different values based on the value selected in another field. Once all this information is defined, it becomes very easy to design the UI.
3. The logic: in almost every project, myView shall enable and disable certain fields for editing, depending on specific rules. In simple cases you may want to have different configurations for the different file types. In other cases, the fields available for editing shall react dynamically to user input. So, take the time to define the behavior of the myView first. Additionally, you will also define how the file name shall be computed and the location where the file shall be saved.
4. The specialties: the 3 items above cover the topics that are common to every project. But as every project has some specific requirements, this is the section where those will be defined. Here you may have some special functions or special behavior. The important thing is not to mix these topics with the 3 items above.

Having a clear understanding of the configuration, the user interface and the logic will ensure that the foundation of myView is solid. Additionally, by implementing the first 3 items, you'll have a running myView in a short time. This allows you to gather customer feedback early on, before you go too deep into the details. So, myView projects can be quite fun and successful if approached from the right perspective. I hope this post and the related template will help you to successfully implement your myView. And as usual, in any case, we are glad to help.