Thursday, April 18, 2019

#MSDYN365BC: Building a Development Environment for Microsoft Dynamics GP ISVs Part 3/3

In Part 2 of this series, we covered the full installation of Docker Desktop, used to run the Dynamics 365 Business Central containers. We also saw how to use PowerShell to enable both the Hyper-V and Containers features on Windows 10.

This article will focus on the installation and troubleshooting of the Dynamics 365 Business Central containers and will provide step-by-step instructions on how to accomplish this. Remember, there are quite a few resources out there, so here they are:

Get started with the Container Sandbox Development Environment
Running a Container-Based Development Environment

But the goal of this series is to help Microsoft Dynamics GP ISVs draw similarities and contrasts with their multi-developer Microsoft Dexterity development environments.

Now that Docker has been installed, we can proceed to lay down the BC containers. This will create a fully virtualized environment with all the BC components needed for development purposes. In GP terms, this equates to having a full environment with Microsoft Dynamics GP, Web Client, IIS, and SQL Server in place for developers to code against.

Business Central Containers Installation and Troubleshooting

1. To begin the installation, we must install the NavContainerHelper PowerShell module from the PowerShell Gallery, which contains a number of PowerShell functions that help run and interact with the BC containers.

See NavContainerHelper from Freddy Kristiansen for additional information.

Install-Module NavContainerHelper -force
In the process of installing the NavContainerHelper module, you will be asked to add the latest NuGet provider so that published packages can be retrieved. After installing the NuGet provider, I went to import the NavContainerHelper module and ran into the following error, advising me that running scripts was disabled on the system I was installing on.

By running the Get-ExecutionPolicy command, I was able to identify that all PowerShell execution policies on my machine were set to Undefined, which in turn prevents unsigned scripts from being executed.

Since I was installing this on my local machine, I simply wanted to bypass any restrictions within the current user scope.
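As a sketch of what I ran (Bypass is the least restrictive option; RemoteSigned also works if you prefer to keep some protection):

```powershell
# Inspect the execution policy for every scope; on my machine they all
# reported Undefined
Get-ExecutionPolicy -List

# Allow scripts for the current user only, leaving the machine policy alone
Set-ExecutionPolicy -ExecutionPolicy Bypass -Scope CurrentUser
```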

2. With the installation of the NuGet provider and the changes to the script execution policies in place, it was time to call Import-Module to add the NavContainerHelper module.

Importing the module is a quick step.
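For reference, the import itself is a single command:

```powershell
Import-Module NavContainerHelper

# Optionally confirm the module loaded and see what functions it exposes
Get-Command -Module NavContainerHelper
```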

3. Finally, it's time to create the BC containers. This is done by calling the New-NavContainer function (from the NavContainerHelper module). You will be prompted to create a user name and password to access the container and BC once installed. Here's the full call:

New-NavContainer -accept_eula -containerName "Demo-bc" -accept_outdated -imageName "microsoft/bcsandbox:us" -auth NavUserPassword -includeCSide -UpdateHosts -doNotExportObjectsToText

4. The container files are downloaded to disk and extracted.

5. Once all the files are extracted, the container is initialized by Docker. If all goes well, you should see a message letting you know that the container was successfully created.

Container created successfully
If you close the PowerShell window, you will notice a new set of icons on your desktop that will allow you to load BC running on the container, as follows:

  • Demo-bc Web Client: shortcut to the BC web client application
  • Demo-bc Command Prompt: access to the container command prompt
  • Demo-bc PowerShell: access to the PowerShell prompt running on the container
  • Demo-bc Windows Client: launches the Microsoft Dynamics NAV on-premises client
  • Demo-bc WinClient Debugger*
  • Demo-bc CSIDE: launches the CSIDE development environment for BC.

Desktop after a successful BC container deployment
Double-click on the Demo-bc Web Client icon to test the container deployment.
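If the web client does not come up, a couple of Docker commands can help confirm the container's state (Demo-bc being the container name used above):

```powershell
# Confirm the container is up
docker ps --filter "name=Demo-bc"

# Review the container's log output for errors during initialization
docker logs Demo-bc
```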

With the installation of Docker and the BC containers, we have completed all the supporting environment setup. Be sure to play around with the new options, in particular with both the BC web client and Windows client components. It is important that you begin to gain an understanding of the functional aspects of the application before you embark on developing for this platform - nothing different from what you already did for Dynamics GP.

We are not quite done here, but since I am supposed to be a rational human being who respects the number of parts I chose for this series, I will start a new series showing how to add Visual Studio Code and how to select and connect to a source control repository, to close out this topic, so bear with me.

Until next post!

Mariano Gomez, MVP

Friday, April 12, 2019

#MSDYN365BC: Building a Development Environment for Microsoft Dynamics GP ISVs Part 2/3

In Part 1 of this series, I outlined the principles and detailed the reasoning behind why we chose to build our Microsoft Dynamics 365 Business Central development environment using Windows Docker containers.

In the Dynamics GP world, we are not quite used to containers, so let me start with the definition, straight from the horse's mouth (so to speak). According to the wizards over at Docker, "A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another. A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings".

The first thing to highlight from the definition is "standard unit of software". In fact, that's key to this whole thing! Standardization ensures that every developer in the organization is building and testing code against the same reliable environment. In the Dynamics GP world, although we have the ability to build stable, reliable development environments, consistency is not always something we can achieve easily, unless we are using desktop virtualization, which intrinsically poses its own challenges.

But this article is about installing Docker. So let's get to it.

Installing Docker

Windows 10 Anniversary Update (build 1607) saw the introduction of Windows containers, a feature that allows you to install and deploy Docker and other container virtualization technologies. Follow these steps to complete a successful installation of Docker.

NOTE: from now on, most of the work will be done in PowerShell.

Enable Windows Containers feature

1. Open Windows PowerShell (not PowerShell ISE) with elevated permissions. Click Start, type "PowerShell", and choose "Run as Administrator" to continue.

2. You must first enable Hyper-V. In PowerShell type the following command:

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All

Enable Hyper-V via PowerShell

NOTE: If you previously installed Hyper-V, you must first uninstall it, then reinstall it using PowerShell. This means you need to back up all your Hyper-V images prior to running this command.

It is recommended to reboot your machine after this operation to allow all components to be properly registered.

3. Upon rebooting your machine, open PowerShell once more, with elevated permissions, and type the following command:

Enable-WindowsOptionalFeature -Online -FeatureName Containers -All

Enable Windows Containers via PowerShell
NOTE: If you previously installed Windows Containers, you must first uninstall it, then reinstall it using PowerShell. This means you need to back up all your container images prior to running this command.

It is recommended to reboot your machine after this operation to allow all components to be properly registered.

Download and Install Docker

1. To begin the Docker installation, go to the Docker website (docker.com) and choose Products | Docker Desktop.

Products | Docker Desktop

If you don't have an account already, you must create one with some basic info (user name, password, and email) in order to download Docker. Next, confirm your email address by clicking the link you will receive in the inbox associated with the Docker account you created. You can now log into Docker Hub and download the Docker Desktop for Windows engine. By default, it will be placed in your Downloads folder, unless your browser has been configured differently.

2. Once you've gotten through the account validation and download process, proceed to run the Docker installer (installer.exe). Upon launching the installer, the process begins by downloading a number of installation packages.

3. During the configuration screen, you will be prompted to select whether you want to run Windows containers or Linux containers. The choice here should be obvious, but you can change this after the fact.

4. Upon clicking OK, the installer begins to unpack all files accordingly.

5. If everything goes as expected, you will be asked to sign out and sign back into Windows.

6. After signing into Windows, the service will start and you will be presented with a window to enter your Docker account information. This, according to Docker, is to track application usage.

7. I don't know if this is a bug in the installer, but even after selecting to run Windows containers in step 2, I had to manually right-click on the Docker task bar item and select to switch to Windows containers.
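For what it's worth, the switch can also be done without the tray icon, via the CLI that Docker Desktop ships (the path assumes the default install location):

```powershell
& "C:\Program Files\Docker\Docker\DockerCli.exe" -SwitchWindowsEngine
```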

It is always good to test Docker to ensure everything is functioning as expected. For this, we can turn to PowerShell once more and execute either of the following two commands:

docker --version
docker info

Docker version and information commands
These steps conclude the installation of Docker. In the next installment, we will deploy the actual Microsoft Dynamics 365 Business Central containers and prepare you for what's next.

Hope you find this useful.

Until next post!

Mariano Gomez, MVP

Wednesday, April 10, 2019

#MSDYN365BC: Building a Development Environment for Microsoft Dynamics GP ISVs Part 1/3

This is my first foray into the world of Microsoft Dynamics 365 Business Central (BC) development and this series of articles is meant to help Microsoft Dynamics GP ISVs understand the process of building a BC development environment, identify similarities with a Dynamics GP development environment, and fully utilize your accumulated experience. Yes, there's tons of literature out there, but none have the perspective of a GP ISV, so there's that 😋

It is worth noting that I am a 20+ years Microsoft Dexterity developer and, as we say, "we do things a little different around here" in the Dex world, but I am very excited to be initiating this new chapter in my career.

As you all know, at this point in my life I manage the Software Engineering team at Mekorma, and as a long time Microsoft Dexterity developer, there were a few things I knew I wanted out of this new development environment:


From an engineering perspective, this means that each developer needs the ability to author and unit test code locally, while ensuring changes are managed centrally in our Azure DevOps source code repository. This is how we've always done it in the GP world and I did not want my engineering and development team to have to learn new paradigms or think differently about the actual process.

I know, I know... a lot of you prefer to have development images in the cloud and have developers connect to those images and develop from there. This is a personal preference and you need to evaluate what works for your development team. In our particular case, we don't want to be reliant on internet connectivity for a developer to do their work. Some of the best pieces of code have been created while folks are sitting at the beach sipping pina coladas, or in a cabin in the mountains, so there's that.
Ease of Deployment

One of the things I truly dislike about the process of building Microsoft Dynamics GP integrating applications, after all these years, is the need to have several versions of Dynamics GP and Dexterity installed on each developer's machine, depending on the release of GP being targeted. If your company is anything like ours, as of this writing, we support anything from GP 2013 R2 to GP 2018 R2 and everything in between. That's a lot of software!

Having all these instances of GP involves a lot of application installation, service packs, etc., not to mention SQL Server and a variety of versions and builds of your own product, which quickly adds up in terms of time and productivity.

NOTE: We have simplified a lot of these headaches by having a single code base source code repository of our products for all versions of Dynamics GP, but it still does not mitigate the effort of installing all GP versions.

For BC, we wanted something self-contained, much simpler to maintain, that could easily be folded and recreated if needed, without burdening the developer with long winded software installations.


Paramount to the development environment is the ability to add features to different versions of BC without having to do any sophisticated branch management. With Dexterity, you have to branch the whole project, not just specific components, in order to move to the next build. This is an issue because, over time, there are too many branches to manage. The idea of only branching the software components to be enhanced sounded very appealing, making the development environment and process resilient in the long run.

Given all these requirements, we opted to deploy Business Central Docker images, as this would provide the best of all worlds. We also reserved the online sandbox for our Sales and Support teams to test and learn new product features, while allowing us to continue developing and testing without interruptions.

The first task at hand, then, is installing Docker and downloading the BC container images. To keep each topic separate, please read Part 2 in this series.

Until next post,

Mariano Gomez, MVP

Thursday, April 4, 2019

#MSDYNGP: "Database must be compatibility level 130 or higher for replication to function" when setting up #MSDYN365BC Intelligent Cloud sync

Recently, I've been honing my Microsoft Dynamics 365 Business Central (BC) skills, without leaving my beloved Microsoft Dynamics GP behind. One of the things I have been working on is making sure customers understand the BI insights gained via data replication between the two systems. As a result, I work through the replication configuration a few times a month.

Yesterday, I removed a previous Fabrikam company created via replication from BC and attempted a new replication. If you are not familiar with the configuration of the data replication process between GP and BC, I will be creating a video on this soon, so please stay tuned.

NOTE: The integration runtime service has also been updated, so you will probably need to download a new version.

After setting up the Integration Runtime Service and clicking Next to establish the connection between Intelligent Cloud and my on-premises GP, I received the following error:

"SQL database must be at compatibility level 130 or higher"

Knowing what the error meant, I realized my on-premises database server was SQL Server 2014, which happens to be the minimum database server requirement for Microsoft Dynamics GP 2018 R2. I couldn't change the system database compatibility level to 130, as that would require upgrading to SQL Server 2016.

The caveat, however, is that this replication was working at compatibility level 120 prior to my attempt at a new sync last night.
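If you want to verify where your own databases stand, the compatibility level can be queried directly; here's a sketch assuming the SqlServer PowerShell module, a local default instance, and the Fabrikam (TWO) company database:

```powershell
# Check the compatibility level of the system and company databases
Invoke-Sqlcmd -ServerInstance "localhost" -Query "
    SELECT name, compatibility_level
    FROM sys.databases
    WHERE name IN ('DYNAMICS', 'TWO');"

# On SQL Server 2016 or later, the level could then be raised with:
# ALTER DATABASE [TWO] SET COMPATIBILITY_LEVEL = 130;
```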

In doing some research and bouncing around a few emails, I was directed to the following article on the Community website:

Troubleshooting the Intelligent Cloud

The article seems to indicate that compatibility level 130 has been a requirement since the January 2019 release, but it also seems to suggest that this applies only to the NAV / BC replication process, not GP. In fact, as I mentioned before, just a couple of weeks ago I was able to create the replication with compatibility level 120.

As it so happened, my attempt to replicate Fabrikam was on April 2, 2019, which coincided with the April '19 release launch. As it turned out, this particular BC release introduced Intelligent Cloud synchronization for GP historical data. Since this version of the sync uses JSON to track changes between the previous sync and the current one being executed, it requires databases to be at compatibility level 130 at the very least. This requirement wasn't fully documented in the April '19 release notes, but then the release notes aren't always 100% complete at the time of posting either.

With that said, customers need to be aware that historical data replication will require Microsoft SQL Server 2016 at the very least. These changes will be documented in the April '19 release notes and an entry will be added to the GP 2018 system requirements page.

Hope you find this information useful.

Until next post,

Mariano Gomez, MVP

Tuesday, March 26, 2019

#PowerApps: Numeric Up/Down control with persisted button press event using components

I just recently returned from the Microsoft MVP Global Summit 2019 where I had a chance to meet some of the top minds in the Microsoft PowerApps and Flow space. This was a truly exciting moment as I have been learning from the very same MVPs I met - yes, we do learn from each other!

In one of my hallway discussions, I ran into my buddy Mehdi Slaoui Adaloussi, Principal Program Manager at Microsoft, whom I first met at Microsoft Build 2018. I mentioned that I had read Mehdi's recent article on reusable components and, in particular, that I had been playing with his version of the Numeric Up/Down control.

See 10 Reusable Components: tab control, calendar, dialog box, map control and more.

I must start by saying that the components Mehdi put in place expose some very clever implementation techniques, so I highly recommend you download the msapp files, load them up in your environment, and study them.

The Numeric Up/Down control in particular caught my attention, as it required multiple, repeated individual clicks to advance the value up or down, which could take away from the user experience. So I decided to build from where Mehdi left off by changing a few things.

NOTE: my implementation does not account for the stylistic control settings added by Mehdi, but this is surely an easy feat to accomplish.

Getting Started

NOTE: You will need to enable the Components experimental feature, before you can follow these steps.

1. Create a new component

Click on New Component under the Components tab to create your component. Rename the default name to NumericUpDn.

Components tab

2. Add the controls needed to create this component.

For this control, we will need the following 6 controls:

  • 2 button controls (from the toolbar)
  • The Up icon (from the Icons gallery)
  • The Down icon (from the Icons gallery)
  • A Timer control (from the Controls gallery)
  • A Text Input control (from the Text gallery)

I always recommend you worry about the layout and aesthetics at the very end of the implementation. Nonetheless, I keep the controls close together for ease of organization at the end of the implementation.

The most important thing right now is to get the needed controls. I will also explain the use of each control as we go along. 

3. Add Component custom properties

For the Numeric Up/Down control, we will need 5 custom properties, as follows:

Custom Properties

Default: Number / Input. This will serve to seed our initial numeric text input value when the control is first loaded within an app.

Min: Number / Input. This will be the lower limit for our numeric up/down control. When clicking the down button, we will check to ensure the control value itself never gets below the minimum value.

Max: Number / Input. This will be the upper limit for our numeric up/down control. When clicking the up button, we will check to ensure the control value itself never exceeds the maximum value.

Sensitivity: Number / Input. This will control how fast or slow the button press behaves to increase or decrease the numeric value in the text input field.

Value: Number / Output. This will be the value returned by the control to the calling app. 

4. Rename the Controls

Now that we have all the controls and custom properties in place, we will begin by renaming the controls for readability's sake and ease of following along - it's also a good practice.

Button1, rename to BtnUp
Button2, rename to BtnDn
Icon1, rename to IcnUp
Icon2, rename to IcnDn
TextInput1, rename to NumValue

Rename component controls

NOTE: renaming the Timer control seems to break the timer itself - this is a bug I have reported to the PowerApps team.

5. Add some logic

NumValue control: for this control, change the Format to Number. We will want to ensure the text input control's Default property is set to the incoming Default custom property value if the initial value is blank, as follows:
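The formula from the original screenshot is not reproduced in this copy; assuming the component is named NumericUpDn, and using varValue as a hypothetical context variable that the buttons and timer update, it would look something along these lines:

```
// NumValue.Default (sketch): seed from the component's Default custom
// property until a value has been set by the buttons or timer
If(IsBlank(varValue), NumericUpDn.Default, varValue)
```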

Timer1 control: this is perhaps the most important control on the component, since it will basically control the overall behavior of the timer. First, let's set some properties:

  • Start property. We will want to start the timer when either the BtnUp or BtnDn pressed events are fired. Since the Start property is a true/false (boolean) property, we can set it to BtnUp.Pressed || BtnDn.Pressed

  • Duration property. We will set this property to NumericUpDn.Sensitivity. Basically, we are setting a delay between each increment or decrement of the NumValue control.

  • Repeat property. Set to true. Since we want to persist the button press event, we want the timer to restart each time after the Duration cycle is completed.

  • Reset property. We need the Timer to reset each time either button is released from a Pressed state. Hence, we can use the same true/false state as the Start property, BtnUp.Pressed || BtnDn.Pressed.

Timer control Data settings

Phew! We are done with the basic settings for the timer control.

Next, the timer must perform a couple of actions: 1) on start, it will evaluate which button was pressed and, based on the button, increase or decrease the value in the text control; 2) on end, it will evaluate whether we've reached the lower or upper limit established by the Min and Max custom properties, respectively.

For the OnTimerStart event,
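The original screenshot for this formula is not preserved in this copy; based on the description above, it would look something like this (varValue being a hypothetical variable backing NumValue's Default):

```
// OnTimerStart (sketch): bump the value up or down depending on which
// button is being held, then reset NumValue so Default re-evaluates
If(BtnUp.Pressed,
   Set(varValue, Value(NumValue.Text) + 1),
   Set(varValue, Value(NumValue.Text) - 1));
Reset(NumValue)
```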


For the OnTimerEnd event,

OnTimerEnd event
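Again, the screenshot is not preserved here; per the description, the end event clamps the value to the Min and Max custom properties, roughly as follows (same hypothetical varValue):

```
// OnTimerEnd (sketch): keep the value within the Min/Max limits
If(Value(NumValue.Text) > NumericUpDn.Max,
   Set(varValue, NumericUpDn.Max); Reset(NumValue));
If(Value(NumValue.Text) < NumericUpDn.Min,
   Set(varValue, NumericUpDn.Min); Reset(NumValue))
```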

BtnUp and BtnDn controls: we also want a user to retain the ability to click the buttons without persisting the pressed event, effectively advancing the NumValue control one step at a time until the upper or lower limit is reached. Hence, we must also add some validation to the OnSelect event of each button.

For the BtnUp OnSelect event,

BtnUp OnSelect event

For the BtnDn OnSelect event,

BtnDn OnSelect event
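Neither screenshot survives in this copy; sketches of the two single-click formulas, with the same limit checks and the same hypothetical varValue variable:

```
// BtnUp.OnSelect (sketch): single-step increment, respecting Max
If(Value(NumValue.Text) < NumericUpDn.Max,
   Set(varValue, Value(NumValue.Text) + 1); Reset(NumValue))

// BtnDn.OnSelect (sketch): single-step decrement, respecting Min
If(Value(NumValue.Text) > NumericUpDn.Min,
   Set(varValue, Value(NumValue.Text) - 1); Reset(NumValue))
```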

6. Now some aesthetics

We have completed our low code implementation. Now we are off to organizing the controls and setting some properties that will make this a useful control.

a) Select the BtnUp control and set the size to 62 width and 24 height; set the x and y positions to 257 and 2, respectively. Set the Border Radius property to 5. Set the Fill property to RGBA(56, 96, 178, 0). Set the BorderColor, HoverColor, and HoverFill properties to BtnUp.Fill. Clear the Text property (blank).

b) Select the BtnDn control and set the size to 62 width and 24 height; set the x and y positions to 257 and 26, respectively. Set the Border Radius property to 5. Set the Fill property to RGBA(56, 96, 178, 0). Set the BorderColor, HoverColor, and HoverFill properties to BtnDn.Fill. Clear the Text property (blank).

c) Select the IcnUp control and set the size to 64 width and 25 height; set the x and y positions to 256 and 1, respectively. Set the Fill property to RGBA(0, 18, 107, 1); set the Color property to RGBA(255, 255, 255, 1).

d) Select the IcnDn control and set the size to 64 width and 26 height; set the x and y positions to 256 and 26, respectively. Set the Fill property to RGBA(0, 18, 107, 1); set the Color property to RGBA(255, 255, 255, 1).

NOTE: By setting these properties, the buttons and the icons are now overlaid on each other. To further access these control properties, use the control navigation pane on the left of the Design Studio.

e) Select the NumValue control and set the size to 256 width and 51 height; set the x and y positions to 0 and 1, respectively.

f) Finally, set the NumericUpDn component size to 322 width and 57 height.

You should now have something that looks like this:

NumericUpDn component (shown at 150%)

7. Testing the Component

To test the component, I have added the control from the Components gallery, a slider for the timer sensitivity, and a couple of Text Input boxes, along with a label to track the output from the componentized control. You can quickly guess what goes where.

NOTE: please ensure the Text Input boxes are of numeric type.

Test Screen 

The end result can be appreciated in this video.

You can download the component control from the PowerApps Community Apps Gallery, here.

Until next post,

Mariano Gomez, MVP

Thursday, March 7, 2019

#MSDYNGP: Named Printers and Redirected Printers in RDP environments

A lot of the guiding principles for deploying Named Printers in a Terminal Server or Citrix environment come from two of my favorite articles, written by my good friend and fellow Microsoft Business Applications MVP, David Musgrave (twitter: @winthropdc). David happens to be the creator of Named Printers and probably understands the product better than anyone I know. You can read his articles here:

Using Named Printers with Terminal Server
Using Named Printers with Terminal Server Revisited

These articles continue to be very relevant if you are in an environment where a Print (or Printer) server is the norm and published printers are standard. Print servers are used to interface printers with devices in a network, but mostly to standardize administrative policies, and balance the document load that printers can manage. Part of the standardization is to ensure printers are uniquely identified across the networks, regardless of whether you are accessing the network remotely or physically connected to it. Print servers also ensure that print drivers are consistent across the network, which in turn reduces the possibility of driver clashes or unsupported drivers.

If you are familiar with Named Printers, you know one of the things it likes is standard drivers and standard printer names. The minute the information about a printer driver or name - stored at the OS level - no longer matches the information stored by Named Printers - at the database level - about the same printer, chances are Named Printers will cease to work properly. However, in a print server environment with published printers, this is easily fixed by recapturing the printer properties in Named Printers.

But why am I telling you this? In the era of BYOD and remote offices, system administrators no longer have the time or the willingness to deal with such mundane tasks as worrying about printers and drivers. Heck, most of us now work from home or roam between different offices. Yet, as users, we still need the ability to perform the simple, mundane task of printing documents and generating reports from our ERP system. Enter printer redirection.

Printer redirection was first implemented in Windows 2000 Server. It enables users to print to their locally installed printers from a terminal services session. The Terminal Server client enumerates the local print queues to detect the locally installed printers. This list is presented to the server, and the server creates the print queues in the session. The TS client provides the driver string name for the locally installed printers, and if the server has matching drivers installed, the printers are redirected. When we look at Printers on the Terminal Server, a redirected printer will have a name similar to what is shown below:

Note the printer name is presented with a Printer_Name (redirected sessionId) label. The session Id changes every time the user logs in and out of the terminal services session. Given what we know about Named Printers, it's safe to say this will wreak havoc, causing errors like the following to show up during printing:

Document_Printer "Printer_Name (Redirected SessionId)" or PaperSource "sourceInfo" is not valid

You can go back into Named Printers and recapture the printer properties if need be, but the same will need to be done every time a user logs in and out of the terminal services session. If you have more than one user directing documents to the same physical printer via Named Printers, then this solution (recapturing the printer properties) is simply unusable.

So, what can be done?

Thinking about the problem, I realized this could not be just a Microsoft Dynamics GP/Named Printers issue. There is a multitude of applications designed to capture and store the printer properties they rely on to create a consistent print experience. I started wondering how others were dealing with this issue, so onto Google I went with search terms like "rename printers", "rename redirected printer", etc. I finally ended up with a very interesting hit on a company called Babbage Technologies, located in Minnesota. Babbage has a small product called RenPrinters which, in essence, applies a regular expression to the redirected printer name, allowing you to specify a static name built from a combination of the printer name, user name, and machine name. You can pick and choose which combination to use. This is done at the server operating system level, which then allows you to map that static printer name in Named Printers.
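To illustrate the idea (this is not RenPrinters' actual implementation), a regular expression can strip the session suffix and recover a stable printer name; a PowerShell sketch with a made-up printer name:

```powershell
# A redirected printer name as it appears on the Terminal Server
$redirected = 'HP LaserJet 4100 (redirected 12)'

# Remove the "(redirected <sessionId>)" suffix to get a stable name
$stable = $redirected -replace '\s*\(redirected \d+\)$', ''
$stable   # -> HP LaserJet 4100
```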

The following shows the main application control panel:

There are a number of predefined regular expressions along with a number of predefined printer name formats. You could configure Named Printers to use a template user scenario or create a template per machine, depending on your specific needs. Another important feature is the ability to exclude printers using specific drivers from being renamed, giving you greater control over how the application behaves.

After a server reboot, the printer now appears as defined by the Printer Name Format expression:

This is super useful now as Named Printers is once again happy: standard printer name, standard properties!

Now, to be fair, there are other solutions on the market. There's an open source solution called Printerceptor, currently available on GitHub. Printerceptor uses PowerShell to rename redirected printers and relies on the same concepts of regular expressions and name formatting to do the job. Of course, open source means you are subject to the developer's availability to fix a problem, if one is found.

Hope you found this informative and helpful.

Until next post,

Mariano Gomez, MVP

Monday, March 4, 2019

#PowerApps: Using Components to create a Digital Clock - Part 2

In Part 1 of this series, you saw how my first version of the digital clock turned out. Although it got the job done, it was plagued with repetitive code, repetitive controls, and an oversaturation of variables, which in turn rendered the application hard to follow and, worse yet, affected performance.

In this article, I will show how to use PowerApps Components to promote reusability and decrease the code footprint. Components is currently a preview feature, hence a word of caution: you may need to retest your app once the feature becomes generally available.

The previous experience showed us that we can save time and code by creating a component to be used for the digits of the clock. This digit component could then be enhanced by allowing the developer to pass in the digit to be displayed and the foreground and background colors of the segments - all set up as custom properties on the component - as shown here:

We have also added code for each of the segments that brings them to the foreground or places them in the background, based on the DigitValue custom property. Here's a code snippet for the Fill property of the top segment of our digit:
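The snippet from the original image is not reproduced in this copy; assuming the component is named DigitClock with ForeColor and BackColor custom properties (hypothetical names), the top segment's Fill follows this pattern:

```
// Fill for the top segment (sketch): the top bar is lit for every
// digit except 1 and 4
If(DigitClock.DigitValue in [0, 2, 3, 5, 6, 7, 8, 9],
   DigitClock.ForeColor, DigitClock.BackColor)
```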

Note that we need to qualify the custom property with the name of the component itself. All the code to implement the additional segments can be found in the previous article or by downloading a copy of the msapp file for this project.

Once we have the component in place, we can then move to the app surface, where we add the 3 digits as components, the separating dots, and 4 timers, as in our previous app. Since the code to activate the segments is in the component itself (as shown above), there's no need to add 3 buttons to encapsulate that code anymore.

Hence our first timer control, Timer1, will simply do 2 things:

  • On start, it will evaluate the night mode toggle and set the proper background and foreground depending on the setup parameters (on the Setup screen)
  • On end, it will advance the digit counter. 

NOTE: Each timer is set to 1000 milliseconds with the Repeat property set to true.

The end result is a super streamlined application, with a reusable component and little code to go along, keeping up with the Low Code spirit of PowerApps.

The full implementation of this project can be found on the PowerApps community website, here.

Until next post,

Mariano Gomez, MVP