PowerShell Script Support for Installing and Uninstalling Intune Win32 Applications

This is a quick opinion post on Microsoft’s unveiling of PowerShell script support for installing and uninstalling Intune Win32 applications.

Traditionally, we specified an install and uninstall command line when configuring Intune Win32 applications:

Intune Win32 Apps Install Command

We might specify a command line similar to:

powershell.exe -executionpolicy bypass -file .\deploy-application.ps1

Easy enough, right?

But nowadays, tenants with PowerShell script support for installing and uninstalling Intune Win32 Apps will see that there is another option when deploying Win32 apps in Intune – “PowerShell script”:

PowerShell Script Support Intune for Win32 Apps

This new option allows administrators to paste a chunk of PowerShell code into the Intune console to install and uninstall applications, as opposed to just specifying a command line that points to a PS1 script inside the compiled and encrypted .intunewin payload.

But I don’t see the point of it. And I don’t necessarily like it.

Microsoft mentioned support for PowerShell scripts in Intune Win32 back in September 2025, and there is now a more recent article that explains when to use the PowerShell script option as opposed to the command line option.

To quote the article:

Consider using PowerShell script installers when:

  • Your app requires prerequisite validation before installation
  • You need to perform configuration changes alongside app installation
  • The installation process requires conditional logic
  • Post-installation actions are needed (like registry modifications or service configuration)
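For illustration, the kind of script Microsoft is describing might look something like the sketch below – the paths, product name and registry value are invented for the example, not taken from any real deployment:

```powershell
# Hypothetical sketch of a "PowerShell script" installer: validate a
# prerequisite, install silently, then perform a post-install action.
# All names and paths below are made up for illustration.

# Prerequisite validation
if (-not (Test-Path "$env:ProgramFiles\Contoso\Prereq\prereq.dll")) {
    Write-Output "Prerequisite missing - aborting install."
    exit 1618   # example non-zero exit code so Intune treats this as a failure
}

# Install the application silently
Start-Process -FilePath ".\setup.exe" -ArgumentList "/quiet /norestart" -Wait

# Post-installation action: registry modification
New-ItemProperty -Path "HKLM:\SOFTWARE\Contoso\App" -Name "Configured" `
    -Value 1 -PropertyType DWord -Force | Out-Null

exit 0
```

Of course, every single one of those steps can just as easily live in a deploy-application.ps1 inside the .intunewin payload – which is rather my point.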


Uh? So in other words, absolutely no benefit over calling a PS1 script from within the .intunewin. What’s more, these scripts (stored in the application metadata) are limited to 50KB in size?

I’ve read other blog posts on this new feature, explaining why this new functionality is “advantageous” over the command line option.  Reasons such as:

  • “Script changes no longer require a full rebuild and reupload of the app.”
  • “Reduced overhead of testing, repackaging and updating.”
  • “Difficult to see code when it’s compiled inside an Intunewin.”
  • “Can now see the code directly in the Intune portal.”

And I still don’t get it.

To expand on the first two bullets (which made me slightly sick in my mouth when I read them), what we’re essentially advocating here is making changes on the fly in the Intune console without the application being version controlled, and correctly tested and documented. So in other words, this new feature endorses wild-west application management – something which I hate.

And on the last two bullets – surely we all store the original package/source code in an uncompressed, pre-compiled, version-controlled, secure environment with sufficient data retention policies in place?

At the moment, all I see with this new feature is a license for cowboy administrators to make undocumented changes on the fly. In reality, if the install or uninstall script changes, we should be releasing a new version of the package, regardless of how small the change is. And that shouldn’t be too arduous, considering a large part of our application packaging process is automated (isn’t it?).

Running the x64 PowerShell from x86 SCCM ConfigMgr

This post describes running the x64 PowerShell from x86 SCCM ConfigMgr. I stumbled upon an issue when I’d created a PowerShell script to manage App-V 5 packages. On an x64 platform, we tried to launch the PowerShell script with the x64 version of PowerShell (i.e., the version in %windir%\system32) via an SCCM 2007 program like so:

%windir%\System32\WindowsPowerShell\v1.0\PowerShell.exe .\AV5Admin.ps1 {parameters}

Upon running this program we received the following PowerShell error:

The current processor architecture is: x86. The module ‘C:\Program Files\Microsoft Application Virtualization\Client\AppvClient\AppvClient.psd1’ requires the following architecture: Amd64.

This immediately made me think that SCCM was launching the x86 PowerShell console as opposed to the x64 one. But strangely enough, the execmgr log file still said it was running the x64 PowerShell at %windir%\System32\WindowsPowerShell\v1.0\PowerShell.exe. This log entry was misleading: it simply echoed the exact command line from the SCCM program, as opposed to what was actually running.

It turns out that the program execution environment in SCCM 2007 ConfigMgr is 32-bit, and hence the x86 version of PowerShell was being launched due to WOW64 file system redirection. You can find out more information on the redirection process here. To circumvent this issue, the command line needed to be tweaked to use the Sysnative alias like so:

%windir%\Sysnative\WindowsPowerShell\v1.0\PowerShell.exe .\AV5Admin.ps1 {parameters}

WOW64 recognises Sysnative as a special alias used to indicate that the file system should not redirect the access, and should instead use the native system folder. This alias was introduced in Windows Vista. Note that you can ONLY use this alias from 32-bit processes; it would not be recognised if, for example, you used it from a 64-bit instance of cmd.exe.

As an example, if you launch cmd.exe on an x86 platform, and run the following command:

%windir%\sysnative\notepad.exe

You will get the error:

The system cannot find the path specified.

This is because the Sysnative alias only exists for 32-bit processes running under WOW64, and there is no WOW64 on an x86 platform. Similarly, if you launch the same command on an x64 platform using the native x64 cmd.exe, you will get the same error, because the alias is not recognised by 64-bit processes.

However, if you launch an x86 version of cmd.exe on an x64 platform and run the same command, it will launch the native (x64) version of notepad.exe.

Out of interest if you just run the command:

notepad.exe

from the x86 version of cmd.exe on an x64 platform, it will launch the x86 version of notepad.exe.
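The behaviour above can be dealt with from PowerShell itself. Here’s a sketch (using the standard [Environment] bitness properties) of how you might build a path to the native PowerShell.exe regardless of which flavour of process you happen to be running in:

```powershell
# Sketch: work out which flavour of process we're in, and build a path to the
# native PowerShell.exe even from a 32-bit process on a 64-bit OS.

if ([Environment]::Is64BitProcess) {
    # Already a native 64-bit process - Sysnative would NOT be recognised here
    $nativePowerShell = "$env:windir\System32\WindowsPowerShell\v1.0\PowerShell.exe"
}
elseif ([Environment]::Is64BitOperatingSystem) {
    # 32-bit process under WOW64 - System32 is redirected, so use Sysnative
    $nativePowerShell = "$env:windir\Sysnative\WindowsPowerShell\v1.0\PowerShell.exe"
}
else {
    # Genuine x86 OS - there is no Sysnative alias at all
    $nativePowerShell = "$env:windir\System32\WindowsPowerShell\v1.0\PowerShell.exe"
}

Write-Output "Native PowerShell path: $nativePowerShell"
```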


Using PowerShell and FTP to Create a Directory

Here’s an example of using PowerShell and FTP to create a directory, using credentials to authenticate. If there is an error during the process, we try to see if it’s because the directory already exists on the server. If not, then it must be down to another issue and we can handle the exception accordingly.

$newFolder = "ftp://servername/newfolder/"
$ftpuname = "username"
$ftppassword = "password"

try {
    $makeDirectory = [System.Net.WebRequest]::Create($newFolder)
    $makeDirectory.Credentials = New-Object System.Net.NetworkCredential($ftpuname,$ftppassword)
    $makeDirectory.Method = [System.Net.WebRequestMethods+FTP]::MakeDirectory
    $response = $makeDirectory.GetResponse()
    $response.Close()

    #folder created successfully

} catch [Net.WebException] {
    try {

        #if there was an error returned, check if the folder already exists on the server
        $checkDirectory = [System.Net.WebRequest]::Create($newFolder)
        $checkDirectory.Credentials = New-Object System.Net.NetworkCredential($ftpuname,$ftppassword)
        $checkDirectory.Method = [System.Net.WebRequestMethods+FTP]::PrintWorkingDirectory
        $response = $checkDirectory.GetResponse()
        $response.Close()

        #folder already exists!
    }
    catch [Net.WebException] {
        #if the folder didn't exist, then it's probably a file perms issue, incorrect credentials, dodgy server name etc
    }
}
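Incidentally, rather than issuing a second request, you could inspect the status that came back with the WebException itself. Here’s a sketch (reusing the $newFolder and credential variables from above) – note that FTP status codes are a bit ambiguous, e.g. 550 can mean the directory already exists or that access was denied:

```powershell
# Sketch of an alternative: read the FTP status from the exception's response
# instead of issuing a second PrintWorkingDirectory request.
try {
    $makeDirectory = [System.Net.WebRequest]::Create($newFolder)
    $makeDirectory.Credentials = New-Object System.Net.NetworkCredential($ftpuname,$ftppassword)
    $makeDirectory.Method = [System.Net.WebRequestMethods+FTP]::MakeDirectory
    $makeDirectory.GetResponse().Close()
}
catch [Net.WebException] {
    # Cast the response so we can get at the FTP-specific status information
    $ftpResponse = $_.Exception.Response -as [System.Net.FtpWebResponse]
    if ($ftpResponse) {
        Write-Output "FTP server said: $($ftpResponse.StatusCode) - $($ftpResponse.StatusDescription)"
        $ftpResponse.Close()
    }
}
```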


Using PowerShell, CAML Queries and SOAP to Read from a SharePoint List

Similarly to my post here, this post describes how we can use PowerShell to extract information from a SharePoint list.

$list = $null             
$service = $null  

# The uri refers to the path of the service description, e.g. the .asmx page            
$uri = "http://xxx/_vti_bin/Lists.asmx"

# Create the service            
$service = New-WebServiceProxy -uri $uri -Namespace SpWs -UseDefaultCredential   
$service.url = $uri

# The name (in this case, the GUID) of the list
$listName = "331609D1-793D-4075-BC88-570956C6D729"

$xmlDoc = new-object System.Xml.XmlDocument

$queryOptions = $xmlDoc.CreateElement("QueryOptions")
$queryOptionsString = "<IncludeMandatoryColumns>FALSE</IncludeMandatoryColumns><DateInUtc>TRUE</DateInUtc><ViewAttributes Scope='RecursiveAll' />"
$queryOptions.set_innerXML($queryOptionsString)

$query = $xmlDoc.CreateElement("Query")
$queryString = "<OrderBy><FieldRef Name='Title' Ascending='TRUE' /></OrderBy>"
$query.set_innerXML($queryString)

# An empty ViewFields element returns the fields from the default view
$viewFields = $xmlDoc.CreateElement("ViewFields")

$rowLimit = "999"

try {
    $list = $service.GetListItems($listName, "", $query, $viewFields, $rowLimit, $queryOptions, "")
}
catch [System.Exception] {
    write-host ($_.Exception).Message
}

Once I’d received the list of values back as a System.Xml.XmlNode object, I added them to a ComboBox like this:

foreach ($node in $list.data.row) {

    $customerBox.items.add($node.GetAttribute("ows_Title")) | Out-Null

}

Incidentally, I needed to retrieve some more values from my SharePoint list for use later on. Rather than doing another web service request, I decided to store the values I needed during the first request so I could use them later. I updated the foreach loop above like so:

#hash table to store the FTP credentials against each customer title
$FTPhash = @{}

foreach ($node in $list.data.row) {

    $FTPhash.Add($node.GetAttribute("ows_Title"), @{
        "uname" = $node.GetAttribute("ows_FTP_x0020_Username")
        "pword" = $node.GetAttribute("ows_FTP_x0020_Password")
    })

    $customerBox.items.add($node.GetAttribute("ows_Title")) | Out-Null

}

You can see that we’ve used a hash table to store the results. Not only is it a hash table, but it’s a hash table inside another hash table!!

A hash table is similar to the VBScript Dictionary object in that it stores key-value pairs. In our example above, the key is:

$node.GetAttribute("ows_Title")

and the value is another hash table of:

@{
    "uname" = $node.GetAttribute("ows_FTP_x0020_Username")
    "pword" = $node.GetAttribute("ows_FTP_x0020_Password")
}

and how do we retrieve these values?  Easy…

$ftpuname = $FTPhash.Get_Item($customerBox.SelectedItem).uname
$ftppassword = $FTPhash.Get_Item($customerBox.SelectedItem).pword


**NOTE**

I’ve just used this code in another environment and for a different client.  I was getting:

Exception calling “GetListItems” with “7” argument(s)

And the detailed exception didn’t tell me much. It turned out to be down to the service URL.

If I initially created the service with a URL such as https://domain.co.uk/sites/IMT/deskserv/_vti_bin/Lists.asmx and immediately afterwards I echoed out:

$service.url

It would return the URL as being https://domain.co.uk/sites/IMT/Lists.asmx!! (Notice the difference – this is actually based on the site URL!) Hence I now explicitly set the URL of the web service proxy after creating it, using:

$service.url = $uri