
Not so Unlocked Packages

TL; DR;

Hard-learned lessons from using Unlocked Packages with a Namespace. Well worth it when done right and when you know what not to use them for. Learn about the many things you cannot do with them below.


Motivation

First of all, I have to set the scene; not all of the issues will apply to your specific context. We are developing several independent applications that share a common base. All development is currently in-house, targeting a single Production Org. However, there is potential for many more applications, teams, providers and even Orgs. None of the applications are really meant for public distribution though. Not yet, anyway. Strict version control and a robust release process are a must, and security is an important topic.

Managed Packages seemed a step too far since we are not really worried about exposing the source code and want to be able to debug easily. I still felt that the real, physical boundary of a namespace would be useful to ensure security and prevent accidental dependencies between different packages. Unlocked Packages with Namespaces sounded like the perfect solution. And they are. I still firmly believe that, even after running into (quite) a few important limitations that are not exactly well documented. If at all.

BTW, I have had Salesforce support tell me that "Unlocked Packages with a Namespace are essentially equivalent to Managed Packages". Some might find this understandable. I do not understand why this message is not screaming at us from the documentation on Unlocked Packages. I guess mostly because it's not entirely true. Even if damn close sometimes.

The Limitations of Unlocked Packages with a Namespace

Below I list the issues and gotchas we have faced so far. I hope it will save you some time. I sure wish I knew all of it ahead of time. I will keep updating this article as I find new ones. Please feel free to get in touch and contribute!

Tests are no longer local

You are no longer able to use the --test-level RunLocalTests option to run all the tests in your project outside the Scratch Org. The documentation says "except tests from managed packages", but it really should say "from namespaced packages":

RunLocalTests — All tests in your org are run, except the ones that originate from installed managed packages.

It makes sense, but it may not be immediately obvious: "local" basically means in the same namespace as the org. Working in scratch orgs you don't have a problem, since namespaced projects are developed in namespaced scratch orgs. There, the tests actually are local.

Given you are developing a package, you could argue that tests are part of creating the package and don't need to run again when installing into target orgs. And you are right, really. Also, running all tests in the org will run the tests in your namespace. But it will also run any other managed tests, and those are usually outside of your control, so you might not want your pipelines delayed or failed because of them.

My particular problem was generating a code coverage report after running tests post-merge in the integration sandbox, for upload to SonarQube. It's a sandbox, there is no namespace, and we are already installing (beta) packages. I created a script to build an accurate test suite to run my specific tests from the namespace (which works), but the exported report does not include the coverage, behaving as if the tests were in a managed package (with hidden source code).
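The idea behind such a script can be sketched roughly like this. This is a minimal sketch, not my actual script: it assumes test classes follow a *Test.cls naming convention (adjust the pattern to your project) and only prints the sf command rather than running it.

```shell
# Build the --tests argument for "sf apex run test" from local source,
# so only the project's own namespaced tests run, skipping managed ones.
build_test_list() {
    find "$1" -name '*Test.cls' -exec basename {} .cls \; | sort | paste -sd, -
}

# Demo against a throwaway directory so the snippet is self-contained:
demo=$(mktemp -d)
mkdir -p "$demo/force-app/main/default/classes"
touch "$demo/force-app/main/default/classes/AccountServiceTest.cls"
touch "$demo/force-app/main/default/classes/CaseServiceTest.cls"

TESTS=$(build_test_list "$demo/force-app")
echo "sf apex run test --tests $TESTS --code-coverage --result-format human"
# prints: sf apex run test --tests AccountServiceTest,CaseServiceTest --code-coverage --result-format human
```

The `--tests`, `--code-coverage` and `--result-format` flags are the standard `sf apex run test` options; this approach sidesteps RunLocalTests entirely.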

Cannot change published @api attributes of LWC components

This is mentioned in the documentation for Managed Packages. It’s a perfectly valid thing, protecting subscribers from breaking changes in packages from ISVs. But Unlocked packages are meant for in-house development where, more often than not, the subscriber is also the developer and a lot more freedom is allowed.

Still, you cannot make the same changes to components marked as "exposed" for the Lightning App Builder, just like in a Managed Package. And once a component is marked as "exposed", it cannot be changed back. It may be possible to fully delete the component, but I haven't gone through with that yet.

Cannot use Permission Set Groups

Well, you can, but they don't work as expected. You can create Permission Set Groups. You can add Permission Sets to them. But you cannot remove a Permission Set from an existing Permission Set Group. Not via a package upgrade – there is no error, it just does not get removed. And not manually either – this time you get a "cannot remove a managed component" error.

I had a lot of fun with this one, as SF Support confirmed this is "working as expected" because of the namespace. Apparently Managed Packages are supposed to behave like that, and Unlocked Packages with a Namespace follow suit. Except that the documentation for Managed Packages says you definitely can remove Permission Sets from Permission Set Groups via an upgrade (no, you can't). So we are still at it. I might be getting close to a bug report and shall update you when I have results.

Support did say it would be a useful feature, though, so you can show some love to the Idea that got created as a result.

Update 2024-05-02: Support have said that the line "You can add or remove permission sets in permission set groups as part of a package upgrade. Subscribers can also modify the permission set groups by muting permissions or adding or removing local permission sets." will be removed from the documentation soon. The Work Item to allow this for 2nd Gen Packages has been updated to cover both Managed and Unlocked Packages.

Cannot extend Lightning Knowledge at all

Specifically, adding any Fields or Record Types to the Knowledge__kav object does not work. You can create the package, you just can't install it, getting the following error:

Entity not available 
The Entity ‘Knowledge__kav’ is not found. Contact the vendor for more details.

This caused some major headaches for us and we ended up massively complicating our package structure because of it, breaking some dependencies that really should exist. But eventually it was acknowledged as a bug. Though we don’t have any time estimate on getting this fixed. Talk to your Account Executive – that’s the current advice.

Cannot package ProfileActionOverride

This means FlexiPage activation via an App/RecordType/Profile combo as part of a Custom Application. This is not strictly a namespace issue, as the same problem appears in standard Unlocked Packages. It is actually documented, but you have to click through from the Metadata Coverage Report (which mentions that Custom Application is supported, without asterisks) to the Metadata API detail page and scroll to the ProfileActionOverride section.

Activation via App (ActionOverride) works fine, BTW.

Cannot include Public Groups and Queues

This one should be obvious because it's mentioned in the Metadata Coverage Report (for once). But if you were to try and add them, it all works fine as long as you only build without full validation. You can even install the beta package in sandboxes and get the Groups and Queues created. They are not part of the namespace, though, and once you try to do a proper build with validation (so that you could promote the package), that's when the error comes.

Cannot package Custom Field Encryption Scheme

This one is probably not specific to namespaced packages. I originally observed some very strange behaviour where it would work for Unlocked Packages without a namespace, but later I was no longer able to reproduce that. It must have been my mistake. You can see more detail of what I was going on about in my StackExchange question.

Custom Fields are marked as supported for packaging, and encryptionScheme is part of the Custom Field metadata XML. It is also possible to add Platform Encryption as a required feature in the package definition file. However, generating a package version which includes encrypted fields does not fail without this feature in the definition file, and the field will not be encrypted upon install even if suitable encryption is available in the target org. This probably makes sense, as the Subscriber should control what they want to encrypt.
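For context, this is where the scheme lives in the field's metadata XML. The field below is a made-up illustration; valid values are None, ProbabilisticEncryption and DeterministicEncryption.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<CustomField xmlns="http://soap.sforce.com/2006/04/metadata">
    <fullName>TaxId__c</fullName>
    <label>Tax Id</label>
    <length>32</length>
    <type>Text</type>
    <!-- present in source, but silently ignored when the package is installed -->
    <encryptionScheme>ProbabilisticEncryption</encryptionScheme>
</CustomField>
```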

Cannot make @AuraEnabled methods available to components outside of the package

Not even in the same namespace. There is no equivalent to @NamespaceAccessible in this scenario.

To be fair, this one is also documented and fairly understandable. Although with Unlocked Packages without a namespace this is not the case: with those, your controller and component don't need to be in the same package.

Specifics of a “Namespaced” project

SFDX projects can only have one namespace. While you can develop multiple packages in the same project, they all have to have the same namespace. Well, actually, that's not quite true: you can have namespaced and no-namespace packages in the same project (I know we do). All you have to do is specify --no-namespace when creating the package in the DevHub. It will just complicate your workflow and your validation process quite a bit.
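For reference, the namespace is declared once at the project level in sfdx-project.json and applies to all package directories in it. The package name and namespace below are illustrative:

```json
{
    "packageDirectories": [
        { "path": "force-app", "package": "PragBearCore", "default": true }
    ],
    "namespace": "PragBear",
    "sourceApiVersion": "60.0"
}
```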

Typically, you do not need to explicitly use the namespace prefix in your metadata. You are developing against an org (Scratch Org) which has the namespace associated with it (this happens automatically when creating a Scratch Org from an SFDX project with a namespace specified, unless you said --no-namespace), and Salesforce will manage this for you. Once you build the package, the namespace will be used appropriately when referencing all the Objects, Fields, Classes and so on. Well… in most cases.

So far I have found that you have to explicitly use the fully qualified name of a metadata component (including the namespace) in these situations:

Serialising JSON into SObjects in Apex

For example, when creating in-memory SObject instances to mock relationships. Watch out though: if you don't specify the namespace in the String field name, no Exception is thrown. Instead, the resulting SObject instance just does not have a value in that field. You can use the Schema class to get the full name instead. This is a better approach, even if a little wordy.

CustomObject__c mockRecord = (CustomObject__c) JSON.deserialize(
    JSON.serialize(
        new Map<String, Object>{
            'Id' => FakeId.build(CustomObject__c.SObjectType),
            'PragBear__Status__c' => 'New', // good
            'IsClosed__c' => false, // no good if the field is part of the namespace
            CustomObject__c.Type__c.getDescribe().getName() => 'Complaint' // good
        }
    ),
    CustomObject__c.class
);

Referencing Static Resource in Field formulas

Now this one was really strange. We're talking about IMAGE formulas that help you create visual icon fields. For some reason, on Windows computers only, the images did not load correctly. It could have been due to some specific infrastructure limitations in place at the company; it worked fine on a Mac. Once the full name of the static resource, including the namespace, was used, it worked everywhere.

CASE(
    TEXT(Status__c),
    'GREEN', IMAGE('/resource/PragBear__svg_resources/green.svg', 'Great', 20, 20),
    'ORANGE', IMAGE('/resource/PragBear__svg_resources/amber.svg', 'Good', 20, 20),
    'RED', IMAGE('/resource/PragBear__svg_resources/red.svg', 'Bad', 20, 20),
    ''
)

Permission Set Group

You always need to use the full name of a Permission Set, including the namespace, when listing the Permission Sets included. But you should not add Permission Set Groups to a namespaced package anyway (see the issue above). At least until Salesforce resolve the problem.

<?xml version="1.0" encoding="UTF-8"?>
<PermissionSetGroup xmlns="http://soap.sforce.com/2006/04/metadata">
    <description>Intended for developers using the Salesforce CLI giving full access to Scratch Orgs and Packages.</description>
    <hasActivationRequired>false</hasActivationRequired>
    <label>Developer</label>
    <permissionSets>PragBear__SfdxDeveloper</permissionSets>
    <permissionSets>PragBear__ApiAccess</permissionSets>
</PermissionSetGroup>

String literal SObject and Field names in LWC

For instance, accessing the field values of the record returned from a wire. This makes sense, really, as there is no real binding to resolve the namespace. Using imports from @salesforce to get SObject and Field names works better here, but you then cannot use the dot-notation the same way.

import { getRecord } from 'lightning/uiRecordApi';
import MY_FIELD from '@salesforce/schema/MyObject__c.MyField__c';
...

@wire(getRecord, { recordId: '$recordId', fields: [MY_FIELD] })
record;

get someValue() {
    // have to specify the namespace
    return this.record?.data?.fields?.PragBear__SomeField__c?.value;
}

get someOtherValue() {
    // binding resolved automatically, no need to specify the namespace
    return this.record?.data?.fields[MY_FIELD.fieldApiName]?.value;
}

The same applies in the HTML markup: either specify the namespace in a literal string, or use a getter to expose the fully qualified name obtained via an import.

<lightning-output-field
    field-name="PragBear__MyField__c"
    class="slds-p-vertical_none"
></lightning-output-field>

…

<lightning-output-field
    field-name={myFieldName}
    class="slds-p-vertical_none"
></lightning-output-field>

Not when using an LWC in markup

You do not use your namespace when adding an LWC component to the markup of another. Instead, you still use the "c" namespace. So if you had a component named fancyComponent in an Unlocked Package with the namespace PragBear, you would use it like this in other components:

<template>
    <!-- Good -->
    <c-fancy-component></c-fancy-component>

    <!-- Does not work -->
    <pragbear-fancy-component></pragbear-fancy-component>
</template>

Conclusion

There will surely be more things that come up as we go; please let me know if you have other situations worth sharing. Unlocked Packages with a Namespace are (IMO) extremely useful. If only they had more / better documentation.

My current ideal project setup uses them (at least) for all Objects, Apex, LWC, Permission Sets and so on. The more control you need over a component (like Apex – don't allow anyone outside of your package to call anything you didn't design to be exposed), the more useful it is for it to be part of a namespace. Sometimes even FlexiPages or Custom Applications can be in there. Never Permission Set Groups, though!

Getting as close as possible to an ISV-like approach is a good way to ensure proper modularisation. However, each "in-house" project (where you ultimately control a specific PROD org) needs an "org-level" repo for managing the things an Administrator would otherwise manage manually: the tweaks to installed packages, configs, or global metadata like Search Layouts and more. These things will never fit into a package.

