Channel: SAP Identity Management

MMC Errors


I've been working with various versions of SAP IDM for 9 years now, back to my days at MaXware. I commonly tell people, "There's not an error you've seen with this product that I haven't seen before." Well, I was wrong. Recently, users of our project's current development system were getting the following error, which I had never seen before:

 

MMC Error.png

Upon investigation, and with some assistance from support, we found that there were multiple definitions of the same data source in the EMSConfig.XML file. As EMSConfig.XML is rather important (it holds many MMC-specific settings and the configuration from Tools/Options), make sure you back this file up before editing it.

before.png

 

If you're wondering, I'm using Microsoft's XML Notepad 2007 to view the file. It's a nice tool that renders the XML of EMSConfig.XML much more readably than Notepad or Notepad++. Note that there are two EMSDB entries, both named Clone, and that one value is populated while the other is not. After a quick cleanup, the file looks like this:

 

after.png

 

Everyone can log into the MMC now.  Problem solved!


IDM Data Structure - Query Writing


Hi All,

 

Could someone please help? I'm trying to understand how the data is structured within an IDM installation. I have read the technical schema for the entry types, but it doesn't explain how the data is referenced from one table to the next (e.g. how MXI_VALUES and MXI_ATTRIBUTES make up part of the view MXIV_SENTRIES). I basically want to understand the structure of the data for query writing. I have also read that it is based on an extensible functional data model, which I would like to understand.

Small tip: Setting the System Privilege Modify Trigger Attributes programmatically


I recently joined a nicely ongoing IdM 7.2 project, and one of my tasks is to load contact information for all existing users from a phone-book type of application that is about to be decommissioned. This bulk load will naturally trigger a lot of update operations unless attention is paid to which attribute modifications should actually be replicated to the target systems.

 

One of my own "rules" with SAP IdM is to avoid any unnecessary processing, and running modify tasks against repositories where those particular attributes may not even be relevant (or are not even mapped in the toSAP/toLDAP etc. pass) is exactly that kind of waste.

 

The standard initial load job template sets the Trigger Attributes for System Privileges via a script that I've found a little awkward, so I thought it best to create my own job, one that would also give the customer more control and an actual way to maintain the trigger attributes. The list of attributes can be maintained in the MMC under Privilege Metadata, but selecting attributes from a long list is a bit tedious.

 

Since it looks like I am going to use the job again today, I thought I would share it here in case it helps anyone else.

 

In short, the job fetches all the System Privileges, gets the comma-separated list of attribute names from a repository constant (or from a global constant if the repository constant is missing or not defined for that particular repository), looks up the matching Id Store specific attribute IDs and stores them in each System Privilege's MX_MODIFYTASK_ATTR attribute.

 

Global/repository constant

SetModifyAttrs-GlobalConstant.jpg

 

Source-tab

The source SQL is just an example of getting the names of the System Privileges.

SetModifyAttrs-sourceTab.jpg

 

Destination-tab

The destination tab handles each of the System Privileges and sets the attributes.

SetModifyAttrs-destinationTab.jpg

 

Scripts

The script "setAttrs" sets the trigger attributes. The name of the System Privilege currently being processed is passed in the Par parameter. The script derives the repository name from the privilege name, tries to read the repository constant (falling back to the global constant), resolves the attribute IDs from the attribute names in the SAP Master Id Store, and returns them as a pipe-separated multivalue string.

SetModifyAttrs-setAttrsScript.jpg.jpg

 

The u-function uGetRepositoryVar works in passes that run in a repository context, but for a standalone job you need to write your own script that takes the repository and variable/constant names as parameters. "NULLATTR" is a special hardcoded value in SAP IdM that acts as a "do nothing" value for an attribute; it is handy to return in error cases when you want to make sure nothing gets updated.
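For illustration, here is a hedged sketch in plain JavaScript (Node.js) of the core resolution logic described above, not the actual IdM pass script: the attrIds map stands in for the uSelect lookup against the SAP Master Id Store, and the CSV string stands in for the value read from the repository/global constant.

```javascript
// Hedged sketch: resolve a comma-separated list of attribute names
// to a pipe-separated list of attribute IDs, as the setAttrs script does.
// attrIds is a stand-in for a lookup against the SAP Master Id Store.
function buildTriggerAttrList(attrNamesCsv, attrIds) {
  if (!attrNamesCsv) {
    // "NULLATTR" is IdM's hardcoded do-nothing value; returning it
    // ensures nothing gets updated when the constant is missing.
    return "NULLATTR";
  }
  var ids = [];
  var names = attrNamesCsv.split(",");
  for (var i = 0; i < names.length; i++) {
    var name = names[i].trim();
    if (attrIds.hasOwnProperty(name)) {
      ids.push(attrIds[name]);
    }
  }
  return ids.length > 0 ? ids.join("|") : "NULLATTR";
}
```

Returning NULLATTR on any failure mirrors the safety net described above.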

SetModifyAttrs-getRepositoryVariableScript.jpg.jpg

Power of Delta option in a "To Ids Pass"


I would like to add a few things to the Delta handling covered by Ian Daniel in his blog. The Delta option is used more often in a "To Pass" than in a "From Pass". It is a very powerful feature that helps reduce the load on the network and on target systems. When enabling the delta, a delta table name is provided. During the first run, this table stores all the entries in hashed format. In subsequent runs, the system compares the hash value of each new record with the hash value stored in the delta table. If it is different or missing, the system writes the entry to the target and stores the new hash value in the delta table. If it is the same, it marks the record as processed in the delta table and writes nothing to the target.

 

1.jpg

 

An LDAP directory is most often used to load users into the IdM system. In such cases, a delta can be enabled to avoid reading all the users every day. Fields to pay particular attention to are "Max limit for mark for deletion" and "Max real updates". The former has to be set based on the average number of users deleted/terminated in your organization, so make sure you choose a reasonable value. The last thing you want to see is all users in the target system being deleted just because some corrupt data was passed into IdM from a source system (a single point of failure). If you set the value to, say, 3%, and the number of records being deleted exceeds it, nothing will be marked for deletion in the delta table and no records will be deleted.
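That safeguard boils down to a simple percentage check, as in the hedged sketch below (the function name and signature are mine, not IdM's; the product applies the same idea internally before marking anything for deletion).

```javascript
// Hedged sketch of the "Max limit for mark for deletion" safeguard:
// if the share of entries that would be deleted exceeds the limit,
// nothing is marked for deletion at all.
function allowDeletions(totalEntries, pendingDeletions, maxPercent) {
  if (totalEntries === 0) return false; // nothing to compare against
  return (pendingDeletions / totalEntries) * 100 <= maxPercent;
}
```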

IdM integration with BW for Reporting


In this blog I would like to explore how BW reporting is enabled for Identity Management. Identity Reporting Using SAP NetWeaver Business Warehouse - Implementation Guide (http://scn.sap.com/docs/DOC-17058) covers all the steps to enable reporting; I will try to get into more detail on the configuration, with screen captures.


The components below are involved in the landscape. The Identity Center communicates with the VDS via the LDAP protocol. The BW configuration in VDS is used to make a Web service call to the BW system to pass all the identities and attributes.

1.jpg

 

By using the job wizard in the Identity Center, the standard “IdM to BW Data transfer” job can be created in IdM.

2.jpg

As shown above, several passes are executed to achieve the data transfer from IdM to BW.

 

Prepare Delta Criteria:


IdM assigns a change number to every change performed on an entry/attribute and records this change number in database tables. The objective of this pass is to obtain the change numbers for all current attribute records, current link records and history records. The values are stored temporarily in local job variables that are used for processing further down the line. This “To Generic” pass runs a script; below is the code snippet.

 

3.jpg

 

By the end of this script's execution, three local job variables will have been created to hold the change numbers:

  • DELTA_NEW_LAST_CHANGE_NO_CURRENT_ATTRS
  • DELTA_NEW_LAST_CHANGE_NO_CURRENT_LINKS
  • DELTA_NEW_LAST_OLD_ID_OLD_ATTRS_LINKS
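Conceptually, the pass boils down to something like the following hedged sketch. A plain object stands in for the local job variables; the real pass computes these numbers via SQL against the IdM tables, not from in-memory rows.

```javascript
// Hedged sketch of the "Prepare Delta Criteria" idea: pick the highest
// change number from each record set and park the values where the
// later passes can read them.
function prepareDeltaCriteria(currentAttrRows, currentLinkRows, historyRows) {
  const maxChangeNo = rows => rows.reduce((m, r) => Math.max(m, r.changeNumber), 0);
  return {
    DELTA_NEW_LAST_CHANGE_NO_CURRENT_ATTRS: maxChangeNo(currentAttrRows),
    DELTA_NEW_LAST_CHANGE_NO_CURRENT_LINKS: maxChangeNo(currentLinkRows),
    DELTA_NEW_LAST_OLD_ID_OLD_ATTRS_LINKS: maxChangeNo(historyRows)
  };
}
```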

 

 

Transfer Current Attributes:


This is a “To LDAP” pass which calls the VDS, passing attributes for all the records fetched by the SQL below, executed against the view idmv_bw_current_values.

 

4.jpg

The function sap_getNewLastChangeNoForCurrentAttrs() returns the value contained in the local job variable DELTA_NEW_LAST_CHANGE_NO_CURRENT_ATTRS.

The function sap_getSQLDeltaCriteriaForCurrentAttrs() is appended to provide a dynamic where clause. If this is not the initial load, it appends “AND ChangeNumber > ” + <<permanent job variable>> to the SQL statement so that only the latest records (those not yet sent to BW) are retrieved. The value of the <<permanent job variable>> is set in the last step, shown below.
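The effect of that function can be sketched as follows. This is a hedged illustration only; representing "no previous run" as null is my assumption, not necessarily the product's actual convention.

```javascript
// Hedged sketch of the dynamic delta criterion: on the initial load the
// base SQL runs as-is; on delta runs an "AND ChangeNumber > <n>" clause
// is appended using the permanent job variable from the previous run.
function buildDeltaSql(baseSql, lastChangeNo) {
  if (lastChangeNo === null || lastChangeNo === undefined) {
    return baseSql; // initial load: fetch everything
  }
  return baseSql + " AND ChangeNumber > " + lastChangeNo;
}
```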

 

Transfer Current Links:


This is a “To LDAP” pass which calls the VDS, passing attributes for all the records fetched by the SQL below, executed against the view idmv_bw_current_links.

 

5.jpg

Again, the functions retrieve the value contained in the local job variable DELTA_NEW_LAST_CHANGE_NO_CURRENT_LINKS and handle delta runs.

 

Transfer Historical Attributes and Links

This is a “To LDAP” pass which calls the VDS, passing attributes for all the records fetched by the SQL below, executed against the view idmv_bw_old_values_and_links.

 

6.jpg

Again, the functions retrieve the value contained in the local job variable DELTA_NEW_LAST_OLD_ID_OLD_ATTRS_LINKS and handle delta runs.

 

End of Transmission


This is another “To LDAP” pass which actually triggers the process chain in BW system.

 

7.jpg

If the StartChain attribute is set to <blank>, the process chain in the BW system will not be triggered.

 

Update Delta to Permanent Variables


This “To Generic” pass stores the local job variables' values into permanent job variables. Note that the values of the permanent job variables are referenced in the above “To LDAP” passes (in subsequent delta runs) to determine whether the job is running as an initial run or a delta run. The code snippet below shows how the values of the permanent job variables are set and those of the local job variables are cleared.

8.jpg

 

After the initial load has executed, you should see values against each of the permanent job variables, as shown below.

9.jpg

The “To LDAP” passes make a call to the VDS. The BW configuration in VDS looks similar to the screen below. MX_USERNAME is an actual user in the BW system which is used when executing the Web service.

10.jpg

Hint: To minimize communication overhead, the BW connector by default collects 1000 entries and sends them in one package to the BW system. If you would like to reduce the package size, you can set SUBMIT_SIZE in the constants to a lower value.
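The packaging behaviour amounts to simple chunking, as in this hedged sketch (the function name is mine; the connector does the equivalent internally, governed by SUBMIT_SIZE).

```javascript
// Hedged sketch: collect entries into packages of at most submitSize
// before each call to the BW system.
function packageEntries(entries, submitSize) {
  const packages = [];
  for (let i = 0; i < entries.length; i += submitSize) {
    packages.push(entries.slice(i, i + submitSize));
  }
  return packages;
}
```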

 

Under the User Group > Authenticated section, the bwuser user created here is used for communication between the Identity Center and the VDS. The bwuser is entered in the repository (BW to VDS) created in the Identity Center.

 

11.jpg

When the VDS receives a request from the Identity Center, it in turn makes a Web service call to the BW system to trigger a process chain. The screen below shows an activated web service RS_BCT_IDM_CHAIN_START which has been configured with a default binding.

 

12.jpg

 

In the BW system, two tasks need to be done initially:

 

 

  • A Source System (of Web Service type) needs to be created manually to receive the Web service call. This has to be done manually in every BW system in the landscape
  • The BI Content for IdM Reporting must be installed and captured in transports

 

The screen below shows the default setting of the DataSource (which does not need to be changed), including its web service connection.

13.jpg

You should be able to view the execution result of the process chain via transaction RSPC.

14.jpg

 

Troubleshooting Tips:

 

1) One of the most common issues is incorrect host names and port numbers being provided in the VDS. In a large landscape this can get complicated, and the best approach is to work with the Basis team to get these details. Another good way to find the values is to go to transaction SICF, navigate to /sap/bc/srt/rfc/sap/rs_bct_idm_chain_start/<client>/rs_bct_idm_chain_start/default and test the service. This should launch a browser window with an authentication prompt. Note the URL and also check whether you can log in with the correct user.

 

You might get an error message as shown below:

putNextEntry failed storingcn=948 236 30 1 ,o=bwconnector

Exception from Add operation:javax.naming.NamingException: [LDAP: error code 1 - (WS Client Data Source:1:Error while sending IDM data to BW web service. Original error message: (401)Unauthorized)]; remaining name 'cn=765 654 56 ,o=bwconnector'

 

2) Activation of services under /sap/bc/srt/rfc in transaction SICF. Whenever you move to a new system, these services have to be activated manually (Note 1626311).

 

3) Note 1586820 - OutOfMemoryError when transferring data to BW (on MSSQL)

 

4) When you get the error below while running the job, check that you have up-to-date Java, JDBC and JDBC driver versions. I spent nearly a month resolving this issue, simply because I wasted my time looking for the cause elsewhere.

 

E:Exception from Add operation:ToDSADirect.addEntry ... failed with NamingException. (LDAP error: Initialization of LDAP library failed)

Explanation: Waited for LDAP response: timed out after 15000ms.

 

5) Ensure that the BW user executing the Web Service has correct access in backend system (Role: SAP_BC_WEBSERVICE_CONSUMER)

 

By default the VDS does not give a lot of information in the error message. Hence, in the VDS navigate to Configure > Logging > Operation and set the Log Level to ALL or DEBUG. In the Identity Center, if you want more information in the DSE logs, select your dispatcher, go to the Policy tab, set "Log Level" to debug and stack trace to Full trace. Apply the changes, regenerate the dispatcher script and restart the dispatcher.

Training as a prerequisite in IdM


I recently came across a scenario where training requirements needed to be factored into IdM while provisioning roles. Unlike GRC (which uses parameter 2024), IdM does not have anything out of the box to support this. I thought I would share my approach, as it might be helpful for others with the same requirement. I have tried to make it easy for beginners to understand.

 

Employees need to attend training before being granted a particular role. In real life, I don't see this followed religiously at all times. The system should also be capable of supporting exceptions where a person is granted access to a role even without attending the training. The training data, usually held in some other system, has to be fed into IdM.

 

In my scenario, I have a flat file provided as input from the training system, containing the Employee ID, course name and validity dates.

Next, one needs to map the relationships between a training course and a Business Role. I have created a new Privilege for each training course, as shown below, using the naming convention PRIV:TRAINING:<NAME>.

 

31.jpg

 

Create a Business Role as shown below.

32.jpg

 

Under the visibility tab, add the Training Privilege as shown below and set the visibility to “Owner+Members”.

33.jpg

 

Also, make sure that the backend privileges (from the initial load) are grouped and assigned to this Business role.

 

Login as an end user for whom Self-Service is enabled to request a role.

 

34.jpg

The user is allowed to navigate/search for Business Roles and assign them, but is not able to select Privileges. Notice that the user cannot locate the new Business Role “ACCOUNTS_PAYABLE”, as this user does not have the required training privilege.

 

35.jpg

 

This end user can then attend an actual training course, and that data will be fed into the IdM system via a flat file from the training system. If for some reason this user needs access to the business role without attending a formal training, then, based on the approval process in your organization, you could forward the request to your IdM admin, who can manually assign the privilege to the user via the IdM UI, as shown below.

 

36.jpg

After the training privilege has been assigned, this end user should be able to search for and assign the corresponding business role, as shown below.

 

37.jpg

 

This should provision all the privileges attached to the business role to the respective backend systems.

 

There could also be additional requirements where the Training courses have a validity period. Once the Training privilege expires, additional jobs need to be configured to remove the business role from the user.
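Such a cleanup job boils down to filtering assignments on the training-privilege naming convention and the validity date, as in this hedged sketch (the data shapes and function name are mine; ISO date strings are compared lexically, which works for this format).

```javascript
// Hedged sketch of the expiry job described above: find assignments
// whose training privilege has passed its validity date and return
// the business-role assignments that should be removed.
function expiredAssignments(assignments, today) {
  return assignments
    .filter(a => a.privilege.startsWith("PRIV:TRAINING:") && a.validTo < today)
    .map(a => ({ user: a.user, role: a.role }));
}
```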

IDM SQL Basics #1: Queries against the Identity Store


This is part one in a series of posts focused on database queries, troubleshooting and curiosities related to the SAP Identity Management Identity Stores and their contents. It focuses on tables and views as they are in the 7.2 release, but some tips apply to 7.1 implementations as well. Most examples use SQL Server, but I'll try to throw in some Oracle here as well. Do note that this is not an official guide; official docs such as help files, tickets, notes etc. are the number one source. I'm writing this based on experience from support calls, implementations and from working in the IdM development team on the database schema.

 

Feel free to correct me, ask for additional examples and clarifications as I hope to keep this series updated with new information as it appears.

 

Planned entries:

Part #1: Overview of IDStore views, tables, queries & basic examples and some examples of performance impacts

Part #2: How to locate problem queries in your implementation

Part #3: Testing and improving queries on SQL Server, Oracle and DB2

 

Part one starts with the very basics: why should you care? It then focuses on some useful views, some queries that show data from them, and some examples of how to use them.

 

Queries, if they work already why should you care?

 

There are queries everywhere in a typical IdM implementation, so what if a few of them are slow? Why should you spend additional time on custom scripts using a uSelect call? Typically, once a query is written it is ignored until something takes a long time to list in the UI, or times out, or it turns out that a go-live process will not finish in the allotted weekend because an initial load job isn't finishing in time, or entries are not processed or provisioned in time.

 

If you have a conditional task, switch task or scripted uSelect call that takes 200ms to execute, it will at best be able to process 5 entries per second and effectively block whatever workflow events sit behind it. It gets even worse if this needs to be executed for many repositories. So when you're facing go-live, or adding another application to the solution that brings in another 20.000 roles and hundreds of thousands of assignments, this can quickly become a bottleneck you never saw during development and might not see during daily usage of the solution.

 

Views you should familiarize yourself with

 

Knowing where the data is available is crucial, and knowing a little about the data structure is good too. The frequently used views have variations around a base name (in bold) which indicates what kind of data the view contains, and an extension indicating whether they contain active, inactive, or active & inactive values, and whether they are basic/simple views that contain only references for link values or extended views that join in the referenced entries' mskeyvalues or displaynames. Extended information about the views is available in the help file and the training documentation.

 

  • idmv_entry_simple/idmv_entry_simple_all/idmv_entry_simple_inactive
    • Contains one row per entry; useful when you need only MSKEY, MSKEYVALUE, DISPLAYNAME, entry type, entry state or similar
  • idmv_value_basic
    • Contains one row per attribute value per entry, covering only non-reference attribute values
  • idmv_vallink_basic/idmv_vallink_ext and other idmv_vallink_<variations>
    • Similar to the idmv_value views, but contains one row per attribute value per entry plus reference values (MXREF_MX_PRIVILEGE/ROLE, MX_MANAGER etc.)
  • idmv_link_basic_active/idmv_link_ext/idmv_link_simple_active and other idmv_link_<variations>
    • Contains only reference information, such as person to role/privilege/manager assignments.

 

These views do various amounts of joining of data from the tables of the system, the most important being mxi_values, mxi_attributes, mxi_entry and mxi_link. There's rarely any reason to access these tables directly, and they are usually not accessible to the runtime accounts anyway. If you're not interested in the underlying tables, you can jump down to "A few basic IdM SQL Query guidelines" from here.

 

MXI_VALUES 

 

Contains the non-reference values (used by the value & vallink views); some of the more interesting columns are displayed in this picture:

blog_mxi_values_example.png

Support calls show that the two following facts are either not known or quite frequently ignored:

  • AVALUE
    • This is the value of the attribute as entered, case & all intact, and is used when displaying the values
    • It has a maximum length of 2000 characters. Values that are larger than 2000 characters (such as pictures) are stored in the ALONG column
    • This column is usually named AVALUE or MCVALUE in views
    • THE AVALUE COLUMN IS NOT INDEXED
  • SEARCHVALUE
    • This is an uppercased copy of the first 400 characters of the contents in AVALUE
    • This column is usually named SEARCHVALUE or MCSEARCHVALUE in views
    • THE SEARCHVALUE COLUMN IS INDEXED
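Based on the description above, the relationship between the two columns can be sketched as follows (a hedged illustration of the documented behaviour, not the product's actual database logic).

```javascript
// Hedged sketch: SEARCHVALUE is described as an uppercased copy of the
// first 400 characters of AVALUE.
function toSearchValue(avalue) {
  return avalue.substring(0, 400).toUpperCase();
}
```

This is why searching on SEARCHVALUE requires uppercased search terms, and why very long values can only be matched on their first 400 characters.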

 

MXI_ENTRY

 

Contains a single row per entry and is available through the views starting with idmv_entry_. It holds key information such as entry type, entry state, name, display name, id store and change number for easy and quick access. There are quite a few other columns as well, but these are the most commonly used.

blog_mxi_entry_example.png

In the 7.1 schema you had to do multiple joins to get this basic information about an entry, in 7.2 this table is a sort of meta-table for entries which is really useful to be aware of.

 

MXI_ATTRIBUTES

 

This table contains the attribute definitions, and every view that shows an attribute name joins with it. This table usually doesn't cause any problems and you rarely need to access it. Some of the most used/interesting columns:

blog_mxi_attributes_example.png

 

MXI_LINK

 

The MXI_LINK table is another new table in IdM 7.2. This contains all links in the system. A link is any reference between entries such as manager, role/privilege assignments, role/privilege hierarchy assignments etc. Some key columns are shown here:

blog_mxi_link_example.png

It also has interesting support tables & views. MXI_LINK_AUDIT, for instance, contains the full history of any link: when it was initiated, its complete approval history etc. This information is available through the idmv_linkaudit_basic and idmv_linkaudit_ext views. As in IdM 7.1, references also show up as attributes on the user, such as MX_MANAGER, MXREF_MX_PRIVILEGE and MXREF_MX_ROLE, but this is done by views that join the MXI_VALUES and MXI_LINK tables. The MXI_VALUES table itself no longer contains these references.

 

 

A few basic IdM SQL Query guidelines

 

You should keep the following in mind when writing custom queries:

  1. The column named SEARCHVALUE is named as such because it is the column you should search in
  2. Don't use AVALUE = '<something>', or worse, AVALUE like '%<something>%' or AVALUE in (...) in any part of your queries
  3. See 1 & 2 a few more times
  4. Try to use the simplest, most basic view available that gives you the information you need; use the idmv_entry views when possible
  5. When using attributes with a unique constraint there's no need for DISTINCT

 

Examples

 

Here are some common queries against the different views. You should try to use the simplest view possible when creating queries that are used in conditionals/switches or scripts.

 

  • idmv_entry_simple, listing all entries of a specific entrytype
    • select mcMSKEY,mcMskeyValue, mcDisplayName from idmv_entry_simple where mcEntryType='MX_PERSON'

 

  • idmv_vallink_basic, list all attribute values for a specific user with its mcmskey selected from idmv_entry_simple
    • select mcMSKEY,mcAttrName,mcValue from idmv_vallink_basic where mcMSKEY =  (select mcmskey from idmv_entry_simple where mcMSKEYVALUE = 'USER.BLOG.5')


  • idmv_vallink_basic, list all attribute values for a group of users where mcmskeys are selected from idmv_entry_simple
    • select mcMSKEY,mcAttrName,mcValue from idmv_vallink_basic where mcMSKEY in  (select mcmskey from idmv_entry_simple where mcMSKEYVALUE like 'USER.BLOG%')


  • idmv_vallink_basic, list all attribute values for a group of users where mcmskeys are selected from idmv_vallink_basic
    • select mcMSKEY,mcAttrName,mcValue from idmv_vallink_basic where mcMSKEY in (select mcmskey from idmv_vallink_basic where mcAttrName = 'MX_LASTNAME' and mcSearchValue like 'BLOG%')

 

Example performance impact from bad queries

 

With the data structure of IdM it's possible to write queries in many ways and still produce the same, valid results. Part 3 will go into how I did these measurements. A common scenario we see in support is a query that lists all attributes & values for a specific entry, shown as #1; #2 lists a less expensive version, with the difference highlighted in bold:

 

  1. select mcMSKEY,mcAttrName,mcValue from idmv_vallink_basic where mcMSKEY in (select mcmskey from idmv_vallink_basic where mcAttrName = 'MSKEYVALUE' and mcSearchValue like 'USER.BLOG%')
    1. Uses the idmv_vallink_basic view to find entries with an attribute named MSKEYVALUE and a value like USER.BLOG%
    2. idmv_vallink_basic uses the MXI_VALUES table, has hundreds of thousands of rows in my test system
  2. select mcMSKEY,mcAttrName,mcValue from idmv_vallink_basic where mcMSKEY in (select mcmskey from idmv_entry_simple where mcMSKEYVALUE like 'USER.BLOG%')
    1. Uses idmv_entry_simple to look for entries with MSKEYVALUE like USER.BLOG%
    2. Idmv_entry_simple has one row per entry, total 10.000 rows in my test system

Both queries return exactly the same results, but the load on the database server is very different. Even in a small environment with only 10.000 entries, the load on the server for query #2 is ~2% of that of query #1, and that's just by switching to a view representing the MXI_ENTRY table, which is simply much less data and fewer indexes to look through.

 

Really bad example

 

To finish that off, here's a quick comparison of what happens when you break points 1 to 3 of the basic IdM SQL query guidelines, where query 2 is the bad version of query 1:

 

  1. select mcMSKEY,mcAttrName,mcValue from idmv_vallink_basic where mcMSKEY in (select mcmskey from idmv_entry_simple where mcMSKEYVALUE like 'USER.BLOG%')
    1. Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0
    2. Table 'mxi_entry'. Scan count 2, logical reads 4, physical reads 0.
    3. SQL Server Execution Times: CPU time = 15 ms,  elapsed time = 109 ms.
  2. select mcMSKEY,mcAttrName,mcValue from idmv_vallink_basic where mcMSKEY in (select mcmskey from idmv_vallink_basic where mcAttrName = 'MSKEYVALUE' and mcValue like 'USER.BLOG%') -- VIOLATION OF THE BASIC IdM SQL GUIDELINES #1, #2 & #3, DO NOT DO THIS PLEASE
    1. Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0
    2. Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0
    3. Table 'mxi_link'. Scan count 2, logical reads 6, physical reads 0
    4. Table 'MXI_Attributes'. Scan count 1, logical reads 2, physical reads 0
    5. Table 'MXI_VALUES'. Scan count 1, logical reads 1170, physical reads 0
    6. SQL Server Execution Times: CPU time = 16 ms, elapsed time = 299 ms.

 

As mentioned, I will go into more detail about how to perform such comparisons and what these values mean in part 3 of this blog. For now, just assume that large values are bad: you want as much as possible to be scans, and as little as possible to be logical reads.

 

The 299 millisecond elapsed time for statement 2 is not bad when run all by itself, but it is 2.7 times slower than the first query. We can also see that it needs many more logical read operations to complete and has to access additional tables. By comparing execution plans, SQL Server estimates query 2 to be about 100 times more expensive to execute than query 1, and we can assume it would put a much bigger strain on the system under load than the alternative, especially with an increased number of entries.

 

Another advantage of using idmv_entry_simple vs. idmv_vallink or idmv_value is that one entry is guaranteed not to have more than one row, so there's no need to use DISTINCT on the result set.

 

Configuration examples and other useful hints

 

There are many common places where you write your own queries, perhaps without considering their importance, such as the source tabs of jobs, conditional statements, switches and scripts. I'll try to provide some examples here.

 

Job Source Definition, Use Identity Store Option

This first example is a job that adds an additional approver to all privileges from a specific repository BQQ:

blog_job_source_use_identity_store.png

Destination:

blog_job_destination_use_identity_store.png

In this example the "Use Identity Store" option is checked on the source tab. This means that the query only needs to return a list of MSKEYs; the system will automatically retrieve the attribute values that are used in the destination pass with the %<attributename>% syntax from the list of entries in the source pass. So for each mskey in the source, it will retrieve the MSKEYVALUE attribute in this example. Keep this in mind and leave the source statement as simple as possible.

 

You could also do this without the Use Identity Store option by using

select mcmskeyvalue as mskeyvalue from idmv_entry_simple where mcEntryType = 'MX_PRIVILEGE' and mcMSKEYVALUE like 'PRIV:ROLE:BQQ:%'

in the source statement, if the only dynamic value you need to use in the destination tab is the %MSKEYVALUE% attribute. You can of course return all the values yourself if you want to, but then please don't check the "Use Identity Store" option in addition.

 

Conditional task

 

Conditional tasks expect the SQL statement to return 0 or 1, where 0=false and 1=true. COUNT combined with SIGN is useful here, as SIGN returns 1 for any positive value, 0 for 0, and -1 for any negative value.

 

  1. Good: select sign(count(mcMSKEY)) from idmv_entry_simple where mcmskey = %MSKEY% and mcEntryType='MX_PERSON'
  2. Not that good: select sign(count(MSKEY)) from idmv_value_basic where mskey = %MSKEY% and AttrName = 'MX_ENTRYTYPE' and SearchValue = 'MX_PERSON'

 

Again, using the idmv_entry view in query 1 costs about a third of using the idmv_value view in query 2 in my simple environment. SIGN is not really required here, though, as there can only be one row per mskey in MXI_ENTRY, so COUNT would always return 0 or 1 unless something was really broken.

 

Switch task

 

Switch tasks expect a single value to be returned from the query they execute. Below is an example switch on entry type using the idmv_entry_simple view.

blog_switch_example_simple_entrytype.png

SQL SERVER - WITH (NOLOCK)

 

This probably should have its own post so I'll keep it short:

 

If you look through the framework and procedures you will see extensive use of WITH (NOLOCK) when the solution is running on SQL Server. For a complete description and understanding of this table hint I recommend your favourite search engine, but the short version is that your query will not issue shared locks for the rows it accesses, which helps avoid deadlock situations and increases performance. The penalty is that you risk reading data that is rolled back, changed or deleted by other transactions. Most of what you write can use this without risk, but it doesn't hurt to think about what could happen if you get bad data out of your query.

 

If you're configuring a solution that needs to be Oracle/DB2 compatible you can use %NOLOCK% in certain locations (switch tasks & conditional):

SELECT avalue FROM idmv_value_basic %NOLOCK% WHERE attrname='MX_LOCATION' AND is_id = 1 AND mskey=%MSKEY%

The system will then replace %NOLOCK% with an empty string if it detects that it's running on Oracle/DB2, and with WITH (NOLOCK) if it's running on SQL Server.

 

Most of the scripts in the framework use the databasetype system constant to determine if the hint is used or not similar to this:

var nolock = "";
if ("%$ddm.databasetype%" == 1) { // MS-SQL
    nolock = "WITH (NOLOCK)";
}
var sql = "SELECT mcMSKEYVALUE from idmv_entry_simple " + nolock + " where searchvalue like '" + filter + "'";
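One thing worth noting about scripts like this: the filter value is concatenated straight into the SQL string. A minimal precaution (a sketch; the helper name is my own, not a framework function) is to double any single quotes before concatenating:

```javascript
// Double single quotes so a value like O'Brien doesn't break the
// generated SQL statement (hypothetical helper, not part of the framework).
function escapeSqlLiteral(value) {
  return String(value).replace(/'/g, "''");
}

var filter = "O'Brien%";
var sql = "SELECT mcMSKEYVALUE FROM idmv_entry_simple WHERE SearchValue LIKE '"
        + escapeSqlLiteral(filter) + "'";
```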

 

 

That's it for part 1. Comments & corrections (even on spelling) are appreciated!

Customizing Display and Search in IDM - Display


Something that comes up in every IDM implementation is the customization of User Display and Search options. In this Blog entry I’d like to cover customizing Display options.  I will handle the customization of Search Options later this week.

 

To start, let's take a look at an out-of-the-box implementation of IDM. Note the Details section. It's pretty basic; we don't know a whole lot about the user. We see some attributes, but what else can we display about them?

 

It would be nice if we could find out more about this Tony Stark: who is he affiliated with? Is he a good guy or a bad guy? That's the problem with these superheroes, we just don't know. Let's see if we can make some changes to IDM so that we can find out more.

 

Image 001a.png

 

To start with, let’s take a look at the entry type for this type of IDM Object, which in this case is MX_PERSON.  Note that the techniques in this blog will work for most viewable Entry Types with little modification.

 

Image 002a.png

 

What we see immediately is that we can specify both a Display task and a Search task. For this blog we will concentrate on the Display task. Note: if you do not have these tasks in your IDM configuration, SAP thoughtfully includes them with IDM. They can be imported from the drive where IDM is installed, typically \usr\sap\IdM\Identity Center\Templates\Identity Center\SAP Provisioning framework\. The import file is called Sample_Tasks.mcc.

Image 003.png

 

Now we can select the attributes that we want to display.

Image 004a.png

 

And they will be updated in the UI:

 

Image 005a.png

 

What do you know? Tony Stark is a Good Guy with The Avengers!

 

So you see in a few easy steps a Display task can be created and configured to show additional detail about users, and other Entry Types in SAP NetWeaver Identity Management. Next time it will be on to Search Customizations!


Customizing Display and Search in IDM - Search

Continuing my previous discussion of how to configure some display and search options in IDM, let's consider Search. We are all familiar with the basic search functionality which lets you search the Identity Store in general, but what if you wanted to give your IDM users the ability to search on specific attributes?  In this Blog, we will look into how to do just that.
To start, we will need to enable the default search task that is bundled with IDM. We start again by accessing the MX_PERSON Entry Type and selecting the appropriate search task. Again, for these blogs we will work with MX_PERSON, but the techniques will work for other Entry Types.
2-choosing search task.png
Once the Search Task has been chosen, the Entry Type will look something like this:
3-changed entrytype.png
And this is how it will appear in the web-based UI:
1-deault search.png
However, what if we want to be able to search by different attributes? (That was the point of all of this, after all.) This is easily done by selecting the attributes you would like to search on:
4-modified searchtask.png
Once that is done, you can search the Identity Store using the Advanced Search functionality. Note that, as with Basic search, you can use wildcards to ease your search. Note the listing of "Bad guys" here:
5-modified searchtask in ui.png
Or a listing of "Avengers"
5-modified searchtask in ui-2.png
There's still some more to examine here, next time we will take a look at customizing the Search Results themselves!

Customizing Display and Search in IDM - Display Grid


So as I finished writing the first two installments in this blog series, I realized that there was one thing that I had not yet described how to customize, the search grid.  Let's take a look at the default grid when searching for users, as we can see, it's fairly basic.

1-default ui grid.png

As usual, this is configured from the Entry Type being displayed.  As in the case of the previous entries in this series, we are working with MX_PERSON.

2-default mx_person.png

Now what if we want to change things up a bit? Again, it's very simple: just select the attributes that need to be displayed. The Up and Down buttons will help set the attributes in the correct order. Note that in this case, moving an attribute "up" in the listing will result in the attribute being displayed further toward the left in the search grid; "down" moves the attribute to the right.

3-modified mx_person.png

Here is our display. Note we can sort the results by clicking on the column headers. Here we are sorting by Organization (MX_DEPARTMENT):

4a-modified MX_PERSON sorting on organization (MX_DEPARTMENT).png

Here's another listing sorted by description.

4b-modified MX_PERSON sortign on description.png

Hopefully this short series will help in the customization of your SAP IDM display. I just noticed that IDM 7.2 SP8 was released today, which should bring about some new UI elements. Hopefully I'll be discussing that soon.

Repair failed/stuck pending assignments


Hello everyone,

 

   

I would like to share my experience with repairing stuck or failed assignments in SAP IDM using the new stored procedures that are going to be available with IDM 7.2 SP8. For pre-SP8 versions of IDM 7.2, the following database objects have to be created manually.

 

 

Table: mc_problem_analysis; View: idmv_problem_analysis

 

Stored Procedures: mc_analyze_assignments, mc_repair_assignments.

 

 

Firstly, let me explain the problem I faced. I created a role with a number of member privileges; whenever it was assigned to a user in IDM, the assignment always went to Failed status. I tried to solve the issue but was not able to fix it. I then approached SAP, from where I got these database objects, which I added manually (since I am on IDM 7.2 SP7). In this blog I want to show you the steps I followed to resolve the issue.

 

Firstly, I created the objects (table, view & stored procedures) mentioned above. The table mc_problem_analysis stores the records of the assignments that are in failed or pending status, and idmv_problem_analysis is the corresponding view.

 

The stored procedure mc_analyze_assignments is the one which analyzes the identity store, identifies the assignments with failed or pending status and pushes the information to the mc_problem_analysis table. It expects a usermskey parameter. If the parameter value is 0, it performs the analysis for all the users in the identity store; if the analysis is to be done for a particular user, the mskey of that user has to be provided. After the analysis is done, look into the table mc_problem_analysis. The column mcCategory helps to understand the current status of the assignment, as shown below.

 

if mcCategory is 1 - Stuck pending
                 2 - Failed
                 3 - Rejected
                 4 - Wait for master privilege
                 5 - Stuck provisioning task

 

This helps to understand why the assignment is not successful. The value in the column mcSolutionStrategy suggests a solution to resolve the issue. The following are the possible values for mcSolutionStrategy:

 

if mcSolutionStrategy is 1 - Retry without provisioning
                         2 - Retry with provisioning
                         3 - Assign master privilege
                         4 - Clean up
                         5 - Finalize audit
                         6 - Delete from provisioning queue
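When scripting around this table, a small lookup helper keeps the numeric codes readable (a sketch; the helper is my own, not part of the delivered procedures):

```javascript
// Map the numeric mcCategory / mcSolutionStrategy codes from
// mc_problem_analysis to the readable meanings listed above.
var CATEGORY = {
  1: "Stuck pending",
  2: "Failed",
  3: "Rejected",
  4: "Wait for master privilege",
  5: "Stuck provisioning task"
};
var STRATEGY = {
  1: "Retry without provisioning",
  2: "Retry with provisioning",
  3: "Assign master privilege",
  4: "Clean up",
  5: "Finalize audit",
  6: "Delete from provisioning queue"
};
function describeProblem(category, strategy) {
  return (CATEGORY[category] || "Unknown") + " -> " + (STRATEGY[strategy] || "Unknown");
}
```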

 

Once the assignment that has to be processed is identified, use the stored procedure mc_repair_assignments, which expects the parameters usermskey, repositories and operation. If usermskey is 0, it repairs all the assignments; if you want to repair assignments for a particular user, provide the mskey of that user. As of now the value of repositories is 0, as it is intended for future use. The value of the operation parameter can be any one of the mcSolutionStrategy values mentioned above; mostly it will be 1, 2, 3 or 4.
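On SQL Server, the calls described above might look like this (a sketch; exact parameter signatures can differ between SP levels):

```sql
-- Analyze all users (0 = everyone), then inspect the findings:
EXEC mc_analyze_assignments 0;
SELECT * FROM idmv_problem_analysis;

-- Clean up (operation 4) the assignments of the user with mskey 12345:
EXEC mc_repair_assignments 12345, 0, 4;
```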

 

   

In my case, when I tried to fix the issue with the above steps, it didn't help initially: before trying this approach, I had de-assigned and re-assigned the same role to the same user a couple of times, expecting it to be successful, but it failed each time. Because of that, the SP mc_repair_assignments was not able to fix the issue for me.

 

   

I then cleaned up the assignments using mc_repair_assignments with parameter values 0, 0, 4 respectively. After that I assigned the same role to the user, and it again went to Failed status. Then I cleared the mc_problem_analysis table and executed mc_analyze_assignments, which pushed the assignments that are in pending or failed status to the table.

 

After that, I executed the repair assignments stored procedure with the appropriate operation value for that particular user, and it resolved the issue.

 

     

Thanks,

Krishna.

SAPUI5 and ID Mgmt - A Perfect Combination

$
0
0

For those who do not know: "SAPUI5 is our HTML5 controls library, which SAP is using as the standard User Interface control library in all their future applications that need a 'consumer grade' user experience, whether it is on desktop, tablet or smartphone." See also the blog Is this cool or what???

 

With SP 8 for SAP NetWeaver ID Mgmt 7.2, it is now possible to use (besides the already existing Web Dynpro UIs) the new SAPUI5-based end user UIs. They are delivered out of the box and are based on a new OData REST API.

 

They include, among other features, the possibility to:

  • Maintain your own data like master data, security questions, or even upload a profile picture.
  • Request new or extend existing role assignments.
  • Approve role assignments (for Managers or Role Owners).

 

The installation guide can be found on SAP Help. Below, you can find some screenshots.

 

Read also in my blogs how to Write your own UIs using the ID Mgmt REST API and Adapt the ID Mgmt Web Dynpro UIs to your Enterprise Portal Theme.

 

 

Maintain your own data:

08-07-2013 15-24-10.png

08-07-2013 15-25-25.png

 

Request new roles:

08-07-2013 15-31-50.png

08-07-2013 15-33-19.png

 

Extend role assignments:

08-07-2013 15-33-56.png

08-07-2013 15-34-58.png

 

Approval UI:

08-07-2013 15-43-59.png

SP 8 for SAP NetWeaver ID Mgmt 7.2 Now Available


We just released the latest Support Package for SAP NetWeaver Identity Management 7.2.
SP 8 contains a number of enhancements and new features, including new end user interfaces for HTML 5, the new logon help functionality, a system copy function, enhancements for approvals, and an updated GRC provisioning framework.

 

How to do mass population of a Business Roles with privileges using txt file

  1. You will need a txt file like the one below, with the Business Roles in the first column and the privileges that will be added to each Business Role in the second column.

1.png

 

 

     2. After you have created the txt file in the Job folder, create a job to fill a temporary table with the data from the file.

2.png

  • The first pass will read the data from the txt file and store it in a temporary table
  • The second pass will read the data from the temporary table and add the privileges to the Business Roles

 

     3. The pass "Mass assign of privileges to Business Role" will look like this:

  • The source tab will contain a simple select that returns Business Roles and Privileges
  • The destination tab will contain:

2.png
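As a sketch, assuming a hypothetical temporary table Z_BR_UPLOAD with columns BUSINESS_ROLE and PRIVILEGE, the two tabs of the second pass might contain:

```sql
-- Source tab: one row per (business role, privilege) pair
SELECT BUSINESS_ROLE, PRIVILEGE FROM Z_BR_UPLOAD

-- Destination tab (ToIdentityStore pass, IdM's add-reference notation):
--   MSKEYVALUE:          %BUSINESS_ROLE%
--   MXREF_MX_PRIVILEGE:  {A}<%PRIVILEGE%>
```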

SAP IDM - How to provide access based on privilege.


How to provide access based on privilege.

 

In many real-life scenarios we need to provide access to a person based on his access in a certain system.

But how should we do that? Is there a simple way to accomplish this goal?

The answer is “Yes, there is.” And here is how I’ve done it.

 

First, let's assume we have the following systems: SYSA and SYSB.

We also have one privilege belonging to SYSA: PRIV:SYSA:TEST_PRIV

We have one business role in SYSB: ROLE:SYSB:TEST_ROLE

And finally we have a person: TEST_PERSON

 

Create a new task: Attach BR in SYSB.

Add ToIdentityStore pass.

In the Source tab, clear the "Retrieve attributes from pending value" flag.
Set the Destination tab as shown in the following screenshot.

Capture2.PNG
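Sketched in the destination-tab notation IdM uses for adding references (the {A}<...> syntax; treat this as my assumption of what the screenshot shows, not a verified copy):

```
MSKEYVALUE:     %MSKEYVALUE%
MXREF_MX_ROLE:  {A}<ROLE:SYSB:TEST_ROLE>
```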

 

Now go to the privilege list in your idstore.

Capture.PNG

 

Select the privilege PRIV:SYSA:TEST_PRIV and open its properties.

Select the Task tab, and in the Provisioning task field link the already prepared task: Attach BR in SYSB

Capture1.PNG

 

And the result is: when the privilege PRIV:SYSA:TEST_PRIV is successfully attached to the person TEST_PERSON, this triggers the attachment of ROLE:SYSB:TEST_ROLE to the same person. If for any reason the attachment of the privilege fails, then the business role won't be attached to the person.

 

So we've achieved our goal, but there is one thing you should be aware of. The role ROLE:SYSB:TEST_ROLE should not contain PRIV:SYSA:TEST_PRIV itself, otherwise you might end up in an endless loop.

 

Of course you can use the same approach when removing the privilege from a person; this way, if you remove the privilege, the business role will also be removed.

 

 

Best regards,

Ivan


unable to find valid certification path to requested target


Two months ago my team moved to the Security organization and took over responsibility for the Identity Management / SSO demo landscape. This is an integrated solution intended to help sales/presales people better present products and integration to customers.

 

 

The live demo shows typical identity lifecycle use cases in a heterogeneous system landscape using SAP NetWeaver Identity Management (IDM).

 

 

Using the example of the BestRun Demo Company, it presents how SAP NetWeaver IDM manages the identity lifecycle. The visual below illustrates the demo system landscape:

Untitled.png
 

 

The demo script itself covers five main use cases for Identity Management and integration with other SAP and non-SAP products. The demo is part of the Solution Experience project, which covers the most commonly used scenarios built on SAP software.

 

 

Our first main goal was to upgrade Identity Management to the latest released version, 7.2 SP8, and implement newly developed features.

 

 

One of the challenges we faced was the lack of a newly issued SSL certificate for the Active Directory server used by Identity Management.

 

 

The use case was the hiring of a new employee in the HR system (represented by SAP ERP HCM) and the export to Identity Management. Identity Management then takes care of provisioning the needed authorizations to the newly created user and of creating users in Active Directory and the other systems used in the scenario.

 

 

During the provisioning procedure we were blocked by the following error message, logged in the job log of Identity Management.

 

 

PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target.

 

 

 

 

 

The whole stack trace was:

 

 

ToDSADirect.init got exception, returning false.
- URL:ldap://<host>:<port>
javax.naming.CommunicationException: cldvmxwi00041:636 [Root exception is javax.net.ssl.SSLHandshakeException:
sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target].

 

Thoughts, analysis, testing and work:

The first thing that came to my mind was to have a look at CSS and check whether the issue had already been reported, and indeed I found customer ticket 674366 2012 in the CSS system. Unfortunately the root cause was not found there, but there was some advice for further investigation which I found useful.

 

 

 

Solution:

1. Ensure that you have downloaded the latest issued server certificate from Active Directory. In case you do not have RDP access, you can download it using OpenSSL.

2. Increase the stack trace level of the dispatcher in Identity Management: Dispatcher -> Policy -> Java Runtime engine -> Log level = Debug.

3. Reproduce the issue, go to the job log and identify the Java home.

4. Import the certificate into the correct keystore as described in section 7.4.1 of the SAP NetWeaver Identity Management security guide.
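For steps 1 and 4, the commands might look like this (a sketch; the host, alias and file names are examples, and the truststore path depends on the Java home identified in step 3):

```sh
# Step 1: fetch the AD server certificate over LDAPS with OpenSSL
openssl s_client -connect <host>:636 -showcerts </dev/null \
  | openssl x509 -outform PEM > ad-server.cer

# Step 4: import it into the JVM truststore used by the dispatcher
keytool -importcert -alias ad-server -file ad-server.cer \
  -keystore "<java-home>/jre/lib/security/cacerts" -storepass changeit
```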

Testing Dispatchers


I like indicators. Back in the day, I was a huge fan of external modems (I think I just dated myself). I liked hearing the "negotiation" sounds when a connection was being made and then seeing the little lights blink as data "flew" over the wire.

 

That preference has never changed. Any time I can see raw data, I prefer it. Fortunately, IDM gives us the ability to see the engine at work through the dispatcher's Test mode, which I've discussed before.

 

Dispatchers can always be placed into Test mode, even if they are set up as services. Just make sure that you stop the service first.

test btn.png

 

Once you do this, a command shell window (or as we said way back when, a "DOS" window) will appear, looking something like this:

 

basic disp.png

 

Don't worry that the display doesn't change; that's normal. It will change soon enough. The following screenshot shows what happens when a job is run:

disp in action.png

So you can see that things are definitely happening now! To see more or less detail, adjust the Log level in the Dispatcher properties:

 

debug options.png

Enjoy testing your Dispatchers!

Views in IDM 7.1 and 7.2


Does anyone have any useful links to websites which list the views available in IDM 7.1 and 7.2, with reference to the fields and what they are used for?

Preventing Privileges being removed after an Initial Load


This blog is relevant to Version 7.2 SP7.

 

 

It may be valid for other versions as well, but you'll need to investigate that yourselves.

 

 

The Issue: When users are synchronised to ABAP they lose their original roles and profiles

 

You will probably find that, after the initial load, everything looks fine.  However, when you first synchronise a user back to the ABAP (or ABAP Business Suite) system, they lose all their roles and profiles.  Then comes the wailing and gnashing of teeth.  If you were careful, it's closely followed by the restoration of your user store and a trip back to the drawing board.  This has been a problem for a while and people have come up with a variety of ways around it.  Below  is mine.

 

The Cause: sap_abap_getNameOfAssignedPendingPrivileges

 

The issue is this script. If you go through it (as I did), it basically comes down to an SQL statement that retrieves the privileges that should be assigned to the back-end system.  It works fine in all cases EXCEPT when dealing with 'initial load' privileges.  The offending SQL is:

SELECT DISTINCT privilegename.mcMSKEYVALUE
FROM idmv_value_basic_all repositorynames
INNER JOIN idmv_value_basic_all privilegetype ON privilegetype.mskey = repositorynames.mskey
INNER JOIN idmv_entry_simple privilegename ON privilegename.mcMSKEY = repositorynames.mskey
INNER JOIN mxi_link assignment ON assignment.mcOtherMskey = repositorynames.mskey
WHERE assignment.mcThisMskey = <mskey>
AND assignment.mcLinkType = 2 AND assignment.mcLinkState IN (0, 1) AND assignment.mcExecState IN (1, 512, 513)
AND assignment.mcAddAudit IS NOT NULL AND (assignment.mcAddAudit > assignment.mcValidateAddAudit or assignment.mcValidateAddAudit IS NULL)
AND repositorynames.attrname = 'MX_REPOSITORYNAME' AND repositorynames.SearchValue = '<repositoryName>'
AND privilegetype.attrname = 'MX_PRIVILEGE_TYPE' AND privilegetype.SearchValue = '<privilegeType>';

 

Simple.

 

 

 

However, during the initial load, the following situation occurs:

 

Roles:

A role is imported with mcAddAudit of -1 and mcValidateAddAudit of -1.

This means that the role will never satisfy the SQL criterion: assignment.mcAddAudit > assignment.mcValidateAddAudit

Profiles

 

Profiles have a different issue. They are imported with mcAddAudit = NULL and mcValidateAddAudit = NULL. You can see the problem here immediately: assignment.mcAddAudit IS NOT NULL

 

Not going to get past that gate any time soon.

 

The Solution: Modifying Initial Load and updating sap_abap_getNameOfAssignedPendingPrivileges

 

 

Roles

 

 

The first problem, roles is easy enough to fix by changing the SQL in sap_abap_getNameOfAssignedPendingPrivileges.  You simply need to change:

 

 

assignment.mcAddAudit > assignment.mcValidateAddAudit 

to

assignment.mcAddAudit >= assignment.mcValidateAddAudit 

Given that you'd be struggling to have an assignment with the same add audit value as validate audit value, this should cause no issues. It allows the -1 = -1 case to pass the SQL, and therefore roles will no longer be removed from the back end. Don't do it yet - there are more changes to come.

 

 

Profiles

 

Profiles are a little more tricky.

 

 

I got around it by setting the process info.

 

 

In the Initial Load job for ABAP or ABAP Business Suite, locate the pass:  WriteABAPUsersProfilePrivilegeAssigments

 

 

Change the Destination to include a process info tag:

 

 

MXREF_MX_PRIVILEGE {A}{ProcessInfo=InitialLoad}<PRIV:PROFILE:%$rep.$NAME%:%profileAssignments%>

 

ProfileAssign.png

Now we need to modify the script to cater for this. I broke the original single SQL assignment up to make it easier on myself. It's included below:

 

var sql = "SELECT DISTINCT privilegename.mcMSKEYVALUE \
FROM idmv_value_basic_all repositorynames " + nolock + " \
INNER JOIN idmv_value_basic_all privilegetype " + nolock + " ON privilegetype.mskey = repositorynames.mskey \
INNER JOIN idmv_entry_simple privilegename " + nolock + " ON privilegename.mcMSKEY = repositorynames.mskey \
INNER JOIN mxi_link assignment " + nolock + " ON assignment.mcOtherMskey = repositorynames.mskey \
WHERE assignment.mcThisMskey = " + mskey + " \
AND assignment.mcLinkType = 2 AND assignment.mcLinkState IN (0, 1) AND assignment.mcExecState IN (1, 512, 513) ";

 

 

sql = sql + "AND ((assignment.mcAddAudit IS NOT NULL AND (assignment.mcAddAudit >= assignment.mcValidateAddAudit OR assignment.mcValidateAddAudit IS NULL)) or (assignment.mcAddAudit IS NULL AND assignment.mcProcessInfo = 'InitialLoad')) ";

 

 

sql = sql + "AND repositorynames.attrname = 'MX_REPOSITORYNAME' AND repositorynames.SearchValue = '" + repositoryName + "' \
AND privilegetype.attrname = 'MX_PRIVILEGE_TYPE'  AND privilegetype.SearchValue = '" + privilegeType + "'";

This will ensure that it finds any profile that has the process info set to 'InitialLoad' when mcAddAudit is NULL. Once the privilege has been 'touched' by the identity store, it will get an mcAddAudit set, and the original script logic works again.

 

There's a whole reason why you can't just search for mcAddAudit IS NULL (it could be waiting on a task or approval), which means you need to set some other identifier to make sure that your initial profiles are kept.

 

There's probably a number of other ways of doing it.  This works for me and has been tested successfully.

 

Hope it helps.

 

 

Update: Bulk uploads and Reconcile - the next adventure.

 

So it appears that bulk uploads of profile assignments have the same problem. My fix disappears as soon as the first Reconcile process comes through and sets mcProcessInfo to 'Reconcile'. You only need to do this if you bulk upload profile assignments; doing them through the UI works fine.

 

My new sql line is:

 

 

 

 

sql = sql + "AND ((assignment.mcAddAudit IS NOT NULL AND (assignment.mcAddAudit >= assignment.mcValidateAddAudit OR assignment.mcValidateAddAudit IS NULL)) or (assignment.mcAddAudit IS NULL AND (assignment.mcProcessInfo = 'InitialLoad' OR  assignment.mcProcessInfo = 'Reconcile')))";

How To Assemble Flexible Business Roles



 

Assembling business roles is not a simple task. There are many things to consider in business role design.

 

What is the purpose of the business role?

Why should we use it instead of assigning privileges directly in all the systems?

 

Well, the answer is pretty obvious here. The business role gives us the opportunity to manage any set of directly assigned privileges, regardless of the system, as a single object. Handling a single role is much easier than handling hundreds of privileges in different systems.

 

OK, but then someone may say, "Yes, but aren't we losing flexibility this way?"

 

And, yes, he might be right, but only if we didn't build our business roles in a proper manner.

 

OK, but how can we keep a small number of roles and still achieve flexibility?

 

Well this is the question I'll try to answer with this blog.

 

There are basically two ways to have a small number of business roles and still keep them flexible enough.

 

Using business role inheritance

This method is suitable for companies where the persons may be separated into several areas with several subareas, where the subareas inherit all access rights of all parent areas. For an example, see the following picture.

Capture.PNG

As we can see, each position inherits access rights from the lower one. In this case we can assemble the following business roles.

Capture1.PNG

This way the MANAGER BUSINESS ROLE inherits all assigned privileges from the STAFF BUSINESS ROLE, and the BOSS BUSINESS ROLE inherits all privileges from both the MANAGER BUSINESS ROLE and the STAFF BUSINESS ROLE.

 

As a result we may now assign only one business role per position.

Capture2.PNG

This is much easier than assigning 3 separate business roles to the BOSS position if inheritance were omitted, and it is definitely easier than assigning all 7 privileges directly if business roles were omitted.

 

Of course this is a very simple scenario; in real life you will have more than one basic business role, and each business role might inherit from more than one of its kind, but I'll stay with simple scenarios for the sake of easy understanding.

Using contexts

This method is suitable for companies that do not fit the previous inheritance method, but whose access rights can be separated into groups based on some keys (i.e. position, country, location, etc.).

Capture.PNG

As you can see, here we have a completely different situation. There is no inheritance of privileges between positions, so the question is:

 

How many business roles will we have in this case?

 

Well, the answer is 3 business roles, and this is not a typo. Let's see how this might be accomplished.

Capture1.PNG

As you can see, there are 3 business roles, and each contains all the privileges that a person on the corresponding position (STAFF, for example) might have.

But the question here is:

 

How will we assign exactly, and only, the privileges that the person on the STAFF position in the US PLANT needs?

 

Well, the position (STAFF), country (US) and location (PLANT) are actually attributes of MX_PERSON, so we have this information for all the persons we should provide access rights for. Once we have that information, the assignment is easy.

Capture2.PNG

According to the person's attribute (position), the business role is selected and attached to the person with contexts derived from the person's attributes (country and location). As a result, only privileges that match all these conditions will be attached to the person.
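The selection logic described above can be sketched in code (the attribute and role names here are illustrative assumptions, not the real configuration):

```javascript
// Pick the business role from the person's position and derive the
// assignment context from country and location (names are examples).
function buildAssignment(person) {
  return {
    role: "BR_" + person.MX_POSITION,                 // e.g. BR_STAFF
    context: [person.MX_COUNTRY, person.MX_LOCATION]  // e.g. ["US", "PLANT"]
  };
}
```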

 

In conclusion, it is also possible to use a mixture of both methods if you have a complicated case that fits, but these are basically the most common scenarios for building a small number of flexible business roles.

 

Hope I've been helpful, but if you have any other suggestions or questions, please feel free to post them here.

 

Best regards,

Ivan
