
Monday, June 5, 2017

Auditing JDE: Column for Scheduler Server Status

In JDE the scheduler server is identified by the control record in the SYSXXX.F91300 table with a *SCHEDULER value in its SJSCHJBMN column.

The status of the scheduler server (active or inactive) is stored in the SJSCHCTCD01 column.

SJSCHCTCD01 holds 000 if the scheduler server is down.

It can have the following values if the server is up and running:

111 - neither the job launcher nor the job monitor is paused

011 - the job launcher is paused but the job monitor is not paused

101 - the job launcher is not paused but the job monitor is paused

001 - both the job launcher and the job monitor are paused
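Read as flags, the digits decode mechanically. Here is a minimal sketch in Python (the helper name is mine, and it assumes the column value arrives as a three-character string):

```python
def scheduler_status(sjschctcd01: str) -> str:
    """Decode the F91300 SJSCHCTCD01 value for the *SCHEDULER record.

    '000' means the scheduler server is down. When the server is up,
    the first digit is '0' if the job launcher is paused and the second
    digit is '0' if the job monitor is paused (per the table above).
    """
    if sjschctcd01 == "000":
        return "down"
    launcher = "paused" if sjschctcd01[0] == "0" else "running"
    monitor = "paused" if sjschctcd01[1] == "0" else "running"
    return f"up (launcher {launcher}, monitor {monitor})"

assert scheduler_status("111") == "up (launcher running, monitor running)"
assert scheduler_status("001") == "up (launcher paused, monitor paused)"
```

Handy for a monitoring script that polls the SYSXXX.F91300 control record.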

Thursday, December 8, 2016

Auditing JDE: Correcting Production Data and Staying SOX Compliant

To err is human. To update the wrong data in the wrong field at the wrong time is the hallmark of every business analyst's career. More often than not, the responsibility for correcting this data falls on the IT team, and for that they need UPDATE ACCESS TO PRODUCTION!

This brings us to the nightmare scenario of a developer who may need update access in production to make fixes as and when needed. Even though the scenario is legitimate and the data genuinely needs to be updated, a SOX auditor will never sign off on a developer having standing update access to production data. To keep the auditor happy and maintain their faith in the integrity of your financial data, you need to design a process that not only gets the job done but also keeps you SOX compliant.

Let's examine one such process:

  1. For this to work you need three teams: the Help Desk, the DBAs, and application support.
  2. Have the DBA team create a functional account on the database with update access and a self-expiring password: once generated, the password expires in four hours.
  3. Add every support team member who may update production data to a list that records answers to a few secret questions personalized for each member.
  4. Share this list with the Help Desk. They should be trained to challenge the caller with a secret question and verify the response against the list before forwarding the password request to the DBA team.
  5. Once a data issue is identified, the business analyst raises a high-priority incident in the Incident Management System.
  6. The incident is assigned to the app support team, who raise an Emergency Preventive Change Order to update the data, citing the incident just created by the business analyst.
  7. Once the CO is raised, the app support team member calls the Help Desk to get the password for the functional account.
  8. The Help Desk coordinates with the DBA team to generate the password and shares it with the app support team member.
  9. With the password in hand, the app support team member updates the data requested in the INC, closes the INC and the CO, and leaves the password to expire.
  10. This maintains a chain of control over the account used for the update and ensures the account was used only for its intended purpose.
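The self-expiring credential in step 2 can be modeled in a few lines. This is a sketch only; the function names and the in-memory handling are mine, not any JDE or database API, and a real DBA team would enforce the expiry at the database level:

```python
import secrets
from datetime import datetime, timedelta

EXPIRY = timedelta(hours=4)  # per the control: the password dies after four hours

def issue_password(now: datetime) -> tuple[str, datetime]:
    """Generate a one-off password and the instant it stops being valid."""
    return secrets.token_urlsafe(12), now + EXPIRY

def is_valid(expires_at: datetime, now: datetime) -> bool:
    """A password is usable strictly before its expiry instant."""
    return now < expires_at

issued = datetime(2016, 12, 8, 9, 0)
pwd, expires_at = issue_password(issued)
assert is_valid(expires_at, issued + timedelta(hours=3))      # inside the window
assert not is_valid(expires_at, issued + timedelta(hours=5))  # expired; no revocation step needed
```

The point of the design is the last comment: nobody has to remember to lock the account afterwards.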

This 10-step plan not only maintains data integrity but also documents each update, commonly referred to as a back-end update to the database.











Wednesday, December 7, 2016

Auditing JDE : Let's look at the Users

I was recently asked by a peer about F00925 and its value in a JDE audit. The specific question was about a particular column on it, and why and when that column gets populated.

When I answered the question for him, it occurred to me that maybe this is something a lot of auditors and CNCs alike would like to know about.

A typical Test of Design document for JDE talks about getting extracts from the following tables to get to know about the User population of a particular JDE setup:

F0092 - Library Lists - User
F0101 - Address Book
F98OWSEC - One World Security
F00950 - Security WorkBench Table

The general idea is that if you have records in the user profile table (F0092) and can match them with the Address Book (F0101), you can judge the access a user has from the roles in the Security Workbench table (F00950). F98OWSEC is used to judge whether the user is active, when security was last updated, and the password-change frequency set in the system.

In theory, the above practice gives you enough opportunity to spot any unscrupulous access the system may have granted to a given user or role. In practice, JDE is far too nuanced a system to be judged on just four tables. They may lead you to the particular set of fields in a given table a user may or may not have access to, but they cannot tell you everything a given user can see.
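A sketch of how those extracts can be cross-checked once dumped to lists of dicts. The column names below (ULUSER/ULAN8, ABAN8/ABALPH, SCUSER/SCENBL) and the sample rows are assumptions for illustration; verify them against your data dictionary before relying on this:

```python
# Cross-check user extracts: every user profile should map to an
# address-book record, and the security extract says whether it is enabled.
# Column names are assumptions, not guaranteed JDE data-dictionary names.

f0092 = [{"ULUSER": "JDOE", "ULAN8": 1001}, {"ULUSER": "GHOST", "ULAN8": 9999}]
f0101 = [{"ABAN8": 1001, "ABALPH": "Doe, Jane"}]
f98owsec = [{"SCUSER": "JDOE", "SCENBL": "Y"}]

address_book = {row["ABAN8"]: row for row in f0101}   # index by address number
security = {row["SCUSER"]: row for row in f98owsec}   # index by user ID

findings = []
for profile in f0092:
    ab = address_book.get(profile["ULAN8"])
    sec = security.get(profile["ULUSER"])
    if ab is None:
        findings.append((profile["ULUSER"], "no address-book record"))
    elif sec is None or sec["SCENBL"] != "Y":
        findings.append((profile["ULUSER"], "profile exists but not enabled"))

print(findings)  # flags GHOST: a profile with no matching address-book entry
```

Exceptions like GHOST above are exactly what an auditor wants surfaced first.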

For that level of knowledge, you need an idea of the normalized tables in the Address Book as well as the control tables involved in its setup. The default Address Book application provides 30 category codes. These codes can be altered via the UDC application to point to various aspects of the business, such as profit centers, customer type, and so on. For all of this to make sense, the following tables need to be extracted from the JDE setup:

F00924 - User Install Packages
F00922 - User Display Preferences Tag File
F00921 - User Display Preferences
F0093 - Library List Control
F98OWSEC - One World Security
F00950  - Security Workbench Table
F00925 - User Access Definition
F95921 - Role Relationships Table
F00926 - Anonymous User Access Table
F9005 - Variant Description
F9006 - Variant Detail

F0092 - Library Lists - User
F0005 - User Defined Codes
F0004 - User Defined Code Type

Armed with the above tables you can now determine every nook and corner a user of interest may have access to in the JDE system. It still will not tell you whether the user has the power to change the statuses of projects and the like, but that's a topic for a different blog post :)











Monday, August 11, 2014

TAMTOOL: Slow Checkin and Get in OMW

One of the major issues with OMW use over a period of time is deteriorating performance of the Check-in and Get operations.

I have seen systems where the screen almost freezes white when you try to do a Get.

This is caused by empty space in the spec files, which accumulates through the constant adding and deleting of data. These files can be defragmented with TAMTOOL.exe to eliminate the crawling check-in and get.





To get an idea of the amount of free space that's there, just run the following from the command prompt :

c:\b7\system\bin32>tamtool -countfreenodes -a -p c:\b7\dv7334\spec\



If there are a lot of free nodes in the spec files, the following will eliminate them:

c:\b7\system\bin32> tamtool.exe -clone c:\b7\DV7334\spec\gbrspec.xdb c:\b7\DV7334\spec\gbrspec.ddb c:\b7\DV7334\gbrspec.xdb c:\b7\DV7334\gbrspec.ddb -copydata


This should eliminate the performance issues related to Get and Checkin being slow.

I have only had to do this once in my last 8 years as a CNC, and only on ERP 8. I will update this post when I get a chance to try it on the newer versions.


Monday, January 14, 2013

JDE Scheduler: Delete Jobs

One of the most common questions asked in JDE forums is why can't we delete jobs from the JDE Scheduler? There is a delete button out there that only throws an error message when pressed!

Well things are not always as simple as they look. And if they were, we poor CNC blokes would have appeared less crazy than we do :)

Here's what actually happens. Each scheduler job has two records, stored in the header (F91300) and detail (F91320) tables. If you could delete a job from the header, you would run the risk of leaving a detail record with no header, one that can never be launched. JDE prevents this by throwing the error message you see when you try to delete a scheduled job directly from the P91300 application.

The correct way to delete these jobs is to run R91300B and then delete the records using P91300. R91300B does not clear the header, so the audit trail is preserved; if you want to clear the headers too, feel free to do so through the application.
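A quick way to see whether any orphaned detail records already exist is a NOT IN query across the two tables. Sketched here against an in-memory SQLite stand-in for F91300/F91320; only the key columns are modeled, and the column names SJSCHJBNM/JSSCHJBNM are assumptions to check against your data dictionary:

```python
import sqlite3

# Minimal stand-ins for the scheduler header (F91300) and detail (F91320).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE F91300 (SJSCHJBNM TEXT PRIMARY KEY)")
con.execute("CREATE TABLE F91320 (JSSCHJBNM TEXT, JSSEQ INTEGER)")
con.executemany("INSERT INTO F91300 VALUES (?)",
                [("NIGHTLY_GL",), ("WEEKLY_AP",)])
con.executemany("INSERT INTO F91320 VALUES (?, ?)",
                [("NIGHTLY_GL", 1), ("ORPHAN_JOB", 2)])

# Detail rows whose header is gone: they can never be launched again.
orphans = [row[0] for row in con.execute(
    "SELECT JSSCHJBNM FROM F91320 "
    "WHERE JSSCHJBNM NOT IN (SELECT SJSCHJBNM FROM F91300)")]
print(orphans)
```

Run the equivalent query against your real SYSXXX schema before and after a cleanup to confirm nothing was left dangling.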




Thursday, June 7, 2012

JDE Package Build: Pro Tip

Build your production full packages on the QA or DEV enterprise server. Yes, you read that right! And no, I am not smoking anything suspicious.

Seriously, there is more than one benefit to doing this. Before I list the benefits, let's look at how to set this up. You already know that all it takes for the JDE system to recognize a server or machine is a record in a certain table somewhere. This is no different! Let's see which tables we need here.

When we assemble a package, we select the server to build it on from the entries in the F98611 (Data Source Master) table, so that's where we need to make the change. There are two Data Source Master tables, one in the Server Map and one in the System data source. We make the change in F98611 in the System data source, because that is where the system looks for server information.

If all your environments share the same SYSXXX, the entries for all your enterprise servers should already exist there and nothing needs configuring. But if your SYSXXX for production is separate, you will need to insert the QA server's record into it. This is what you insert into the SYSXXX.F98611 table:

-- copy the QA server's record from the system data source where it already
-- exists into production's system data source (substitute your own schema
-- names for sysPROD/sysQA, and your QA server name for NEWSERVERNAME)
insert into sysPROD.f98611
select omenhv, 'NEWSERVERNAME' omdatp, 'newservername' omsrvr, omdatb, omoown, omdllname, omll, omlib, omomui, omomto, omomds, omomjd, omomcc, omdstp, ompid, omdatuse, omuser, omocm1, omjobn, omocm2, omupmj, omocm3, omupmt, omocma, omocmb, omocmc, omocmdsc
from sysQA.f98611
where omdatp = 'NEWSERVERNAME' and omsrvr = 'newservername';

Insert, commit, and restart your EOne services for the change to take effect. Until you restart the services, the entry will not show up on the server selection screen in the Package Assembly application. No points for guessing why we need to bounce the services for the entry to show up: it's a bootstrap table.

Now you can select your QA server when you are building a PD full package and the build happily runs on the QA Enterprise server. Yaay!

But wait! What happens after the build completes? You cannot deploy a package built on the QA server onto the PD enterprise server, can you?

Of course not! But there is something else we can do. Go to your deployment server; under the PACKAGE\packagename dir you will find a folder named after the operating system of your PD enterprise server. Inside it is an "enterpriseservername.inf" file. Go ahead and open it.

It will look somewhat like this:

[SERVER PACKAGE]
PackageName=PD7334FQ
Type=FULL
Platform=SUN5.10Generic_139555-08
BuildMachine=qaentsrvr
BuildPort=6010
SPEC=1
SpecList=0 , 1 , 10 , 13 , 14 , 15 , 16 , 17 , 18 , 2 ,
BSFN=1
CAEC=1
CALLBSFN=1
CBUSPART=1
CCONVERT=1

You will need to rename the file to "pdentservr.INF" and change BuildMachine to pdentsrvr. Save and exit.
There may be a case where you are running two different OS versions on your enterprise servers. If so, you will need to update Platform accordingly: SSH into your PD enterprise server, run "uname -a" to get the OS name, and update it in the .inf. That takes care of the client part of the build. The fun is in the server part.
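Editing the .inf by hand works, but the patch step is easy to script. A sketch (the values follow the example above; treating the file as plain key=value lines is an assumption about the .inf format):

```python
def patch_server_inf(text, build_machine, platform=None):
    """Rewrite the BuildMachine (and optionally Platform) lines of a server .inf."""
    out = []
    for line in text.splitlines():
        if line.startswith("BuildMachine="):
            line = f"BuildMachine={build_machine}"
        elif platform and line.startswith("Platform="):
            line = f"Platform={platform}"
        out.append(line)
    return "\n".join(out)

original = ("[SERVER PACKAGE]\nPackageName=PD7334FQ\nType=FULL\n"
            "Platform=SUN5.10Generic_139555-08\nBuildMachine=qaentsrvr")
patched = patch_server_inf(original, "pdentsrvr")
# write `patched` out as pdentservr.INF alongside the original
```

If the OS levels differ, pass the string from `uname -a` as the `platform` argument.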

For a package to be deployed, it has to exist in the /jdedwardsoneworld/exxx/packages dir of your enterprise server. Now that we have built the package on the QA enterprise server, we need to send it over to this dir on the PD enterprise server. Make sure sftp is enabled between the two servers and the "tar" utility is installed on both. Here's how you transfer your package between the QA and PD enterprise servers:
  1. ssh into your QA ent server.
  2. cd into the /apps/jdedwardsoneworld/packages dir.
  3. Run: tar -cvf temppkg pdpackagename
  4. This creates a tar archive called temppkg containing your full package "pdpackagename".
  5. Run: sftp pdentsrvr
  6. Log in with your password. It is mandatory that the same user ID owns all the JDE files and folders on both servers.
  7. cd into the /apps/jdedwardsoneworld/packages dir.
  8. Run: put temppkg
  9. Once the copy completes, log in to your PD ent server.
  10. cd into the /apps/jdedwardsoneworld/packages dir.
  11. Run: tar -xvf temppkg
  12. Voila, you have your pdpackagename dir in /apps/jdedwardsoneworld/packages.
Now your pd package is ready for deployment. Oh my goodness!! So much effort just for Package build and deployment!!
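The pack-and-unpack halves of that transfer (the sftp hop in between is omitted) can also be scripted with Python's tarfile module if you automate builds. A sketch with throwaway paths and a dummy spec file standing in for a real package:

```python
import os
import tarfile
import tempfile

# Mirror steps 3 and 11 above: tar up a package dir, then extract it
# as the PD side would after the sftp transfer.
with tempfile.TemporaryDirectory() as root:
    pkg = os.path.join(root, "packages", "pdpackagename")
    os.makedirs(pkg)
    with open(os.path.join(pkg, "spec.ddb"), "w") as f:
        f.write("dummy spec data")

    archive = os.path.join(root, "temppkg")       # tar -cvf temppkg pdpackagename
    with tarfile.open(archive, "w") as tar:
        tar.add(pkg, arcname="pdpackagename")

    dest = os.path.join(root, "pd_packages")      # tar -xvf temppkg on the PD side
    os.makedirs(dest)
    with tarfile.open(archive) as tar:
        tar.extractall(dest)

    extracted = sorted(os.listdir(dest))

print(extracted)
```

Using `arcname` keeps the archive rooted at the package directory, so it extracts straight into the packages dir just like the shell steps.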

Well, a lot of this is just one-time setup, and it sure has a lot of advantages. It's time we see what they actually are. Here you go:

  • You can run a PD build during business hours. No night-outs! <Happy CNC>
  • UBEs running on the PD ent server will not affect the speed of the build. <Happy CNC>
  • The scheduler server can keep running while the build is on; just stop it when you want to deploy the package. <Happy Suits>
  • Your setup complies with the SOX control that prohibits installing compilers on production machines. <Happy Suits>
So two reasons for a happy CNC and two for the suits; I say there can hardly be a better win-win in a single act!

Jokes aside, compilers on production servers are indeed frowned upon in SOX parlance, and this is a way to avoid them completely in JDE. It also helps if you need to troubleshoot a build for any reason: it can be done whenever needed with no effect on the business whatsoever.

I know this has become one long post and you may have questions about the approach. Well, that's why we have the comment section. Feel free to ask away.



Tuesday, May 8, 2012

Configure JDE Development on Citrix

The Citrix platform provides an easy base for setting up development machines to support JDE development from offshore. The fundamental logic behind setting up development on Citrix is that every user/developer accessing the environment gets an independent pathcode.

Since the default directory for a user resides under C:\Documents and Settings, the default user path needs to change. On a shared Citrix server it becomes mandatory to define a new system-level variable that can then be used as the default user path. In the script below, the variable %userpath% is the system variable that maps the user to D:\Users. This is also good practice because the Citrix login copies the pathcode over for each user, so mapping it to the D: drive saves a lot of space on the system drive C:.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@echo off
net use J: /del
net use J: \\deploymentserver\b7334
if exist %userpath%\%username%\b7\system\bin32\activConsole.exe goto startow
mkdir %userpath%\%username%
@echo Copying standard OneWorld Development Client to your profile...
@echo Please be patient - this takes up to 20 minutes and only occurs
@echo on the first use.....
mkdir %userpath%\%username%\b7
echo Copying authorization files.....
xcopy d:\b7\*.* %userpath%\%username%\b7\*.* /s /e /d /h /y
:startow
copy c:\windows\jde.ini %userpath%\%username%\windows\jde.ini /Y
@echo Starting Oneworld....
start %userpath%\%username%\b7\system\bin32\activConsole.exe
:end
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Save the script as ExecuteJde.bat and publish it via Citrix. The first time a developer runs it from the farm, it creates a B7 dir for the dev's login under D:\Users, and that becomes the pathcode the dev uses every time they log in.

In case of full package builds, just make sure all the local packages are deleted before the full package is installed on the system.

It is preferred that the users have a local profile as opposed to the roaming profile.

Tuesday, March 6, 2012

Sizing: JAS performance and DOM

Client workstations play a huge role in the performance benchmarking of any JAS architecture! This statement befuddles many people who view JAS as a sole entity, entirely dependent on the web server and the available bandwidth for performance. If the type of processor on the client machine is mentioned in the recommendations list, there's a reason for it!

The Document Object Model, or DOM, has a huge role to play in the working of EnterpriseOne JAS. The DOM is a set of conventions for interacting with the objects in HTML, XHTML, or XML documents. It presents an HTML document as a tree structure, and objects in this tree can be accessed and manipulated using the methods specified in the DOM APIs.

EnterpriseOne relies heavily on DOM objects for its functioning. The grids, for example, are stored as DOM elements on the client PC. These elements are processed and rendered by JavaScript running on that client PC, so much of the time required to fetch data into a grid is spent on the local machine processing DOM elements rather than on the web server. Clearly, the more columns in the grid, the more DOM elements there are, and the longer the fetch takes. So if the grid columns are fixed and the database type is constant, a dual-core PC will fetch records into a grid in far less time than a single-core one.

This is the main reason for recommending a level of hardware for the local workstations when sizing a new EOne implementation.






Tuesday, February 14, 2012

TAM Spec and XML Spec : Musings

Spec is short for Specification, which together with the C components form the Central Objects in EnterpriseOne.

The format in which specs are stored changed with E1 8.12: before 8.12, specs were stored in the TAM format; from 8.12 onwards they are stored in the XML format.

TAM, or Table Access Management, was a proprietary format for storing the details of object specifications. The package build process would build separate TAM files (.ddb and .xdb) for each central object table. These then had to be generated, using eGenerator on dedicated machines, into the Java code that ran on the HTML server.

Once XML specs were introduced in E1 8.12, specs could be generated on demand by the web server. The specs are stored in databases instead of TAM files: the build now creates a DBMS table for each spec file. For client packages these tables are stored in the local databases on the deployment server and the client workstation, whereas for server packages they are stored in the relational database of the E1 system.

One specialty of these spec tables is that they do not belong to the OL data sources and hence cannot be queried from the E1 side. They can, however, be viewed from the database side using utilities such as Enterprise Manager.

Along with the change in spec type came a new feature: the client specs and server specs are now identical in the system. Previously FDASPEC, FDATEXT, and SMTTMPL did not get copied to the enterprise server during a package build, but now they do.

Here's hoping some doubts about these two buzzwords in E1 got cleared up for the readers of EOneDuniya!