...

Option 2 - Stage Changes in a Test Environment

For more complex data updates (such as entering documents that touch many tables), you can dedicate an existing test environment, or bring up a special one, for the data entry. Once the data has been entered, bring the instance down and extract the updated rows using a database tool. You then replay that data into the OLEDBA schema.
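
As a rough illustration of the extract-and-replay step, a small JDBC utility could copy the staged rows instead of a vendor export tool. This is only a sketch: the connection URLs, credentials, table name, and columns are placeholders and would need to match your actual environments and schema.

Code Block
lang: java
title: Sketch of a Staged-Data Copy Utility

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class CopyStagedRows {
    public static void main(String[] args) throws SQLException {
        // Placeholder connection details for the staging environment and the OLEDBA schema.
        try (Connection src = DriverManager.getConnection("jdbc:mysql://test-db/olestage", "user", "pass");
             Connection dst = DriverManager.getConnection("jdbc:mysql://master-db/OLEDBA", "user", "pass")) {

            // Pull the rows created during the staging session (placeholder table and columns).
            try (Statement stmt = src.createStatement();
                 ResultSet rs = stmt.executeQuery(
                     "SELECT DOC_NBR, DOC_TYP, DOC_DESC FROM SOME_DOC_TABLE_T WHERE CRTE_DT >= CURDATE()");
                 PreparedStatement insert = dst.prepareStatement(
                     "INSERT INTO SOME_DOC_TABLE_T (DOC_NBR, DOC_TYP, DOC_DESC) VALUES (?, ?, ?)")) {

                while (rs.next()) {
                    insert.setString(1, rs.getString("DOC_NBR"));
                    insert.setString(2, rs.getString("DOC_TYP"));
                    insert.setString(3, rs.getString("DOC_DESC"));
                    insert.addBatch();
                }
                // Replay everything into the master schema in one batch.
                insert.executeBatch();
            }
        }
    }
}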

Option 3 - Special Batch Jobs to Load Data

This is a trick used by a couple of modules within the KFS application. They have special batch steps which generate data in the test environments shortly after server startup. At the end of the generation process, the step sets a system parameter recording that it has already run, which prevents the job from running again against the same database.
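
A minimal sketch of such a guarded step is below. It assumes a KFS-style step class wired up like the beans shown further down; the parameter name and the calls used to read and update the flag are illustrative rather than the exact KFS/Rice API.

Code Block
lang: java
title: Sketch of a Run-Once Data Generation Step

public class SampleDataGenerationStep extends AbstractStep {

    // Hypothetical system parameter used as the "already ran" flag.
    private static final String DATA_LOADED_PARAM = "SAMPLE_DATA_LOADED_IND";

    private ParameterService parameterService;

    public boolean execute(String jobName, java.util.Date jobRunDate) {
        // Skip the whole run if the flag says the data is already in place.
        if ("Y".equals(parameterService.getParameterValueAsString(
                SampleDataGenerationStep.class, DATA_LOADED_PARAM))) {
            return true;
        }

        generateSampleDocuments();

        // Flip the flag so later runs against this same database become no-ops.
        // (The exact update call varies by Rice version; treat this as a placeholder.)
        parameterService.setParameterValue(SampleDataGenerationStep.class, DATA_LOADED_PARAM, "Y");
        return true;
    }

    private void generateSampleDocuments() {
        // Build and route the test documents here.
    }

    public void setParameterService(ParameterService parameterService) {
        this.parameterService = parameterService;
    }
}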

The purchasing module has a job which builds a large number of purchasing documents so that users have something to test against. In that case, the documents are built from hard-coded data, but there is no reason such a job could not read its input from somewhere else (such as a CSV or XML file) and create the documents from that. The input data could either be checked into the project itself, so that it is deployed with the application, or the job could load it from another location (for example, directly from SVN in a new subdirectory of the ole-cfg-dbs project).
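
As a sketch of the CSV approach, a step like the one below could read one line per document and create a requisition for each. The file location, CSV layout, and the createRequisition stub are assumptions for illustration; the real step would populate and route the documents through the normal purchasing services.

Code Block
lang: java
title: Sketch of a CSV-Driven Requisition Load Step

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class CsvRequisitionLoadStep extends AbstractStep {

    // Assumed location of the input file; it could instead be packaged with ole-cfg-dbs.
    private String csvPath = "/opt/ole/test-data/requisitions.csv";

    public boolean execute(String jobName, java.util.Date jobRunDate) {
        try (BufferedReader reader = new BufferedReader(new FileReader(csvPath))) {
            String line;
            while ((line = reader.readLine()) != null) {
                // Assumed layout: description,chartCode,accountNumber
                String[] fields = line.split(",");
                createRequisition(fields[0], fields[1], fields[2]);
            }
        } catch (IOException e) {
            throw new RuntimeException("Unable to read requisition data file: " + csvPath, e);
        }
        return true;
    }

    private void createRequisition(String description, String chart, String account) {
        // Hypothetical document-building logic; the real step would build the requisition
        // document, add its line items, and save or route it via documentService.
    }
}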

As an example, the configuration below lives in spring-test-env-beans.xml, a special Spring file which is only included in testing environments. It defines a new job, along with a trigger that runs it 5 minutes after server startup.

Code Block
lang: xml
title: Data Loading Batch Job Trigger

    <bean id="purapModuleConfiguration" parent="purapModuleConfiguration-parentBean">
   		<property name="jobNames">
			<list merge="true">
				<value>purapMassRequisitionJob</value>
			</list>
		</property>
		<property name="triggerNames">
			<list merge="true">
				<value>purapMassRequisitionJobTrigger</value>
			</list>
		</property>    
    </bean>
	
    <bean id="purapMassRequisitionStep" class="org.kuali.kfs.module.purap.batch.PurapMassRequisitionStep" parent="step">
	    <property name="documentService" ref="documentService" />
	    <property name="requisitionService" ref="requisitionService" />
	    <property name="purapService" ref="purapService" />
	    <property name="boService" ref="businessObjectService" />
	    <property name="psService" ref="persistenceStructureService" />
    </bean>
    
	<bean id="purapMassRequisitionJob" parent="scheduledJobDescriptor">
		<property name="steps">
			<list>
				<ref bean="purapMassRequisitionStep" />
			</list>
		</property>
	</bean>
	
	<bean id="purapMassRequisitionJobTrigger" parent="simpleTrigger">
		<property name="jobName" value="purapMassRequisitionJob" />
        <property name="startDelay" value="300000" />
        <property name="repeatCount" value="0" />
	</bean>

Non-Option 4 - DON'T DO THIS - Point an Instance at OLEDBA

I put this one here as a caution. Many operations in KFS have data side effects which you do not want captured in your master data source. (Additionally, the Rice tables are missing from that schema, so it is not even possible without setting up a separate Rice server.) You want precise control over what goes into the master data source.