
User Stories

Priority Rank Order

Overview

Description

This section defines batch operations and lays out their architecture. Defining a batch operation begins with the definition of the batch profile, which comprises several sections covering different aspects of the batch job. Apart from defining the batch profile, the user may also define specific KRMS profile XML files, maintenance documents for data mapping, and CRON schedules.

 

The execution of a batch job varies based on the options selected in the profile. The current implementation covers ingest of bib, order, invoice, location, and patron data, as well as bib overlay and export of bib, order, and invoice data. Other data operations, such as deletions, 'Globally Protected Field' definitions, and overlays, may also be covered by this definition.

Design: Business

 

 

Design: Technical
Service Design and Implementation
Screen Flow Diagram

Lay out use cases based on the screen flows, indicating navigation between the different screens and pop-ups.

Security
  • Roles – list of roles to be defined
  • Permissions – list of permissions to be defined
  • User Profiles – user profiles to which roles/permissions may be applied (also for test/mock purposes)

Rules
  • Context & Agendas
  • Terms & Propositions
  • Sample XML file

Component Design
  • OJB.xml
  • Class Diagram
  • State Chart Diagram
  • Pseudo Code
    • User creates a batch profile using the options provided in the batch profile screen.
    • User schedules a batch job for a given date and time.
    • BatchProcessJobManager performs the job scheduling by calling the scheduleJob(ScheduleJobBo bo) method. The schedule information (stored as a cron expression) and the job information are stored in the database.
    • The Quartz scheduler, which is configured in KRMSLocalSpringBean.xml, calls the BatchProcessManager.execute(int processId) method.
    • BatchProcessManager loads the profile (BatchProcessProfile) created by the user based on the supplied process id and calls the factory class to create a BatchProcess instance.
    • The BatchProcessFactory creates the appropriate BatchProcess instance based on the processType in the profile.
    • The processBatch(BatchProcessProfile profile) method is called to perform the batch process.
    • Alternatively, the user creates a batch process through the batch process screen by selecting the profile to execute, the output file, the batch size, and the email id for the report.
    • User clicks Run Now to execute the batch immediately.
    • OLEBatchProcessDefinitionDocument creates the process definition document and calls the OLEBatchSchedulerService, which starts the job by executing the OLEBatchProcessAdhocStep step.
    • In OLEBatchProcessAdhocStep the BatchProcessFactory is invoked with the process type, based on which the appropriate process is created.
    • The processBatch(OLEBatchProcessDefinitionDocument processDef, OLEBatchProcessJobDetailsBo jobBo) method is called to perform the batch process as per the profile information provided by the user.
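The scheduling and execution flow above can be sketched as follows. The class and method names mirror those in the text, but the bodies and the in-memory profile store are simplified assumptions rather than the actual OLE implementation (in the real system the profile is loaded from the database and Quartz invokes execute(processId) at the scheduled cron times):

```java
import java.util.HashMap;
import java.util.Map;

interface BatchProcess {
    // Stand-in for processBatch(BatchProcessProfile profile) in the real system.
    void processBatch(String processType);
}

class BatchProcessExportData implements BatchProcess {
    public void processBatch(String processType) { /* export bib/instance data */ }
}

class BatchProcessOrderRecordData implements BatchProcess {
    public void processBatch(String processType) { /* ingest order records */ }
}

class BatchProcessFactory {
    // Creates the appropriate BatchProcess instance based on the processType.
    static BatchProcess createProcess(String processType) {
        switch (processType) {
            case "Export":       return new BatchProcessExportData();
            case "Order Record": return new BatchProcessOrderRecordData();
            default: throw new IllegalArgumentException("Unknown process type: " + processType);
        }
    }
}

class BatchProcessManager {
    // processId -> processType; a map stands in for the database-backed profile.
    private final Map<Integer, String> profileStore = new HashMap<>();

    void saveProfile(int processId, String processType) {
        profileStore.put(processId, processType);
    }

    // Called by the scheduler: load the profile, delegate to the factory, run the job.
    BatchProcess execute(int processId) {
        String processType = profileStore.get(processId);
        BatchProcess process = BatchProcessFactory.createProcess(processType);
        process.processBatch(processType);
        return process;
    }
}
```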

      1. BatchProcessExportData

        This batch process performs the export of bib / instance information based on the profile provided by the user. It has the following methods:

        batchExport(BatchProcessProfile profile)

        This method is passed the BatchProcessProfile and writes its output to the filesystem. It is the method used for scheduled exports. It should function as follows:

    • A call is made to getBatchJobInfo(int processId) to get the information of the current job that invoked processBatch().
    • updateBatchProgress(BatchProcessJobBo jobBo) is called to update the job running status to started and record the start time.
    • A call is made to the batchExport(BatchProcessProfile profile) method, passing the profile information.
    • The needed values are pulled out of the profile information:
      1. exportFilterCriteria - which filter criteria to use; the exportFormat is read from the exportFilterCriteria.
      2. exportType - Full or Incremental.
      3. exportId (processId) - used for saving the incremental dateTime.
      4. exportFileTo - the directory and file name format.
      5. writeReportTo - the directory and report file name.
      6. chunkSize - maximum number of records per file.
      7. staffOnly - whether staffOnly records should be included.
      8. updateLevelInfo - bibOnly, instance.
    • If the export is incremental:
      1. Get the last export dateTime for the exportId.
      2. Save the current dateTime/exportId pair.
    • Call getSolrDocList(BatchProcessProfile profile) to get the Solr doc list.
    • updateBatchProgress(BatchProcessJobBo jobBo) is called to update the total number of records to be processed.
    • If there is more than one chunk, split the Solr results according to the chunk size.
    • Make a call to ExportService.getExportDataBySolr using the exportFormatProfile and one set of Solr results split as per the chunk size.
    • If isMarc21 is true, convert the returned MarcXML to Marc21.
    • Write the returned XML or Marc21 from the export service to files using the exportTo variable.
    • Call updateBatchProgress(BatchProcessJobBo jobBo), passing the jobBo, which contains the current job information updated with records processed, percent processed, and time spent.
    • If there are more chunks, make the next call to the ExportService.
    • Create the report in the location provided in the profile.
    • updateBatchProgress(BatchProcessJobBo jobBo) is called to update the job running status to completed and record the end time, total records processed, percent processed, and time spent.
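The chunk-splitting step above can be sketched as a small helper. The name splitIntoChunks is hypothetical; in practice each resulting chunk would be passed to ExportService.getExportDataBySolr and written to its own output file:

```java
import java.util.ArrayList;
import java.util.List;

class ExportChunker {
    // Split the Solr result list into chunks of at most chunkSize records,
    // so each chunk can be exported and written to a separate file.
    static <T> List<List<T>> splitIntoChunks(List<T> solrResults, int chunkSize) {
        List<List<T>> chunks = new ArrayList<>();
        for (int start = 0; start < solrResults.size(); start += chunkSize) {
            int end = Math.min(start + chunkSize, solrResults.size());
            chunks.add(new ArrayList<>(solrResults.subList(start, end)));
        }
        return chunks;
    }
}
```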
          1. loadProfile(OLEBatchProcessDefinitionDocument processDef)
            1. Loads the profile data from the database as per the profile id provided in the process definition document.
            2. In the case of an incremental export, the last export job details are retrieved, and that value is used when data is queried in the prepareForRead() and getNextBatch() methods.
          2. prepareForRead()
            1. Reads the initial set of data for export using a Solr query based on the profile information, such as the data to export, filters, etc.
            2. Calls the ExportDataService.getExportDataBySolr(List<SolrDocument> solrDocumentList, OLEBatchProcessProfileBo profile) method with the Solr result and the profile, and performs the bib checkout for each Solr document.
            3. If the data to export is Bib only, only the checked-out bib is processed; if the data to export is BibAndInstance, all instance data belonging to the bib is checked out.
            4. Checked-out instance data is inspected to determine whether it is a conventional Instance or an EInstance and is processed accordingly.
            5. The holding and item mappings provided are used to map the holding and item information into datafields and subfields.
            6. All the retrieved data is added to the BibliographicRecord, converted to XML format, and returned.
          3. prepareForWrite()
            1. Identifies the user-given path (from the process definition document) where the output file shall be written. If the path is valid and the system can write the file, the output is written to that location; otherwise the file is written to a default location, which is specified in the job report.
          4. processBatch()
            1. Data retrieved using the prepareForRead() and getNextBatch() methods is written to file as an mrc or xml file, depending on the process definition document data.
            2. Performs the write operation of the batch export and updates the success, failure, and total counts of the job.
          5. getNextBatch()
            1. If there are more batches of data to be exported, this method retrieves the next set of data.
            2. Performs all the steps that take place in the prepareForRead() method.
            3. Calls the processBatch() method to perform the batch export.
          6. updateJobProgress()
            1. Updates the job record with details such as time taken, record count, and status.
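Assuming getNextBatch() returns an empty batch when no data remains, the method lifecycle above reduces to a read-process loop. The sketch below is illustrative only; the real methods query Solr and write mrc/xml files rather than counting strings:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative export lifecycle: prepare the first batch, then drain batches until empty.
class ExportLifecycle {
    private final List<List<String>> pending = new ArrayList<>();
    int processedBatches = 0;

    ExportLifecycle(List<List<String>> batches) {
        pending.addAll(batches);
    }

    // The initial read uses the same query path as subsequent batches.
    List<String> prepareForRead() {
        return getNextBatch();
    }

    // Returns the next batch of data, or an empty list when no data remains.
    List<String> getNextBatch() {
        return pending.isEmpty() ? new ArrayList<>() : pending.remove(0);
    }

    // Stand-in for writing the batch to an mrc/xml file and updating job counts.
    void processBatch(List<String> batch) {
        processedBatches++;
    }

    // Run the whole export: read batches until none remain.
    void run() {
        for (List<String> batch = prepareForRead(); !batch.isEmpty(); batch = getNextBatch()) {
            processBatch(batch);
        }
    }
}
```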
        2. BatchProcessOrderRecordData

          BatchProcessOrderRecordData performs order record ingest and creates requisitions and purchase orders based on the batchProcessProfile selected by the user.

          getFileContent(BatchProcessJobBO batchProcessJobBO)
          This method reads the content of the ingested mrc and edi files.

          validateFile(String mrcFileName, String ediFileName, String xmlFileName)
          This method takes the file names and validates the files before the order record ingest is performed.

          If the files are valid:

          ingestOrderRecord(BatchProcessProfile batchProcessProfile, BatchProcessJobBO batchProcessJobBO)
          This method takes batchProcessProfile and batchProcessJobBO as parameters and loads the mrc and edi file content into an IngestRecord object.

          isPreProcessingRequired(MultipartFile marcFile, MultipartFile ediFile)
          This method checks the file extensions and returns a boolean.

          start(IngestRecord ingestRecord, BatchProcessProfile batchProcessProfile)
          This method performs the order record operation.

          Otherwise, an error message is displayed.
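The extension checks behind validateFile / isPreProcessingRequired might look like the following sketch, simplified to the mrc and edi case; the exact validation rules are an assumption based on the description above:

```java
class OrderRecordFileValidator {
    // Returns true only when the expected .mrc and .edi files are both supplied.
    static boolean validateFile(String mrcFileName, String ediFileName) {
        return hasExtension(mrcFileName, ".mrc") && hasExtension(ediFileName, ".edi");
    }

    // Case-insensitive extension check; null file names fail validation.
    private static boolean hasExtension(String fileName, String ext) {
        return fileName != null && fileName.toLowerCase().endsWith(ext);
    }
}
```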

        3. BatchProcessLocationIngestData

          BatchProcessLocationIngestData gets location details from XML and stores them in the database.

          getFileContent(BatchProcessJobBO batchProcessJobBO)
          This method reads the content of the ingested file.

          validateFile(String xmlFileName)
          This method takes the file name and validates whether the ingested file is XML.

          If the file is valid:

          validateContentsAgainstSchema(InputStream inputStream)
          This method validates the file content against the location schema.

          persistLocationFromFileContent(String fileContent, String locationFileName)
          This method persists the location details into the database.

          Otherwise, an error message is displayed.
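The validateContentsAgainstSchema step can be sketched with the JDK's javax.xml.validation API. The embedded XSD below is a hypothetical minimal location schema used for illustration, not the actual OLE schema, and the method validates a string here rather than an InputStream:

```java
import java.io.IOException;
import java.io.StringReader;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;
import org.xml.sax.SAXException;

class LocationSchemaValidator {
    // Hypothetical minimal schema: a <location> element with a required code attribute.
    private static final String LOCATION_XSD =
        "<xs:schema xmlns:xs=\"http://www.w3.org/2001/XMLSchema\">"
      + "  <xs:element name=\"location\">"
      + "    <xs:complexType>"
      + "      <xs:attribute name=\"code\" type=\"xs:string\" use=\"required\"/>"
      + "    </xs:complexType>"
      + "  </xs:element>"
      + "</xs:schema>";

    // Validate the XML content against the schema; any violation yields false.
    static boolean validateContentsAgainstSchema(String xmlContent) {
        try {
            SchemaFactory factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
            Schema schema = factory.newSchema(new StreamSource(new StringReader(LOCATION_XSD)));
            Validator validator = schema.newValidator();
            validator.validate(new StreamSource(new StringReader(xmlContent)));
            return true;
        } catch (SAXException | IOException e) {
            return false;
        }
    }
}
```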

        4. BatchProcessPatronIngestData

          BatchProcessPatronIngestData gets patron details from XML and stores them in the database.

          getFileContent(BatchProcessJobBO batchProcessJobBO)
          This method reads the content of the ingested file.

          validateFile(String xmlFileName)
          This method takes the file name and validates whether the ingested file is XML.

          If the file is valid:

          validateContentsAgainstSchema(InputStream inputStream)

          persistLocationFromFileContent(String fileContent, boolean addUnMatchedPatronFlag, String fileName, OlePatronIngestSummaryRecord olePatronIngestSummaryRecord, String addressSource, String userName)
          This method persists the patron details into the database.

          Otherwise, an error message is displayed.

       

       

       


      Service Contracts
      Service Implementation
      Process Diagram

      ER Diagram:

      BATCH PROCESS PROFILE

       

       

      BATCH PROCESS

       


       


       UI Screen

       

       

       


      Data Model

      Entity: BATCH DEFINITION

      Name | Max Len | Data Type | Default Value | Input Type | Action Event | Action Event Function Name | Validation Type | Tab Index | Remarks