The processing of data can be tightly integrated into data handling systems, or delegated to a separate set of services invoked on demand. In general, the more complicated processing tasks will require the use of separate services. The provision of dedicated processing services becomes significantly more important when large quantities of data are being curated within a research infrastructure. Scientific data, for example, is often subject to extensive post-processing and analysis in order to extract new results. The data processing objects of an infrastructure encapsulate the dedicated processing services made available to that infrastructure, either within the infrastructure itself or delegated to a client infrastructure.



Figure: Data Processing Objects (notation: Notation of Computational Viewpoint Models#notation_cv_objects)

 

 

CV data processing objects are described as a set of process controllers (CV Component Objects#process), representing the computational functionality of registered execution resources, monitored and managed by a coordination service (CV Service Objects#coordination). The coordination service delegates all processing tasks sent to it to particular execution resources, coordinates multi-stage workflows and initiates execution. Data may need to be staged onto individual execution resources and result data persisted for future use; data channels can be established with resources via their process controllers. The following diagrams show the staging and persistence of data.
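The relationship described above can be sketched roughly as follows; all class and method names (ProcessController, CoordinationService, execute, delegate) are illustrative assumptions rather than interfaces defined by the reference model.

```python
# Minimal sketch, assuming hypothetical names: a coordination service that
# registers process controllers (one per execution resource) and delegates
# processing tasks to them.
from dataclasses import dataclass, field


@dataclass
class ProcessController:
    """Computational interface of one registered execution resource."""
    resource_id: str

    def execute(self, task: str, inputs: dict) -> dict:
        # Placeholder: a real controller would run the task on its resource.
        return {"task": task, "resource": self.resource_id, "inputs": inputs}


@dataclass
class CoordinationService:
    """Monitors and manages process controllers and delegates tasks to them."""
    controllers: dict = field(default_factory=dict)

    def register(self, controller: ProcessController) -> None:
        self.controllers[controller.resource_id] = controller

    def delegate(self, resource_id: str, task: str, inputs: dict) -> dict:
        # A multi-stage workflow would chain several such delegations.
        return self.controllers[resource_id].execute(task, inputs)
```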

...

Data processing requests generally originate from an experiment laboratory (CV Presentation Objects#experiment), which validates requests by invoking an AAAI service. The experiment laboratory will send a process request to a coordination service, which interprets the request and starts a processing workflow by invoking the required process controllers (CV Component Objects#process).
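A possible shape of this validation and dispatch step is sketched below; AAAIService, ExperimentLaboratory, CoordinationService and the dictionary request format are assumptions made for illustration only, not a prescribed API.

```python
# Minimal sketch, assuming hypothetical names: an experiment laboratory that
# validates a processing request against an AAAI service before forwarding it
# to the coordination service.
class AAAIService:
    def authorise(self, user: str, action: str) -> bool:
        return True  # placeholder: a real service checks identity and rights


class CoordinationService:
    def start_workflow(self, request: dict) -> str:
        return f"workflow started for task {request['task']!r}"


class ExperimentLaboratory:
    def __init__(self, aaai: AAAIService, coordinator: CoordinationService):
        self.aaai, self.coordinator = aaai, coordinator

    def submit(self, user: str, request: dict) -> str:
        # Reject the request if the AAAI service does not authorise the user.
        if not self.aaai.authorise(user, "process"):
            raise PermissionError("request rejected by AAAI service")
        return self.coordinator.start_workflow(request)


lab = ExperimentLaboratory(AAAIService(), CoordinationService())
print(lab.submit("alice", {"task": "summarise", "dataset": "dataset-001"}))
```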

To retrieve data from the data store and pass it to the execution platform, the coordination service will request a data transfer service to prepare a data transfer. The data transfer service will then configure and deploy a data exporter (CV Component Objects#data), which will handle the transfer of data between the storage and execution platforms, i.e. perform data staging. A data-flow is established between all required data store controllers and process controllers via the data exporter. After the data-flow is established, processing starts. Processing can include a host of activities such as summarising, mining, charting and mapping, amongst many others. The details are left open to allow the modelling of any processing procedure. The expected output of the processing activities is a derived data product, which in turn will need to be persisted into the RI's data stores.
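One way this staging interaction might be realised is sketched below; DataStoreController, DataExporter, DataTransferService and ExecutionPlatform are hypothetical stand-ins for the objects named above, not a prescribed implementation.

```python
# Minimal sketch, assuming hypothetical names: a data transfer service deploys
# a data exporter, which establishes a data-flow from a data store controller
# to the execution platform (data staging).
class DataStoreController:
    """Computational interface of one storage resource."""

    def read(self, dataset_id: str) -> bytes:
        return b"raw dataset bytes"  # placeholder for stored data


class ExecutionPlatform:
    """Stand-in for the execution resource behind a process controller."""

    def __init__(self):
        self.staged = {}  # dataset_id -> staged bytes

    def receive(self, dataset_id: str, data: bytes) -> None:
        self.staged[dataset_id] = data


class DataExporter:
    """Handles the transfer of data from storage to execution."""

    def __init__(self, source: DataStoreController, sink: ExecutionPlatform):
        self.source, self.sink = source, sink

    def stage(self, dataset_id: str) -> None:
        self.sink.receive(dataset_id, self.source.read(dataset_id))


class DataTransferService:
    """Configures and deploys exporters on request of the coordination service."""

    def deploy_exporter(self, source, sink) -> DataExporter:
        return DataExporter(source, sink)


# Once the data-flow is established, processing can start on the staged data.
exporter = DataTransferService().deploy_exporter(DataStoreController(), ExecutionPlatform())
exporter.stage("dataset-001")
```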

 

Figure: Data Processing Subsystem - data staging (notation: Notation of Computational Viewpoint Models#notation_cv_objects)

 

Data Persistence

...

Data processing requests generally originate from an experiment laboratory (CV Presentation Objects#experiment), which validates requests by invoking an AAAI service. The experiment laboratory can present results and ask the user whether the results need to be stored; alternatively, the user may configure the service to automatically store the resulting data. In either case, after processing, the experiment laboratory will send a process request to the coordination service, which interprets the request and invokes the process controller (CV Component Objects#process), which will get the result data ready for transfer.

The data transfer service will then configure and deploy a data importer (CV Component Objects#data), which will handle the transfer of data between the execution and storage platforms. A data-flow is established between the process controller and the data store controller via the data importer. After the data-flow is established, the data transfer starts. The persistence of data will trigger various curation activities, including data storage, backup, updating of catalogues, acquiring identifiers and updating records. These activities can occur automatically, or simply act as signals sent out to notify human users that an action is expected.
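A comparable sketch of the persistence step is given below; DataImporter, DataStore and the returned identifier are again illustrative assumptions rather than defined interfaces.

```python
# Minimal sketch, assuming hypothetical names: a data importer moves a derived
# data product from the execution platform into a data store; persistence then
# triggers follow-up curation (cataloguing, identifier assignment, backup).
class DataStore:
    """Stand-in for the storage platform behind a data store controller."""

    def __init__(self):
        self.records = {}

    def write(self, dataset_id: str, data: bytes) -> None:
        self.records[dataset_id] = data


class DataImporter:
    """Handles the transfer of result data from execution to storage."""

    def __init__(self, store: DataStore):
        self.store = store

    def persist(self, dataset_id: str, data: bytes) -> str:
        self.store.write(dataset_id, data)
        # Curation follow-up is only hinted at here by returning an identifier;
        # in practice it would update catalogues and notify curators.
        return f"pid:{dataset_id}"


importer = DataImporter(DataStore())
identifier = importer.persist("derived-042", b"derived data product")
print(identifier)
```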

 

Figure: Data Processing Subsystem - data persistence (notation: Notation of Computational Viewpoint Models#notation_cv_objects)