995 results for Critical control points
Abstract:
The paper presents the theory of airborne GPS as applied to photogrammetry, together with the results of a self-calibration used to validate the theory. Accordingly, no ground control points are required for mapping with a strip or block of photographs, provided the site is within 10 km of the calibration site.
Abstract:
In the eukaryotic cell cycle, there are major control points in late G2, determining the timing of the initiation of mitosis, and in late G1, regulating entry into S phase. In yeasts, this latter control is called start. Traversal of the start control and progression to S phase is accompanied by an increase in the expression of some of the genes whose products are required for DNA synthesis. In Saccharomyces cerevisiae, the coordinate expression of these genes in late G1 is dependent on a cis-acting sequence element called the MluI cell cycle box (MCB). A transcription factor called DSC-1 binds these elements and mediates cell cycle-regulated transcription, though it is unclear whether this occurs through cell cycle-dependent changes in its activity. A DSC-1-like factor has also been identified in the fission yeast S. pombe. It is composed of at least the products of the cdc10 and sct1/res1 genes, and binds to the promoters of genes whose expression increases prior to S phase. We demonstrate that p85cdc10 is a nuclear protein and that the activity of the S. pombe DSC-1 factor varies through the cell cycle; it is high in cells that have passed start, decreases at the time of anaphase, remains low during the pre-start phase of G1 and increases at the time of the next S phase. We also show that the reactivation in late G1 is dependent on the G1 form of p34cdc2.
Abstract:
European and national regulations establish the need for self-control systems that ensure a minimum level of safety for food products, together with rules for food handlers. Food-sector companies are responsible for hygiene in their own establishments, relying on Hazard Analysis and Critical Control Points (HACCP) to control potential health risks along the food chain. The management of the pre-cooked dish company Jotri S.L. is interested in implementing a quality self-control system, and specifically a food-safety self-control system (HACCP). The critical control points of the meat cannelloni lines will be determined, and preventive and corrective measures will be applied to the cannelloni, croquette and lasagne lines.
Abstract:
The fast development of new technologies such as digital medical imaging has led to an expansion of brain functional studies. One of the key methodological issues in such studies is comparing neuronal activation between individuals, and in this context the great variability of brain size and shape is a major problem. Current methods allow inter-individual comparisons by normalising subjects' brains to a standard brain; the most widely used standards are the proportional grid of Talairach and Tournoux and the Montreal Neurological Institute (MNI) standard brain (SPM99). However, these methods are not precise enough to superimpose the more variable portions of the cerebral cortex (e.g., the neocortex and the perisylvian zone) or brain regions that are highly asymmetric between the two hemispheres (e.g., the planum temporale). The aim of this thesis is to evaluate a new image-processing technique based on non-linear, model-based registration. In contrast to intensity-based methods, model-based registration uses spatial rather than intensity information to fit one image to another. We extract identifiable anatomical features (point landmarks) in both the deforming and the target images, and from their correspondence we determine the appropriate deformation in 3D. As landmarks, we use six control points situated bilaterally: one on Heschl's gyrus, one on the motor hand area, and one on the sylvian fissure. The evaluation of this model-based approach is performed on MRI and fMRI images of nine of the eighteen subjects who participated in the earlier study of Maeder et al. Results on the anatomical (MRI) images show the movement of the deforming brain's control points to the location of the reference brain's control points; the distance between the deforming brain and the reference brain is smaller after registration than before. Registration of the functional (fMRI) images does not show a significant variation: the small number of registration landmarks (six) is clearly not sufficient to produce significant modifications of the fMRI statistical maps. This thesis opens the way to a new computational technique for cortex registration, whose main direction will be improving the registration algorithm by using not a single point as landmark but many points representing a particular sulcus.
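The landmark-driven deformation described above can be illustrated with a short sketch, under the assumption that the 3D mapping is a thin-plate-spline interpolant between the two sets of corresponding control points; all coordinates below are invented placeholders, and the variable names are hypothetical rather than code from the thesis.

# Minimal sketch: thin-plate-spline warp from six corresponding landmarks.
import numpy as np
from scipy.interpolate import RBFInterpolator

# Six landmarks in the deforming brain (mm): Heschl's gyrus, motor hand
# area and sylvian fissure, bilaterally (placeholder coordinates).
source_landmarks = np.array([
    [42.0, -22.0, 10.0], [-44.0, -20.0, 11.0],   # Heschl's gyrus L/R
    [38.0, -24.0, 58.0], [-36.0, -25.0, 57.0],   # motor hand area L/R
    [50.0, -15.0,  2.0], [-49.0, -14.0,  3.0],   # sylvian fissure L/R
])
# Corresponding landmarks in the reference brain (placeholder offsets).
target_landmarks = source_landmarks + np.array([
    [1.5, -0.5, 2.0], [-2.0, 1.0, 1.5],
    [0.5,  2.0, -1.0], [-1.0, -1.5, 0.5],
    [2.5,  0.0, 1.0], [-0.5,  1.0, -2.0],
])

# Vector-valued thin-plate-spline mapping fitted to the correspondences.
warp = RBFInterpolator(source_landmarks, target_landmarks,
                       kernel="thin_plate_spline")

# Any voxel coordinate of the deforming image can now be mapped into the
# reference space (here two arbitrary test points).
print(warp(np.array([[40.0, -20.0, 12.0], [0.0, 0.0, 0.0]])))

In a full pipeline the warp would be applied to the whole voxel grid, followed by resampling of the image intensities in the reference space.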
Abstract:
Simulation is a useful tool in cardiac SPECT to assess quantification algorithms. However, simple equation-based models are limited in their ability to simulate realistic heart motion and perfusion. We present a numerical dynamic model of the left ventricle, which allows us to simulate normal and anomalous cardiac cycles, as well as perfusion defects. Bicubic splines were fitted to a number of control points to represent the endocardial and epicardial surfaces of the left ventricle. A transformation from each point on the surface to a template of activity was made to represent the myocardial perfusion. Geometry-based and patient-based simulations were performed to illustrate this model. Geometry-based simulations modeled (1) a normal patient, (2) a well-perfused patient with abnormal regional function, (3) an ischaemic patient with abnormal regional function, and (4) a patient study including tracer kinetics. The patient-based simulation consisted of a left ventricle with a realistic shape and motion obtained from a magnetic resonance study. We conclude that this model has the potential to study the influence of several physical parameters and of left-ventricle contraction in myocardial perfusion SPECT and gated-SPECT studies.
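As an illustration of the surface representation described above, the following minimal sketch fits a bicubic spline to a small grid of control radii; the parameterisation r(theta, z), the control values and the use of scipy's RectBivariateSpline are assumptions for illustration, not the model of the paper.

# Minimal sketch: bicubic spline surface through left-ventricle control points.
import numpy as np
from scipy.interpolate import RectBivariateSpline

theta_ctrl = np.linspace(0.0, 2.0 * np.pi, 7)   # circumferential control angles (rad)
z_ctrl = np.linspace(0.0, 80.0, 5)              # apex-to-base positions (mm)

# Endocardial control radii (mm): zero at the apex, widening toward the base.
r_endo_ctrl = 20.0 * np.sqrt(z_ctrl / 80.0) * np.ones((theta_ctrl.size, 1))

# kx = ky = 3 gives a bicubic surface through the control grid.
endo_surface = RectBivariateSpline(theta_ctrl, z_ctrl, r_endo_ctrl, kx=3, ky=3)

# Evaluate a dense surface, e.g. for voxelising the myocardium or mapping
# each surface point to a perfusion template.
theta = np.linspace(0.0, 2.0 * np.pi, 64)
z = np.linspace(0.0, 80.0, 32)
r_endo = endo_surface(theta, z)                 # shape (64, 32)
print(r_endo.shape)

Making the control radii time-dependent would then give the simulated cardiac cycle, and assigning a template activity to each (theta, z) sample would represent the perfusion, in the spirit of the abstract.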
Abstract:
The aim of this work was to prepare a manual for implementing the Hazard Analysis and Critical Control Points (HACCP) system to serve as a guide for microbreweries. The first part of the manual introduces the origins of the system and defines HACCP as a system for identifying, evaluating and controlling significant hazards at all stages of production, processing and distribution, with the objective of guaranteeing food safety. It then highlights the importance of the Codex Alimentarius, which sets out the guidelines for applying the HACCP system, and notes that Regulation (EC) 852/2004 on the hygiene of foodstuffs obliges food businesses to create, implement and maintain permanent procedures based on HACCP principles. The aspects to be considered when implementing an HACCP system in a microbrewery are then presented, including the legal framework of the brewing sector, the flexibility allowed when applying self-control systems in micro-enterprises, and the prerequisite programmes. Annex 1 develops the prerequisites needed by microbreweries, and Annex 2 shows examples of the records derived from their application. The prerequisites are essential for ensuring a hygienic environment and hygienic working conditions, reducing or eliminating food-contamination hazards; they address the general hazards of the working environment, whereas HACCP addresses the specific hazards of the production process. Next, the basic principles and guidelines for applying the HACCP system described in the Codex Alimentarius are defined, explaining the steps to be followed for correct implementation of the self-control system. The second part of the manual develops the HACCP system for the craft-beer production process.
Abstract:
This thesis deals with distance transforms, which are a fundamental issue in image processing and computer vision. Two new distance transforms for gray-level images are presented, and as a new application they are applied to gray-level image compression. Both new distance transforms extend the well-known distance transform algorithm developed by Rosenfeld, Pfaltz and Lay. With some modification, their algorithm, which calculates a distance transform on binary images with a chosen kernel, has been made to calculate a chessboard-like distance transform with integer values (DTOCS) and a real-valued distance transform (EDTOCS) on gray-level images. Both the DTOCS and the EDTOCS require only two passes over the gray-level image and are extremely simple to implement. Only two image buffers are needed: the original gray-level image and the binary image that defines the region(s) of calculation. No other image buffers are needed even if more than one iteration round is performed. For large neighborhoods and complicated images the two-pass distance algorithm has to be applied to the image more than once, typically 3-10 times. Different types of kernels can be adopted. It is important to notice that no other existing transform calculates the same kind of distance map as the DTOCS. All other gray-weighted distance functions (GRAYMAT and similar algorithms) find the minimum path joining two points by the smallest sum of gray levels, or weight the distance values directly by the gray levels in some manner. The DTOCS does not weight them that way: it gives a weighted version of the chessboard distance map in which the weights are not constant but are the gray-value differences of the original image. The difference between the DTOCS map and other distance transforms for gray-level images is shown. The difference between the DTOCS and the EDTOCS is that the EDTOCS calculates these gray-level differences in a different way, propagating local Euclidean distances inside a kernel. Analytical derivations of some results concerning the DTOCS and the EDTOCS are presented. Distance transforms are commonly used for feature extraction in pattern recognition and learning; their use in image compression is very rare. This thesis introduces a new application area for distance transforms. Three new image compression algorithms based on the DTOCS and one based on the EDTOCS are presented. Control points, i.e. points that are considered fundamental for the reconstruction of the image, are selected from the gray-level image using the DTOCS and the EDTOCS. The first group of methods selects the maxima of the distance image as new control points, and the second group compares the DTOCS distance to the binary-image chessboard distance. The effect of applying threshold masks of different sizes along the threshold boundaries is studied. The time complexity of the compression algorithms is analyzed both analytically and experimentally, and it is shown to be independent of the number of control points, i.e. of the compression ratio. Also, a new morphological image decompression scheme is presented, the 8-kernels method. Several decompressed images are presented. The best results are obtained using the Delaunay triangulation. The obtained image quality equals that of the DCT images with a 4 x 4
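To illustrate the two-pass propagation described above, the following is a minimal sketch of a DTOCS-like transform, assuming a local step cost of |gray difference| + 1 to an 8-neighbour and repeated forward/backward scans until the map stabilises; it is an illustrative reading of the algorithm, not the thesis implementation.

# Minimal sketch: two-pass, chamfer-style gray-level distance propagation.
import numpy as np

def dtocs_like(gray, region):
    """gray: 2-D float array; region: boolean mask, True where distances are
    computed (False pixels act as zero-distance sources)."""
    dist = np.where(region, np.inf, 0.0)
    h, w = gray.shape
    fwd = [(-1, -1), (-1, 0), (-1, 1), (0, -1)]   # neighbours already visited
    bwd = [(1, 1), (1, 0), (1, -1), (0, 1)]       # in the reverse scan
    changed = True
    while changed:                                # typically a few iterations
        changed = False
        for offsets, rows, cols in (
            (fwd, range(h), range(w)),
            (bwd, range(h - 1, -1, -1), range(w - 1, -1, -1)),
        ):
            for i in rows:
                for j in cols:
                    if not region[i, j]:
                        continue
                    for di, dj in offsets:
                        ni, nj = i + di, j + dj
                        if 0 <= ni < h and 0 <= nj < w:
                            cand = dist[ni, nj] + abs(gray[i, j] - gray[ni, nj]) + 1
                            if cand < dist[i, j]:
                                dist[i, j] = cand
                                changed = True
    return dist

img = np.array([[0, 1, 2], [1, 5, 1], [2, 1, 0]], dtype=float)
mask = np.ones_like(img, dtype=bool)
mask[0, 0] = False                                # single source pixel
print(dtocs_like(img, mask))

For a flat image the result reduces to the ordinary chessboard distance; gray-value differences along the path add the "curved space" weighting that distinguishes the DTOCS.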
Abstract:
A study on the implementation of a management manual in a sliced-food-products plant, carried out at the company Serra&Mota in La Cellera de Ter (the manual comprises an HACCP system and a quality manual).
Abstract:
Given the current situation and the liberalisation of food trade, it is necessary to guarantee the safety of food, which will be achieved by applying the appropriate practices and standards in food-safety matters. Community regulations on food safety, and in particular Regulation (EC) No 852/2004 on the hygiene of foodstuffs, oblige food businesses to apply a self-control system based on the principles of Hazard Analysis and Critical Control Points (HACCP). In Spain, national legislation already included this requirement; for prepared meals it was laid down expressly in Royal Decree 3484/2000, of 29 December, establishing the hygiene rules for the preparation, distribution and sale of prepared meals. In this final degree project (PTFC), a manual has been prepared, consisting of eight chapters, annexes and a descriptive report, to serve as a guide for designing an HACCP system following the procedures defined in the Codex Alimentarius, so that it can be applied to crêpe-manufacturing plants while also providing the guidelines needed for its implementation. Since it was not possible to apply the HACCP system developed to a specific plant, a generic manual has been prepared that each plant will have to adapt to its own production system.
Abstract:
The aim of the thesis was to study quality management with a process approach and to find out how to utilize process management to improve quality. The operating environment of organizations has changed: organizations are focusing on their core competences and networking with suppliers and customers to ensure more effective and efficient value creation for the end customer. Quality management is moving from inspection of the output to preventing problems from occurring in the first place, and management thinking is shifting from a functional approach to a process approach. The theoretical part of the thesis examines how to define quality, how to achieve good quality, how to improve quality, and how to make sure the improvement continues as a never-ending cycle. A selection of quality tools is introduced. The process approach to quality management is described and compared to the functional approach, which is the traditional way to manage operations and quality. Customer focus is also studied, and it is shown that, to ensure long-term customer commitment, an organization needs to react to changing customer requirements and wishes by constantly improving its processes. In the experimental part the theories are tested in a process-improvement business case. It is shown how to execute a process-improvement project, starting from defining the customer requirements and continuing to defining the process ownership, roles and responsibilities, boundaries, interfaces and the actual process activities. Control points and measures are determined for the process, as well as the feedback and corrective-action process, to ensure that continual improvement can be achieved and to enable verification that customer requirements are fulfilled.
Abstract:
Due to advantages such as flexibility, scalability and updatability, software-intensive systems are increasingly embedded in everyday life. The constantly growing number of functions executed by these systems requires a high level of performance from the underlying platform. The main approach to increasing performance has been raising the operating frequency of the chip. However, this has led to the problem of power dissipation, which has shifted the focus of research to parallel and distributed computing. Parallel many-core platforms can provide the required level of computational power along with low power consumption. On the one hand, this enables parallel execution of highly intensive applications; with their computational power, these platforms are likely to be used in application domains ranging from consumer electronics (e.g., video processing) to complex critical control systems. On the other hand, the resources have to be utilized efficiently in terms of performance and power consumption. However, the high level of on-chip integration increases the probability of various faults and creates hotspots leading to thermal problems. Additionally, radiation, which is frequent in space but is becoming an issue at ground level as well, can cause transient faults. This can eventually induce faulty execution of applications. Therefore, it is crucial to develop methods that enable efficient as well as resilient execution of applications. The main objective of the thesis is to propose an approach to designing agent-based systems for many-core platforms in a rigorous manner. When designing such a system, we explore and integrate various dynamic reconfiguration mechanisms into the agents' functionality. The use of these mechanisms enhances the resilience of the underlying platform whilst maintaining performance at an acceptable level. The design of the system proceeds according to a formal refinement approach, which allows us to ensure correct behaviour of the system with respect to postulated properties. To enable analysis of the proposed system in terms of area overhead as well as performance, we explore an approach in which the developed rigorous models are transformed into a high-level implementation language. Specifically, we investigate methods for deriving fault-free implementations from these models in, e.g., a hardware description language, namely VHDL.
Abstract:
In the new age of information technology, big data has grown to be a prominent phenomenon. As information technology evolves, organizations have begun to adopt big data and apply it as a tool throughout their decision-making processes. Research on big data has grown in recent years, however mainly from a technical stance, and there is a void in business-related cases. This thesis fills that gap by addressing big data challenges and failure cases. The Technology-Organization-Environment (TOE) framework was applied to carry out a literature review on trends in Business Intelligence and Knowledge Management information-system failures. A review of extant literature was carried out using a collection of leading information-systems journals. Academic papers and articles on big data, Business Intelligence, Decision Support Systems, and Knowledge Management systems were studied from both failure and success aspects in order to build a model for big data failure. I then delineate the contribution of the information-system failure literature, as it provides the principal dynamics behind the Technology-Organization-Environment framework. The gathered literature was categorised and a failure model was developed from the identified critical failure points. The failure constructs were further categorized, defined, and tabulated into a contextual diagram. The developed model and table are designed to act as a comprehensive starting point and as general guidance for academics, CIOs and other system stakeholders, facilitating decision-making in the big data adoption process by measuring the effect of technological, organizational, and environmental variables on perceived benefits, dissatisfaction and discontinued use.
Abstract:
Shrimp aquaculture has provided tremendous opportunity for the economic and social upliftment of rural communities in the coastal areas of our country. Over a hundred thousand farmers, of whom about 90% belong to the small and marginal category, are engaged in shrimp farming. Penaeus monodon is the most predominant cultured species in India and is mainly exported to highly sophisticated, quality- and safety-conscious world markets. Food safety has been of concern to humankind since the dawn of history, and this concern resulted in the evolution of a cost-effective food-safety assurance method, the Hazard Analysis Critical Control Point (HACCP) system. Considering the major contribution of cultured Penaeus monodon to total shrimp production, the economic losses encountered due to disease outbreaks, and the fact that traditional methods of quality control and end-point inspection cannot guarantee the safety of cultured seafood products, it is essential that science-based preventive approaches like HACCP and Prerequisite Programmes (PRP) be implemented in our shrimp farming operations. PRP is considered a support system that provides a solid foundation for HACCP. The safety of postlarvae (PL) supplied for brackish-water shrimp farming has also become an issue of concern over the past few years. The quality and safety of hatchery-produced seed have been deteriorating, and disease outbreaks have become very common in hatcheries. It is in this context that the necessity of following strict quarantine measures with standards and codes of practice becomes significant. Though there has been much hue and cry about the need to extend the focus of seafood-safety assurance from processing and exporting to the pre-harvest and hatchery rearing phases, experimental moves in this direction have been rare or nil. Only an integrated management system can assure effective control of the quality, hygiene and safety related issues. This study therefore aims at designing a safety and quality management system model for implementation in shrimp farming and hatchery operations by linking the concepts of HACCP and PRP.
Abstract:
In the course of the 'Livestock Revolution', extension and intensification of, among others, ruminant livestock production systems are current phenomena, with all their positive and negative side effects. Manure, one of the inevitable secondary products of livestock rearing, is a valuable source of plant nutrients, and its skillful recycling to the soil-plant interface is essential for soil fertility, nutrient (and especially phosphorus) use efficiency, and the preservation or re-establishment of environmentally sustainable farming systems, for which organic farming systems are exemplary. Against this background, the PhD research project presented here, which was embedded in the DFG-funded Research Training Group 1397 'Regulation of soil organic matter and nutrient turnover in organic agriculture', investigated possibilities to manipulate the diets of water buffalo (Bubalus bubalis L.) so as to produce manure of desired quality for organic vegetable production, without affecting the productivity of the animals used. Consisting of two major parts, the first study (chapter 2) tested the effects of diets differing in their ratios of carbon (C) to nitrogen (N) and of structural to non-structural carbohydrates on the quality of buffalo manure under subtropical conditions in Sohar, Sultanate of Oman. To this end, two trials were conducted with twelve water buffalo heifers each, using a full Latin square design. One control and four test diets were examined during three consecutive 7-day experimental periods, each preceded by 21 days of adaptation. Diets consisted of varying proportions of Rhodes grass hay, soybean meal, wheat bran, maize, dates, and a commercial concentrate to achieve (1) a high C/N and high NDF (neutral detergent fibre)/SC (soluble carbohydrate) ratio (HH), (2) a low C/N and low NDF/SC ratio (LL), (3) a high C/N and low NDF/SC ratio (HL), and (4) a low C/N and high NDF/SC ratio (LH). The effects of these diets, which were offered at 1.45 times the maintenance requirement of metabolizable energy, and of individual diet characteristics on the amount and quality of faeces excreted were determined and statistically analysed. The faeces produced from diets HH and LL were further tested in a companion PhD study (Mr. K. Siegfried) concerning their nutrient release in field experiments with radish and cabbage. The second study (chapter 3) focused on the effects of the above-described experimental diets on the rate of passage of feed particles through the gastrointestinal tract of four randomly chosen animals per treatment. To this end, an oral pulse dose of 683 mg fibre particles per kg live weight, marked with ytterbium (Yb; 14.5 mg Yb g⁻¹ organic matter), was administered at the start of the 7-day experimental period, which followed 21 days of adaptation. During the first two days a sample for Yb determination was kept from each faecal excretion; during days 3-7 faecal samples were kept from the first morning and the first evening defecation only. Particle passage was modelled using a one-compartment, age-dependent Gamma-2 model (see the sketch following this abstract). In both studies individual feed intake and faecal excretion were quantified throughout the experimental periods, and representative samples of feeds and faeces were subjected to proximate analysis following standard protocols. In the first study the organic matter (OM) intake and excretion of LL and LH buffaloes were significantly lower than those of HH and HL animals, respectively. Digestibility of N was highest in LH (88%) and lowest in HH (74%).
While NDF digestibility was also highest in LH (85%), it was lowest in LL (78%). Faecal N concentration was positively correlated (P≤0.001) with N intake and was significantly higher in faeces excreted by LL than by HH animals. Concentrations of fibre and starch in faecal OM were positively affected by the respective dietary concentrations, with NDF being highest in HH (77%) and lowest in LL (63%). The faecal C/N ratio was positively related (P≤0.001) to NDF intake; C/N ratios were 12 and 7 for HH and LL (P≤0.001), while values for HL and LH were 11.5 and 10.6 (P>0.05). The results of the second study showed that dietary N concentration positively affected faecal N concentration (P≤0.001), while it was negatively correlated with the faecal concentration of NDF (P≤0.05) and the faecal NDF/N and C/N ratios (P≤0.001). Particle passage through the mixing compartment was lower (P≤0.05) for HL (0.033 h⁻¹) than for LL (0.043 h⁻¹) animals, while values of 0.034 h⁻¹ and 0.038 h⁻¹ were obtained for groups LH and HH. At 55.4 h, total tract mean retention time was significantly (P≤0.05) lower in group LL than in all other groups, where values varied between 71 h (HH) and 79 h (HL); this was probably due to the high dietary N concentration of diet LL, which was negatively correlated with the time of first marker appearance in faeces (r = 0.84, P≤0.001), while the dietary C concentration was negatively correlated with particle passage through the mixing compartment (r = 0.57, P≤0.05). The results suggest that the manure quality of river buffalo heifers can be considerably influenced by diet composition. Despite the reportedly high fibre digestion capacity of buffalo, digestive processes did not suppress the expression of diet characteristics in the faeces. This is important when aiming at producing a specific manure quality for fertilization purposes in (organic) crop cultivation. Although there was a strong correlation between the ingestion and the faecal excretion of nitrogen, the correlation between dietary and faecal C/N ratio was weak. To influence manure mineralization, the dietary NDF and N concentrations appear to be the key control points, but modulating effects are achieved by including starch in the diet. Within the boundaries defined by the animals' metabolic and (re)productive requirements for energy and nutrients, diet formulation may thus take into account the abiotically and biotically determined manure turnover processes in the soil and the nutrient requirements of the crops to which the manure is applied, so as to increase nutrient use efficiency along the continuum of feed, animal, soil and crop in (organic) farming systems.
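The one-compartment, age-dependent Gamma-2 passage model mentioned in the abstract can be sketched as follows. This is a minimal sketch under stated assumptions: the functional form y(t) = A k² (t − tau) exp(−k (t − tau)) for t ≥ tau (else 0), the approximation of mean retention time as tau + 2/k, and the sampling times and concentrations are all illustrative, not data or code from the thesis.

# Minimal sketch: fitting a Gamma-2 marker-excretion curve to faecal Yb data.
import numpy as np
from scipy.optimize import curve_fit

def gamma2(t, A, k, tau):
    """Faecal marker concentration predicted by the assumed Gamma-2 form."""
    dt = np.clip(t - tau, 0.0, None)
    return A * k**2 * dt * np.exp(-k * dt)

t_obs = np.array([12, 18, 24, 30, 36, 48, 60, 72, 96, 120], dtype=float)  # h after dosing
y_obs = np.array([0.0, 0.4, 1.1, 1.6, 1.7, 1.4, 0.9, 0.5, 0.2, 0.05])     # marker conc. (arbitrary units)

params, _ = curve_fit(gamma2, t_obs, y_obs, p0=(120.0, 0.04, 10.0),
                      bounds=([0.0, 1e-3, 0.0], [1e4, 1.0, 24.0]))
A, k, tau = params
print(f"passage rate k = {k:.3f} h^-1, transit delay tau = {tau:.1f} h, "
      f"mean retention time ~ {tau + 2.0 / k:.1f} h")

Fitting such a curve per animal yields the passage rate through the mixing compartment and the total-tract mean retention time compared between the diet groups above.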