953 results for Complex combinatorial problem


Relevance: 30.00%

Abstract:

Due to advances in sensor networks and remote sensing technologies, the acquisition and storage rates of meteorological and climatological data increase every day and call for novel and efficient processing algorithms. A fundamental problem of data analysis and modeling is the spatial prediction of meteorological variables in complex orography, which serves, among other purposes, extended climatological analyses, the assimilation of data into numerical weather prediction models, the preparation of inputs for hydrological models, and real-time monitoring and short-term forecasting of weather.

In this thesis, a new framework for spatial estimation is proposed by taking advantage of a class of algorithms emerging from statistical learning theory. Nonparametric kernel-based methods for nonlinear data classification, regression and target detection, known as support vector machines (SVM), are adapted for the mapping of meteorological variables in complex orography.

With the advent of high-resolution digital elevation models, the field of spatial prediction has met new horizons. By exploiting image processing tools along with physical heuristics, a large number of terrain features which account for the topographic conditions at multiple spatial scales can be extracted. Such features are highly relevant for the mapping of meteorological variables because they control a considerable part of the spatial variability of meteorological fields in the complex Alpine orography. For instance, patterns of orographic rainfall, wind speed and cold-air pools are known to be correlated with particular terrain forms, e.g. convex/concave surfaces and upwind sides of mountain slopes.

Kernel-based methods are employed to learn the nonlinear statistical dependence which links the multidimensional space of geographical and topographic explanatory variables to the variable of interest, that is, the wind speed as measured at the weather stations or the occurrence of orographic rainfall patterns as extracted from sequences of radar images. Compared to low-dimensional models integrating only the geographical coordinates, the proposed framework opens a way to regionalize meteorological variables which are multidimensional in nature and rarely show spatial auto-correlation in the original space, making the use of classical geostatistics cumbersome.

The challenges explored during the thesis are manifold. First, the complexity of the models is optimized to impose appropriate smoothness properties and reduce the impact of noisy measurements. Secondly, a multiple kernel extension of SVM is considered to select the multiscale features which explain most of the spatial variability of wind speed. Then, SVM target detection methods are implemented to describe the orographic conditions which cause persistent and stationary rainfall patterns. Finally, the optimal splitting of the data is studied to estimate realistic performances and confidence intervals characterizing the uncertainty of predictions.

The resulting maps of average wind speeds find applications in renewable resources assessment and open a route to decreasing the temporal scale of analysis to meet hydrological requirements. Furthermore, the maps depicting the susceptibility to orographic rainfall enhancement can be used to improve current radar-based quantitative precipitation estimation and forecasting systems and to generate stochastic ensembles of precipitation fields conditioned upon the orography.
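
As a hedged illustration of the kind of kernel-based mapping described above, the sketch below fits a support vector regression of station wind speed on geographic coordinates plus terrain features, with the model complexity tuned by cross-validation (the smoothness/noise trade-off mentioned in the abstract). The feature columns and random data are placeholders, not the thesis' actual pipeline.

```python
# Minimal sketch: SVR mapping of wind speed from geographic + terrain features.
# All data below are synthetic placeholders.
import numpy as np
from sklearn.model_selection import GridSearchCV, KFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n_stations = 200
# Hypothetical columns: easting, northing, elevation, slope, curvature_1km, curvature_5km
X = rng.normal(size=(n_stations, 6))                      # placeholder predictors
y = rng.normal(loc=5.0, scale=2.0, size=n_stations)       # placeholder mean wind speed [m/s]

# Hyper-parameters (C, gamma, epsilon) control model complexity; they are
# selected by cross-validation, mirroring the smoothness/noise trade-off.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
search = GridSearchCV(
    model,
    param_grid={"svr__C": [1, 10, 100],
                "svr__gamma": [0.01, 0.1, 1.0],
                "svr__epsilon": [0.1, 0.5]},
    cv=KFold(n_splits=5, shuffle=True, random_state=0),
    scoring="neg_root_mean_squared_error",
)
search.fit(X, y)
print(search.best_params_, "CV RMSE:", round(-search.best_score_, 2))
```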

Relevance: 30.00%

Abstract:

Forensic scientists face increasingly complex inference problems for evaluating likelihood ratios (LRs) for an appropriate pair of propositions. Up to now, scientists and statisticians have derived LR formulae using an algebraic approach. However, this approach reaches its limits when addressing cases with an increasing number of variables and dependence relationships between these variables. In this study, we suggest using a graphical approach, based on the construction of Bayesian networks (BNs). We first construct a BN that captures the problem, and then deduce the expression for calculating the LR from this model to compare it with existing LR formulae. We illustrate this idea by applying it to the evaluation of an activity level LR in the context of the two-trace transfer problem. Our approach allows us to relax assumptions made in previous LR developments, produce a new LR formula for the two-trace transfer problem and generalize this scenario to n traces.
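
To make the graphical route concrete, here is a toy, hedged sketch (not the paper's two-trace model): a three-node network H → T → E with made-up probabilities, evaluated by brute-force enumeration to obtain LR = P(E | Hp) / P(E | Hd). Querying the same network with a BN inference engine would give the identical value.

```python
# Toy Bayesian network H -> T -> E for a single-trace transfer scenario.
# All conditional probabilities are illustrative placeholders.
p_transfer = {True: 0.6, False: 0.05}   # P(T = 1 | H)
p_evidence = {True: 0.9, False: 0.01}   # P(E = 1 | T)

def prob_evidence_given_h(h):
    """P(E = 1 | H = h), marginalizing over the latent transfer node T."""
    total = 0.0
    for t in (True, False):
        p_t = p_transfer[h] if t else 1.0 - p_transfer[h]
        total += p_t * p_evidence[t]
    return total

# Likelihood ratio for the propositions Hp (h = True) vs Hd (h = False).
lr = prob_evidence_given_h(True) / prob_evidence_given_h(False)
print(f"LR = {lr:.2f}")
```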

Relevance: 30.00%

Abstract:

The hydrological and biogeochemical processes that operate in catchments influence the ecological quality of freshwater systems through the delivery of fine sediment, nutrients and organic matter. Most models that seek to characterise the delivery of diffuse pollutants from land to water are reductionist. The multitude of processes that are parameterised in such models to ensure generic applicability makes them complex and difficult to test on available data. Here, we outline an alternative, data-driven, inverse approach. We apply SCIMAP, a parsimonious risk-based model with an explicit treatment of hydrological connectivity, and take a Bayesian approach to the inverse problem of determining the risk that must be assigned to different land uses in a catchment in order to explain the spatial patterns of measured in-stream nutrient concentrations. We apply the model to identify the key sources of nitrogen (N) and phosphorus (P) diffuse pollution risk in eleven UK catchments covering a range of landscapes. The model results show that: 1) some land uses generate a consistently high or low risk of diffuse nutrient pollution; 2) the risks associated with different land uses vary both between catchments and between nutrients; and 3) the dominant sources of P and N risk in a catchment are often a function of the spatial configuration of land uses. Taken on a case-by-case basis, this type of inverse approach may be used to help prioritise the focus of interventions to reduce diffuse pollution risk for freshwater ecosystems. (C) 2012 Elsevier B.V. All rights reserved.
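
As a hedged illustration of the inverse idea (this is not the SCIMAP code), the sketch below treats per-land-use risk weights as unknowns, predicts a relative in-stream concentration at each monitoring point as a mixture of upstream land-use fractions, and samples the posterior over the weights with a simple Metropolis random walk. All arrays are synthetic placeholders.

```python
# Toy Bayesian inversion of land-use risk weights from in-stream observations.
import numpy as np

rng = np.random.default_rng(1)
n_sites, n_landuses = 40, 3                               # e.g. arable, grassland, woodland
mix = rng.dirichlet(np.ones(n_landuses), size=n_sites)    # upstream land-use fractions per site
true_w = np.array([0.8, 0.5, 0.1])                        # hidden "risk" weights (demo only)
obs = mix @ true_w + rng.normal(0, 0.05, n_sites)         # synthetic observed concentrations

def log_post(w):
    """Log-posterior: uniform prior on [0, 1] per land use, Gaussian residuals."""
    if np.any(w < 0) or np.any(w > 1):
        return -np.inf
    resid = obs - mix @ w
    return -0.5 * np.sum(resid**2) / 0.05**2

w = np.full(n_landuses, 0.5)
samples = []
for _ in range(20000):
    prop = w + rng.normal(0, 0.05, n_landuses)            # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(w):
        w = prop                                          # Metropolis accept
    samples.append(w.copy())

post = np.array(samples[5000:])                           # discard burn-in
print("posterior mean risk per land use:", post.mean(axis=0).round(2))
```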

Relevance: 30.00%

Abstract:

Innovation is the word of this decade. By most definitions, a product or service does not qualify as an innovation unless it has a positive sales impact and a meaningful market share. The research problem of this master's thesis is to find out what the innovation process of complex new consumer products and services looks like in the new innovation paradigm. The objective is to answer two research questions: 1) What are the critical success factors a company should address when it is implementing the paradigm change in mass-market consumer business with complex products and services? 2) What process or framework could a firm follow? The research problem is examined from the points of view of one company's innovation creation process, networking, and organizational change management challenges. Special focus is placed on the perspective of an existing company that is entering a new business area. An innovation process management framework for complex new consumer products and services in the new innovation paradigm has been created with the support of several existing innovation theories. The new process framework includes the critical innovation process elements companies should take into consideration in their daily activities when implementing innovation in a new business. The case company's location-based business implementation activities are studied through the new innovation process framework. This case study showed how important it is to manage the process, to observe how the target market and its competition develop during the company's own innovation process, to make decisions at the right time, and to plan and implement organizational change management from the beginning as one activity in the innovation process. Finally, this master's thesis showed that every company needs to create its own innovation process master plan with milestones and activities. One plan does not fit all, but all companies can start their planning from the new innovation process introduced in this thesis.

Relevance: 30.00%

Abstract:

There is an increasing reliance on computers to solve complex engineering problems, because computers, in addition to supporting the development and implementation of adequate and clear models, can substantially reduce the financial resources required. The ability of computers to perform complex calculations at high speed has enabled the creation of highly complex systems to model real-world phenomena. The complexity of fluid dynamics problems makes it difficult or impossible to solve the equations for an object in a flow exactly. Approximate solutions can be obtained by constructing and measuring prototypes placed in a flow, or by numerical simulation. Since the use of prototypes can be prohibitively time-consuming and expensive, many have turned to simulations to provide insight during the engineering process, as the simulation setup and parameters can be altered much more easily than in a real-world experiment. The objective of this research work is to develop numerical models for different suspensions (fiber suspensions, blood flow through microvessels and branching geometries, and magnetic fluids), as well as for fluid flow through porous media. The models have merit as scientific tools and also have practical applications in industry. Most of the numerical simulations were carried out with the commercial software Fluent, with user-defined functions added to apply a multiscale method and a magnetic field. The results from the simulation of fiber suspensions can elucidate the physics behind the break-up of a fiber floc, opening the possibility of developing a meaningful numerical model of fiber flow. The simulation of blood movement from an arteriole to a venule via a capillary showed that the VOF-based model can successfully predict the deformation and flow of RBCs in an arteriole; furthermore, the results correspond to experimental observations showing that the RBCs deform during the movement. The concluding remarks provide a methodology and a mathematical and numerical framework for the simulation of blood flow in branching geometries. Analysis of the ferrofluid simulations indicates that the magnetic Soret effect can be even stronger than the conventional one and that its strength depends on the strength of the magnetic field, as confirmed experimentally by Völker and Odenbach. It was also shown that when a magnetic field is perpendicular to the temperature gradient, there is an additional increase in heat transfer compared to the cases where the magnetic field is parallel to the temperature gradient. In addition, a statistical evaluation (Taguchi technique) of the magnetic fluids showed that the temperature and the initial concentration of the magnetic phase make the maximum and minimum contributions to thermodiffusion, respectively. In the simulation of flow through porous media, the dimensionless pressure drop was studied at different Reynolds numbers, based on pore permeability and interstitial fluid velocity. The obtained results agreed well with the correlation of Macdonald et al. (1979) over the range of actual flow Reynolds numbers studied. Furthermore, the calculated dispersion coefficients in the cylinder geometry were found to be in agreement with those of Seymour and Callaghan.
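
For the porous-media part, the comparison mentioned above is against the Macdonald et al. (1979) modification of the Ergun correlation. The sketch below evaluates a pressure gradient of that Ergun type; the constants 180 and 1.8 are the values commonly quoted for smooth particles and should be treated as an assumption to be checked against the original paper.

```python
# Ergun/Macdonald-type pressure gradient through a packed bed (hedged sketch).
def pressure_drop_per_length(u, d_p, eps, mu=1.0e-3, rho=1.0e3):
    """Pressure gradient dP/L [Pa/m] for a packed bed.

    u    superficial velocity [m/s]
    d_p  particle diameter [m]
    eps  bed porosity [-]
    mu   dynamic viscosity [Pa*s], rho density [kg/m^3] (water defaults)
    Constants 180 / 1.8 are the commonly quoted Macdonald et al. (1979)
    values for smooth particles -- an assumption, not taken from the thesis.
    """
    viscous = 180.0 * mu * (1 - eps) ** 2 * u / (eps**3 * d_p**2)
    inertial = 1.8 * (1 - eps) * rho * u**2 / (eps**3 * d_p)
    return viscous + inertial

d_p, eps = 1.0e-3, 0.4
for u in (0.001, 0.01, 0.1):
    re_p = 1.0e3 * u * d_p / (1.0e-3 * (1 - eps))   # Ergun-type particle Reynolds number
    dp = pressure_drop_per_length(u, d_p, eps)
    print(f"u = {u:5.3f} m/s, Re = {re_p:7.2f}, dP/L = {dp:10.1f} Pa/m")
```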

Relevance: 30.00%

Abstract:

This thesis studies the use of heuristic algorithms in a number of combinatorial problems that occur in various resource-constrained environments. Such problems occur, for example, in manufacturing, where a restricted number of resources (tools, machines, feeder slots) are needed to perform some operations. Many of these problems turn out to be computationally intractable, and heuristic algorithms are used to provide efficient, yet sub-optimal solutions. The main goal of the present study is to build upon existing methods to create new heuristics that provide improved solutions for some of these problems. All of these problems occur in practice, and one of the motivations of our study was the request for improvements from industrial sources. We approach three different resource-constrained problems. The first is the tool switching and loading problem, which occurs especially in the assembly of printed circuit boards. This problem has to be solved when an efficient, yet small primary storage is used to access resources (tools) from a less efficient (but unlimited) secondary storage area. We study various forms of the problem and provide improved heuristics for its solution. Second, the nozzle assignment problem is concerned with selecting a suitable set of vacuum nozzles for the arms of a robotic assembly machine. It turns out that this is a specialized form of the MINMAX resource allocation formulation of the apportionment problem and can be solved efficiently and optimally. We construct an exact algorithm specialized for nozzle selection and provide a proof of its optimality. Third, the problem of feeder assignment and component tape construction occurs when electronic components are inserted and certain component types cause tape movement delays that can significantly impact the efficiency of printed circuit board assembly. Here, careful selection of component slots in the feeder improves the tape movement speed. We provide a formal proof that this problem is of the same complexity as the turnpike problem (a well-studied geometric optimization problem), and provide a heuristic algorithm for it.
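
For a fixed job sequence, the number of tool switches in the tool switching problem is conventionally evaluated with the classical Keep Tool Needed Soonest (KTNS) policy. The sketch below is that textbook baseline (assuming each job's tool set fits in the magazine), not the improved heuristics developed in the thesis.

```python
# KTNS evaluation of tool switches for a given job sequence (baseline sketch).
def ktns_switches(jobs, capacity):
    """jobs: list of tool sets required per job (each set assumed <= capacity);
    returns the number of tool switches after the initial magazine loading."""
    magazine, switches = set(), 0
    for i, needed in enumerate(jobs):
        def next_use(tool):
            # Index of the next job needing `tool`, or len(jobs) if never needed again.
            return next((j for j in range(i + 1, len(jobs)) if tool in jobs[j]),
                        len(jobs))
        for tool in needed - magazine:
            if len(magazine) >= capacity:
                # KTNS rule: evict the resident tool (not needed now) whose
                # next use lies furthest in the future.
                victim = max(magazine - needed, key=next_use)
                magazine.remove(victim)
                switches += 1
            magazine.add(tool)
    return switches

jobs = [{1, 2, 3}, {2, 4}, {1, 4, 5}, {3, 5}]
print(ktns_switches(jobs, capacity=3))   # -> 3 switches for this toy instance
```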

Relevance: 30.00%

Abstract:

Personalized medicine will revolutionize our capabilities to combat disease. Working toward this goal, a fundamental task is the deciphering of genetic variants that are predictive of complex diseases. Modern studies, in the form of genome-wide association studies (GWAS), have afforded researchers the opportunity to reveal new genotype-phenotype relationships through the extensive scanning of genetic variants. These studies typically contain over half a million genetic features for thousands of individuals. Examining such data with methods other than univariate statistics is a challenging task requiring advanced algorithms that are scalable to the genome-wide level. In the future, next-generation sequencing (NGS) studies will contain an even larger number of common and rare variants. Machine learning-based feature selection algorithms have been shown to be able to effectively create predictive models for various genotype-phenotype relationships. This work explores the problem of selecting genetic variant subsets that are the most predictive of complex disease phenotypes through various feature selection methodologies, including filter, wrapper and embedded algorithms. The examined machine learning algorithms were demonstrated not only to be effective at predicting the disease phenotypes, but also to do so efficiently through the use of computational shortcuts. While much of the work could be run on high-end desktops, some of it was further extended so that it could be implemented on parallel computers, helping to ensure that the methods will also scale to NGS data sets. Further, these studies analyzed the relationships between various feature selection methods and demonstrated the need for careful testing when selecting an algorithm. It was shown that there is no universally optimal algorithm for variant selection in GWAS; rather, methodologies need to be selected based on the desired outcome, such as the number of features to be included in the prediction model. It was also demonstrated that without proper model validation, for example using nested cross-validation, models can yield overly optimistic prediction accuracies and decreased generalization ability. It is through the implementation and application of machine learning methods that one can extract predictive genotype-phenotype relationships and biological insights from genetic data sets.
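
A minimal sketch of the nested cross-validation the text argues for is given below: feature selection and hyper-parameter tuning happen inside the inner loop, so the outer loop yields an honest performance estimate. The random genotype matrix is a placeholder standing in for real SNP data, and the filter/classifier choices are illustrative, not the thesis' actual methods.

```python
# Nested cross-validation with filter-type feature selection inside the inner loop.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(42)
X = rng.integers(0, 3, size=(300, 2000)).astype(float)   # genotypes coded 0/1/2 (placeholder)
y = rng.integers(0, 2, size=300)                         # case/control phenotype (placeholder)

pipe = Pipeline([
    ("select", SelectKBest(f_classif)),                  # filter-type variant selection
    ("clf", LogisticRegression(max_iter=1000)),
])
inner = GridSearchCV(                                    # inner loop: tune k and C
    pipe,
    param_grid={"select__k": [50, 200, 500], "clf__C": [0.1, 1.0]},
    cv=StratifiedKFold(5, shuffle=True, random_state=0),
)
outer_scores = cross_val_score(                          # outer loop: unbiased estimate
    inner, X, y, cv=StratifiedKFold(5, shuffle=True, random_state=1))
print("nested CV accuracy: %.2f +/- %.2f" % (outer_scores.mean(), outer_scores.std()))
```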

Relevance: 30.00%

Abstract:

The mechanism whereby cytochrome c oxidase catalyses electron transfer from cytochrome c to oxygen remains an unsolved problem. Polarographic and spectrophotometric activity measurements of purified, particulate and soluble forms of beef heart mitochondrial cytochrome c oxidase presented in this thesis confirm the following characteristics of the steady-state kinetics with respect to cytochrome c: (1) oxidation of ferrocytochrome c is first order under all conditions. (2) The relationship between substrate concentration and velocity is of the Michaelis-Menten type over a limited range of substrate concentrations at high ionic strength. (3) The reaction rate is independent of oxygen concentration down to very low levels of oxygen. (4) "Biphasic" kinetic plots of enzyme activity as a function of substrate concentration are found when the range of cytochrome c concentrations is extended; the biphasicity is more apparent in low ionic strength buffer. These results imply two binding sites for cytochrome c on the oxidase, one of high affinity and one of low affinity, with Km values of 1.0 pM and 3.0 pM, respectively, under low ionic strength conditions. (5) Inhibition of the enzymic rate by azide is non-competitive with respect to cytochrome c under all conditions, indicating that an internal electron transfer step, and not binding or dissociation of c from the enzyme, is rate limiting. The "tight" binding of cytochrome c to cytochrome c oxidase is confirmed in column chromatographic experiments. The complex has a cytochrome c:oxidase ratio of 1.0 and is dissociated in media of high ionic strength. Stopped-flow spectrophotometric studies of the reduction of equimolar mixtures and complexes of cytochrome c and the oxidase were initiated in an attempt to assess the functional relevance of such a complex. Two alternative routes for reduction of the oxidase, under conditions where the predominant species is the c-aa3 complex, are postulated: (i) electron transfer via tightly bound cytochrome c, (ii) electron transfer via a small population of free cytochrome c interacting at the "loose" binding site implied by the kinetic studies. It is impossible to conclude, based on the results obtained, which path is responsible for the reduction of cytochrome a. The rate of reduction by various reductants of free cytochrome c in high and low ionic strength and of cytochrome c electrostatically bound to cytochrome oxidase was investigated. Ascorbate, a negatively charged reagent, reduces free cytochrome c with a rate constant dependent on ionic strength, whereas the neutral reagents TMPD and DAD were relatively unaffected by ionic strength in their reduction of cytochrome c. The zwitterion cysteine behaved similarly to the uncharged reductants DAD and TMPD in exhibiting only a marginal response to ionic strength. Ascorbate reduces bound cytochrome c only slowly, but DAD and TMPD reduce bound cytochrome c rapidly. Reduction of cytochrome c by DAD and TMPD in the c-aa3 complex was enhanced 10-fold over DAD reduction of free c and 4-fold over TMPD reduction of free c. Thus, the importance of ionic strength for the reactivity of cytochrome c was observed, with the general conclusion being that, on the cytochrome c molecule, the areas for anion (i.e. phosphate) binding, ascorbate reduction and complexation to the oxidase overlap. The increased reducibility of bound cytochrome c by the reductants DAD and TMPD supports a suggested conformational change of electrostatically bound c compared to free c.

In addition, analysis of the electron distribution between cytochromes c and a in the complex suggests that the midpotential of cytochrome a changes with the redox state of the oxidase. Such evidence supports models of the oxidase which suggest that interactions within the enzyme (or the c-enzyme complex) result in altered midpoint potentials of the redox centers.
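
A compact way to express the biphasic substrate dependence noted in point (4), assuming two independent Michaelis-Menten binding sites (a standard modelling assumption, not a formula quoted from the thesis), is

```latex
v = \frac{V_{\max,1}\, S}{K_{m,1} + S} + \frac{V_{\max,2}\, S}{K_{m,2} + S},
\qquad S = [\text{ferrocytochrome } c],
```

where K_{m,1} and K_{m,2} correspond to the high- and low-affinity Km values reported above.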

Relevance: 30.00%

Abstract:

The design of a large and reliable DNA codeword library is a key problem in DNA-based computing. DNA codes, namely sets of fixed-length codewords under the edit metric over the alphabet {A, C, G, T}, must satisfy certain combinatorial constraints arising from the biological and chemical restrictions of DNA strands. The primary constraints that we consider are the reverse-complement constraint and the fixed GC-content constraint, as well as the basic edit distance constraint between codewords. We focus on exploring the theory underlying DNA codes and discuss several approaches to searching for optimal DNA codes. We use Conway's lexicode algorithm and an exhaustive search algorithm to produce provably optimal DNA codes for small parameter values, and we propose a genetic algorithm to search for sub-optimal DNA codes with relatively large parameter values, whose sizes can be regarded as reasonable lower bounds on the sizes of optimal DNA codes. Furthermore, we provide tables of bounds on the sizes of DNA codes with lengths from 1 to 9 and minimum distances from 1 to 9.
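
A hedged sketch of the Conway-style greedy (lexicode) construction is shown below: candidate words are scanned in lexicographic order and kept if they satisfy the fixed GC-content constraint, one common form of the reverse-complement constraint, and the minimum edit distance to all previously kept words. The tiny parameters are for illustration only; the thesis' exhaustive and genetic-algorithm searches go far beyond this.

```python
# Greedy lexicode-style construction of a small DNA code (illustrative sketch).
from itertools import product

COMP = str.maketrans("ACGT", "TGCA")

def edit_distance(a, b):
    """Standard Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def reverse_complement(w):
    return w.translate(COMP)[::-1]

def lexicode(n, d, gc):
    """Scan length-n words lexicographically; keep those meeting all constraints."""
    code = []
    for word in ("".join(t) for t in product("ACGT", repeat=n)):
        if sum(c in "GC" for c in word) != gc:
            continue                                  # fixed GC-content constraint
        rc = reverse_complement(word)
        if edit_distance(word, rc) < d:
            continue                                  # reverse-complement constraint (self)
        if all(edit_distance(word, c) >= d and edit_distance(rc, c) >= d for c in code):
            code.append(word)                         # distance constraints vs. kept words
    return code

print(lexicode(n=6, d=3, gc=3))   # small parameters, greedy: not claimed optimal
```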

Relevance: 30.00%

Abstract:

DNA assembly is among the most fundamental and difficult problems in bioinformatics. Near-optimal assembly solutions are available for bacterial and other small genomes; however, assembling large and complex genomes, especially the human genome, from Next-Generation Sequencing (NGS) data has proven very difficult because of the highly repetitive and complex nature of the human genome, short read lengths, uneven data coverage and tools that are not specifically built for human genomes. Moreover, many algorithms do not even scale to human genome datasets containing hundreds of millions of short reads. The DNA assembly problem is usually divided into several subproblems, including error detection and correction, contig creation, scaffolding and contig orientation, each of which can be seen as a distinct research area. This thesis focuses on creating contigs from the short reads and combining them with the outputs of other tools in order to obtain better results. Three assemblers, SOAPdenovo [Li09], Velvet [ZB08] and Meraculous [CHS+11], are selected for comparison in this thesis. The obtained results show that this work produces contigs comparable to those of the other assemblers, and that combining our contigs with the outputs of the other tools produces the best results, outperforming all of the other investigated assemblers.
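
As a toy illustration of the contig-creation step (not the thesis' assembler, and far from what SOAPdenovo, Velvet or Meraculous do in full), the sketch below builds a de Bruijn graph from k-mers and walks unambiguous single-in/single-out paths to emit contigs; error correction, coverage handling and repeat resolution are exactly the parts that make human genomes hard and are omitted here.

```python
# Minimal de Bruijn graph contig builder (illustrative sketch only).
from collections import defaultdict

def contigs_from_reads(reads, k=5):
    """Build a (k-1)-mer de Bruijn graph and emit maximal unambiguous paths."""
    graph = defaultdict(set)   # (k-1)-mer -> successor (k-1)-mers
    indeg = defaultdict(int)
    for read in reads:
        for i in range(len(read) - k + 1):
            a, b = read[i:i + k - 1], read[i + 1:i + k]
            if b not in graph[a]:
                graph[a].add(b)
                indeg[b] += 1

    def one_in_one_out(node):
        return len(graph[node]) == 1 and indeg[node] == 1

    contigs = []
    for node in list(graph):
        if one_in_one_out(node):
            continue                         # interior nodes are covered by path walks
        for nxt in graph[node]:
            contig = node + nxt[-1]
            while one_in_one_out(nxt):       # extend through unambiguous nodes
                nxt = next(iter(graph[nxt]))
                contig += nxt[-1]
            contigs.append(contig)
    return contigs

reads = ["ATGGCGTGCA", "GGCGTGCAAT", "GTGCAATTGC"]
print(contigs_from_reads(reads, k=5))        # -> ['ATGGCGTGCAATTGC']
```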

Relevance: 30.00%

Abstract:

This article discusses problems of governance and corruption in Africa within the framework of a broad political and philosophical debate between universalism and relativism, idealism and realism, and individualism and communitarianism. First, I argue that the realist approach to political ethics and leadership does not allow one to distinguish between the descriptive and prescriptive elements of governance, and can easily be used to justify leaders' "dirty hands" in the name of the higher interest of the nation, even in cases where personal interest is the sole motivating force behind actions that undermine ordinary social and ethical codes. Second, the article shows that the collapse of public trust in government and the weakness of the state reinforce sub-national communitarian politics, which tend to be ethnically based and exclusionary and therefore violate the core of public ethics, namely impartiality. Finally, the article suggests that universal ethical principles for public service be introduced as a complement to, rather than in competition with, local ethics that are socially and culturally confined to the private sphere. This requires, on the one hand, that we better understand the historical complexity, the economic and social circumstances, and the transitional political arrangements in African countries. On the other hand, we must invest in reflective civic and professional ethics education that adopts a nuanced stance between political realism and idealism, both as a starting point for institutional reforms and as a means of long-term behavioural change.

Relevance: 30.00%

Abstract:

The vertebrate heart is a modular organ that requires complex patterning of the cardiogenic morphogenetic fields and the coordinated convergence of diverse subpopulations of cardiogenic progenitors. At least seven transcription factors of the T-box family cooperate within these numerous progenitor subpopulations to regulate the morphogenesis and arrangement of multiple structures along the developing heart, which explains why human mutations in these genes cause various congenital heart defects (CHDs). One of these T-box genes, Tbx5, whose haploinsufficiency causes Holt-Oram syndrome (HOS), participates in a wide variety of gene regulatory networks (GRNs) that orchestrate the morphogenesis of the atria, the left ventricle, the mitral valve, the interatrial and interventricular septa, and the cardiac conduction system. The diversity of the GRNs involved in the formation of these cardiac structures suggests that Tbx5 has a profusion of functions that can only be identified by cataloguing its molecular activities in each cardiac lineage examined in isolation. To address this question, a genetic ablation of Tbx5 in the endocardium was carried out. This experiment demonstrated the crucial role of Tbx5 in the survival of the endocardial cells lining the septum primum and of the cardiomyocytes within this embryonic structure, which contributes to the morphogenesis of the interatrial septum. Furthermore, this study revealed the existence of cross-talk between the Tbx5+ endocardial cell subpopulation and the myocardium at the level of the septum primum, which ensures the survival of the cardiomyocytes and ultimately guarantees the maturation of the interatrial septum. Our results also confirm the importance of genetic interdependence between different loci (Tbx5 and Gata4, as well as Tbx5 and Nos3) in the morphogenesis of the interatrial septum, and in particular the influence the environment can have on the penetrance and expressivity of atrial septal defects (ASDs) in HOS. Moreover, since the functions of a gene ordinarily depend on the different isoforms it can generate, a second study focused more closely on the transcriptional aspect of Tbx5. This approach led to the discovery of six alternative transcripts exhibiting both shared and divergent functions. Characterization of two of these isoforms revealed the role of the long isoform (Tbx5_v1) in regulating cardiomyocyte growth during cardiogenesis, whereas the short isoform (Tbx5_v2), preferentially expressed in the mature heart, represses cell growth. It is therefore entirely conceivable that TBX5 mutations leading to truncation of the C-terminal region increase the concentration of a mutant protein that, like Tbx5_v2, interferes with the growth of certain cardiac structures. Conversely, the functional divergence of these isoforms, characterized by differences in subcellular localization and in interaction with other cardiac cofactors, suggests that mutations preferentially affecting one isoform would favour the emergence of a particular type of CHD.

Finally, a last objective was to identify the molecular mechanism(s) by which Tbx5 regulates its principal target gene, Nppa, and to extract from this the clues that would clarify its transcriptional function. This objective first required identifying the different cis-regulatory modules (CRMs) coordinating the transcriptional regulation of Nppa and Nppb, two natriuretic genes whose tandem organization and expression profile during cardiogenesis are conserved in the majority of vertebrates. The phylogenetic footprinting approach used to scan the Nppb/Nppa locus identified three CRMs conserved across various mammalian species, one of which (US3) is specific to eutherians. This study corroborated that regulation of the expression of the Nppb/Nppa gene tandem requires the transcriptional activity of enhancers in addition to the Nppa and Nppb promoters. The nearly perfect concordance between the expression profiles of Tbx5 and of these two natriuretic genes in mammals suggests that the ventricular expression gradient of Tbx5 is interpreted through the recruitment of this factor to the different enhancers identified. In sum, the studies presented in this thesis have clarified the profusion of cardiac functions that Tbx5 possesses. Some of these functions stem from the alternative splicing of Tbx5, which promotes the synthesis of isoforms endowed with specific properties. The various combinatorial interactions between these isoforms and other cardiac factors within the diverse subpopulations of cardiogenic progenitors contribute to the emergence of divergent cardiac GRNs.

Relevance: 30.00%

Abstract:

The assembly job shop scheduling problem (AJSP) is one of the most complicated combinatorial optimization problems, as it involves simultaneously scheduling the processing and assembly operations of complex structured products. The problem becomes even more complicated if a combination of two or more optimization criteria is considered. This thesis addresses an assembly job shop scheduling problem with multiple objectives: simultaneously minimizing makespan and total tardiness. Two approaches, a weighted approach and a Pareto approach, are used for solving the problem. However, it is quite difficult to achieve an optimal solution to this problem with traditional optimization approaches owing to the high computational complexity. Two metaheuristic techniques, genetic algorithm and tabu search, are therefore investigated in this thesis for solving multi-objective assembly job shop scheduling problems. Three algorithms based on these two metaheuristic techniques, covering the weighted approach and the Pareto approach, are proposed for the multi-objective assembly job shop scheduling problem (MOAJSP). A new pairing mechanism is developed for the crossover operation in the genetic algorithm, which leads to improved solutions and faster convergence. The performance of the proposed algorithms is evaluated on a set of test problems and the results are reported. The results reveal that the proposed algorithms based on the weighted approach are feasible and effective for solving MOAJSP instances according to the weight assigned to each objective criterion, and that the proposed algorithms based on the Pareto approach are capable of producing a number of good Pareto-optimal scheduling plans for MOAJSP instances.
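
A small, hedged sketch of the Pareto bookkeeping used by Pareto-approach algorithms is given below: candidate schedules evaluated on (makespan, total tardiness) are filtered down to the non-dominated set. The candidate tuples are placeholders; in the thesis the genetic algorithm and tabu search would generate and evaluate them.

```python
# Non-dominated (Pareto) filtering of bi-objective schedule evaluations.
def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    front = []
    for p in points:
        if any(dominates(q, p) for q in points if q != p):
            continue                       # p is dominated by some other candidate
        if p not in front:
            front.append(p)
    return front

# (makespan, total tardiness) of candidate schedules (placeholder values)
candidates = [(120, 35), (110, 50), (125, 20), (110, 45), (140, 10), (130, 20)]
print(pareto_front(candidates))            # -> [(120, 35), (125, 20), (110, 45), (140, 10)]
```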

Relevance: 30.00%

Abstract:

Designing is a heterogeneous, fuzzily defined, floating field of various activities and chunks of ideas and knowledge. Available theories about the foundations of designing, as presented in "the basic PARADOX" (Jonas and Meyer-Veden 2004), have evoked the impression of Babylonian confusion. We locate the reasons for this "mess" in the "non-fit", that is, the problematic relation between theories and their subject field. There seems to be a comparable interface problem in theory-building as in designing itself. "Complexity" sounds promising but turns out to be a problematic and not really helpful concept. I argue instead for a more precise application of systemic and evolutionary concepts, which, in my view, are able to model the underlying generative structures and processes that produce the visible phenomenon of complexity. It does not make sense to introduce a new fashionable meta-concept and to hope for a panacea before having clarified the more basic and still equally problematic older meta-concepts. This paper takes one step away from "theories of what" towards practice and doing, and tries instead to take a closer look at existing process models, or "theories of how" to design. Doing this from a systemic perspective leads to an evolutionary view of the process, which finally allows us to specify more clearly the "knowledge gaps" inherent in the design process. This aspect has to be taken into account as constitutive of any attempt at theory-building in design, which can be characterized as a "practice of not-knowing". I conclude that comprehensive "unified" theories, methods or process models run aground on the identified knowledge gaps, which allow neither reliable models of the present nor reliable projections into the future. Consolation may be found in performing a shift from the effort of adaptation towards strategies of exaptation, which means developing stocks of alternatives for coping with unpredictable situations in the future.

Relevance: 30.00%

Abstract:

Cubicles should provide good resting comfort as well as clean udders. Dairy cows in cubicle houses often face a restrictive environment with regard to resting behaviour, whereas cleanliness may still be impaired. This study aimed to determine reliable behavioural measures of resting comfort applicable in on-farm welfare assessments. Furthermore, relationships between cubicle design, cow sizes, management factors and udder cleanliness (namely of teats and teat tips) were investigated. Altogether 15 resting measures were examined in terms of feasibility, inter-observer reliability (IOR) and consistency of results per farm over time. They were recorded during three visits to farms in Germany and Austria with cubicle, deep litter and tie stall systems. Seven measures occurred too infrequently to allow reliable recording within a limited observation time. IOR was generally acceptable to excellent, except for 'collisions during lying down', which only showed good IOR after improvement of the definition. Only three measures were acceptably repeatable over time: 'duration of lying down', 'percentage of collisions during lying down' and 'percentage of cows lying partly or completely outside the lying area'. These measures were evaluated as suitable animal-based welfare measures of resting behaviour in the framework of an on-farm welfare assessment protocol. The second part of the thesis comprises a cross-sectional study on resting comfort and cow cleanliness including 23 Holstein Friesian dairy herds with very low within-farm variation in cubicle measures. Height at withers, shoulder width and diagonal body length were measured in 79-100 % of the cows (herd size 30 to 115 cows). Based on the 25 % largest animals, compliance with recommendations for cubicle dimensions was calculated. Cleanliness of different body parts, the udder, teats and teat tips was assessed for each cow in the herd prior to morning milking. No significant correlation was found between udder soiling and teat or teat tip soiling at the herd level. The final model of a stepwise regression on the percentage of dirty teats per farm explained 58.5 % of the variance and contained four factors: teat dipping after milking (which might be associated with an overall clean and accurate management style), deep bedded cubicles, increasing cubicle maintenance times and decreasing compliance concerning total cubicle length predicted lower teat soiling. The final model concerning teat tip soiling explained 46.0 % of the variance and contained three factors: increasing litter height in the rear part of the cubicle and increased alley soiling (which is difficult to explain) predicted less soiled teat tips, whereas increasing compliance concerning resting length was associated with higher percentages of dirty teat tips. The dependent variable 'duration of lying down' was also analysed using stepwise regression. The final model explained 54.8 % of the total variance. Lying-down duration was significantly shorter in deep bedded cubicles. Further explanatory, though not significant, factors in the model were neck-rail height, deep bedding or comfort mattresses versus concrete floor or rubber mats, and clearance height of side partitions. In an attempt to create a more comprehensive lying-down measure, another analysis was carried out with the percentage of 'impaired lying down' (i.e. events exceeding 6.3 seconds, with collisions, or interrupted) as the dependent variable. The explanatory value of this final model was 41.3 %.

An increase in partition length, increased compliance concerning cubicle width and the presence of straw within the bedding predicted a lower proportion of impaired lying down. The effect of partition length is difficult to interpret, but partition length and height were positively correlated on the study farms, possibly leading to a bigger zone of clear space for pelvic freedom. No associations could be found between impaired lying down and teat or teat tip soiling. Altogether, in agreement with earlier studies, it was found that cubicle dimensions in practice are often inadequate with regard to the body dimensions of the cows, leading to high proportions of impaired lying-down behaviour, whereas teat cleanliness is still unsatisfactory. Connections between cleanliness and cow comfort are far from simple. The relationship between cubicle characteristics and lying-down behaviour in particular is apparently very complex, so that it is difficult to identify single influential factors that are valid for all farm situations. However, based on the results of the present study, the use of deep bedded cubicles can be recommended, as well as improved management with special regard to cubicle and litter maintenance, in order to achieve both better resting comfort and teat cleanliness.