Abstract:
A Multi-Objective Antenna Placement Genetic Algorithm (MO-APGA) has been proposed for the synthesis of matched antenna arrays on complex platforms. The total number of antennas required, their position on the platform, location of loads, loading circuit parameters, decoupling and matching network topology, matching network parameters and feed network parameters are optimized simultaneously. The optimization goal was to provide a given minimum gain, specific gain discrimination between the main and back lobes and broadband performance. This algorithm is developed based on the non-dominated sorting genetic algorithm (NSGA-II) and Minimum Spanning Tree (MST) technique for producing diverse solutions when the number of objectives is increased beyond two. The proposed method is validated through the design of a wideband airborne SAR
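The base algorithm cited above, NSGA-II, ranks candidate designs by Pareto dominance before selection. A minimal sketch of that non-dominated sorting step, with hypothetical two-objective data (not the MO-APGA code itself), assuming minimisation of both objectives:

```python
# Hedged sketch of NSGA-II's non-dominated sorting (objectives minimised).
# The point data below are hypothetical illustrations, not results from the paper.

def dominates(p, q):
    """p dominates q if it is no worse in every objective and better in at least one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def non_dominated_sort(points):
    """Partition objective vectors into successive Pareto fronts (front 0 = best)."""
    remaining = list(range(len(points)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# e.g. hypothetical (gain error, bandwidth error) pairs for candidate placements
pts = [(1, 5), (2, 3), (3, 1), (2, 4), (4, 4)]
print(non_dominated_sort(pts))  # [[0, 1, 2], [3], [4]]
```

In NSGA-II proper, these fronts then drive elitist selection together with a crowding-distance measure that preserves diversity along each front.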
Design and study of self-assembled functional organic and hybrid systems for biological applications
Abstract:
The focus of self-assembly as a strategy for synthesis has been confined largely to molecules, because of the importance of manipulating the structure of matter at the molecular scale. We have investigated the influence of temperature and pH, in addition to the concentration of the capping agent used, on the formation of the nano-bio conjugates. For example, a narrower size distribution of the nanoparticles was observed with increasing protein concentration, which supports the fact that γ-globulin acts both as a controller of nucleation and as a stabiliser. As analysed through various photophysical, biophysical and microscopic techniques such as TEM, AFM, C-AFM, SEM, DLS, OPM, CD and FTIR, we observed that the initial photoactivation of γ-globulin at pH 12 for 3 h resulted in small protein fibres of ca. Further irradiation for 24 h led to the formation of self-assembled long fibres of the protein of ca. 5-6 nm and the observation of a surface plasmon resonance band at around 520 nm, with concomitant quenching of the luminescence intensity at 680 nm. The observed light-triggered self-assembly of the protein and its effect on controlling the fate of the anchored nanoparticles can be compared with naturally occurring processes such as photomorphogenesis. Furthermore, our approach offers a way to understand the role played by the self-assembly of the protein in the ordering and knock-out of the metal nanoparticles, and also in the design of nano-biohybrid materials for medicinal and optoelectronic applications. Investigation of the potential applications of the NIR-absorbing and water-soluble squaraine dyes 1-3 for protein labeling and as anti-amyloid agents forms the subject matter of the third chapter of the thesis. The study of their interactions with various proteins revealed that 1-3 showed unique interactions with serum albumins as well as lysozyme.
Binding of the dyes to lysozyme resulted in changes of 69%, 71% and 49% in the absorption spectra, as well as significant quenching of the fluorescence intensity of the dyes 1-3, respectively. Half-reciprocal analysis of the absorption data and isothermal titration calorimetric (ITC) analysis of the titration experiments gave a 1:1 stoichiometry for the complexes formed between lysozyme and the squaraine dyes, with association constants (Kass) in the range 10^4-10^5 M^-1. We have determined the changes in free energy (ΔG) for complex formation, and the values are found to be -30.78, -32.31 and -28.58 kJ mol^-1 for the dyes 1, 2 and 3, respectively. Furthermore, we have observed a strong induced CD (ICD) signal corresponding to the squaraine chromophore in the case of the halogenated squaraine dyes 2 and 3, at 636 and 637 nm, confirming complex formation in these cases. To understand the nature of the interaction of the squaraine dyes 1-3 with lysozyme, we have investigated the interaction of the dyes with different amino acids. These results indicated that the dyes 1-3 show significant interactions with cysteine and glutamic acid, which are present in the side chains of lysozyme. In addition, temperature-dependent studies have revealed that the interaction of the dyes with lysozyme is irreversible. Furthermore, we have investigated the interactions of these NIR dyes with β-amyloid fibres derived from lysozyme, to evaluate their potential as inhibitors of this biologically important protein aggregation. β-amyloid fibrils are insoluble protein aggregates that have been associated with a range of neurodegenerative diseases, including Huntington's, Alzheimer's, Parkinson's and Creutzfeldt-Jakob diseases. We have synthesized amyloid fibres from lysozyme through its incubation in acidic solution below pH 4, allowing it to form amyloid fibres at elevated temperature.
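The reported ΔG values can be cross-checked against the association constants with the standard relation ΔG = -RT ln(Kass). A quick sketch; the Kass value used below is an assumed round figure within the reported 10^4-10^5 M^-1 range, not a number taken from the thesis:

```python
# Worked thermodynamic check (standard relation, not code from the thesis):
# ΔG = -RT ln(Kass) at T ≈ 298 K.
import math

R = 8.314   # gas constant, J mol^-1 K^-1
T = 298.0   # assumed temperature, K

def delta_g_kj(k_assoc):
    """Free energy of association in kJ mol^-1 from an association constant."""
    return -R * T * math.log(k_assoc) / 1000.0

# an assumed Kass of ~2.5e5 M^-1 reproduces the ~-30.8 kJ mol^-1 scale
# reported for dye 1
print(round(delta_g_kj(2.5e5), 1))  # -30.8
```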
To quantify the binding affinities of the squaraine dyes 1-3 with the β-amyloids, we carried out isothermal titration calorimetric (ITC) measurements. The association constants were determined to be 1.2 × 10^5, 3.6 × 10^5 and 3.2 × 10^5 M^-1 for the dyes 1-3, respectively. To gain more insight into the amyloid-inhibiting nature of the squaraine dyes under investigation, we carried out thioflavin assays, CD, isothermal titration calorimetry and microscopic analysis. The addition of the dyes 1-3 (5 μM) led to complete quenching of the apparent thioflavin fluorescence, indicating the destabilization of the β-amyloid fibres in the presence of the squaraine dyes. Further, the inhibition of the amyloid fibres by the squaraine dyes 1-3 has been evidenced through DLS, TEM, AFM and SAED, wherein we observed the complete destabilization of the amyloid fibre and its transformation into spherical particles of ca. These results demonstrate that the squaraine dyes 1-3 can act as protein labeling agents as well as inhibitors of protein amyloidogenesis. The last chapter of the thesis describes the synthesis and investigation of the self-assembly as well as the bio-imaging aspects of a few novel tetraphenylethene conjugates 4-6. Expectedly, these conjugates showed significant solvatochromism and exhibited a hypsochromic shift (negative solvatochromism) as the solvent polarity increased; these observations were justified through theoretical studies employing the B3LYP/6-31G method. We have investigated the self-assembly properties of these D-A conjugates through variation of the percentage of water in acetonitrile solution, owing to the formation of nanoaggregates. Further, the contour map of the observed fluorescence intensity as a function of the fluorescence excitation and emission wavelengths confirmed the formation of J-type aggregates in these cases.
To gain a better understanding of the type of self-assemblies formed from the TPE conjugates 4-6, we carried out morphological analysis through various microscopic techniques such as DLS, SEM and TEM. At ca. 70% water content we observed rod-shaped architectures, ~780 nm in diameter and ~12 μm in length, as evidenced through TEM and SEM analysis. We made similar observations with the dodecyl conjugate 5. At ca. 70% and 50% water/acetonitrile mixtures, the aggregates formed from 4 and 5 were found to be highly crystalline, and such structures transformed to an amorphous nature as the water fraction was increased to 99%. To evaluate the potential of the conjugates as bio-imaging agents, we carried out in vitro cytotoxicity and cellular uptake studies through MTT assays, flow cytometry and confocal laser scanning microscopy. Thus, nanoparticles of these conjugates, which exhibited efficient emission, a large Stokes shift, good stability, biocompatibility and excellent cellular imaging properties, can have potential applications for tracking cells as well as in cell-based therapies. In summary, we have synthesized novel functional organic chromophores and have carried out a systematic investigation of the self-assembly of these synthetic and biological building blocks under a variety of conditions. The investigation of the interaction of the water-soluble NIR squaraine dyes with lysozyme indicates that these dyes can act as protein labeling agents, and their efficient inhibition of β-amyloid fibres indicates their potential as anti-amyloid agents.
Abstract:
The study covers fishing capture technology innovation, which includes the catching of aquatic animals using any kind of gear technique operated from a vessel. The utilization of fishing techniques varies depending upon the type of fishery, and can range from a simple small hook attached to a line to huge and complex midwater trawls or seines operated by large fishing vessels. The size and autonomy of a fishing vessel are largely determined by its ability to handle, process and store fish in good condition on board, and these two characteristics have thus been greatly influenced by the introduction and utilization of ice and refrigeration machinery. Other technological developments, especially hydraulic hauling machinery, fish-finding electronics and synthetic twines, have also had a major impact on the efficiency and profitability of fishing vessels. A wide variety of fishing gears and practices, ranging from small-scale artisanal to advanced mechanised systems, are used for fish capture in Kerala. The most important among these fishing gears are trawls, seines, lines, gillnets, entangling nets and traps. The modern sector was introduced in 1953 in the Neendakara-Shakthikulangara region under the initiative of the Indo-Norwegian Project (INP). The novel facilities introduced into the fishing industry by the Indo-Norwegian Project were mechanically operated new boats with new fishing nets. Soon after mechanization, the motorization programme gained momentum in Kerala, especially in the Alleppey, Ernakulam and Kollam districts.
Abstract:
The identification of chemical mechanisms that can exhibit oscillatory phenomena in reaction networks is currently of intense interest. In particular, the parametric question of the existence of Hopf bifurcations has gained increasing popularity due to its relation to oscillatory behavior around fixed points. However, the detection of oscillations in high-dimensional systems and systems with constraints by the available symbolic methods has proven to be difficult. The development of new efficient methods is therefore required to tackle the complexity caused by the high dimensionality and non-linearity of these systems. In this thesis, we mainly present efficient algorithmic methods to detect Hopf bifurcation fixed points in (bio)chemical reaction networks with symbolic rate constants, thereby yielding information about the oscillatory behavior of the networks. The methods use representations of the systems in convex coordinates that arise from stoichiometric network analysis. One of the methods, called HoCoQ, reduces the problem of determining the existence of Hopf bifurcation fixed points to a first-order formula over the ordered field of the reals that can then be solved using computational-logic packages. The second method, called HoCaT, uses ideas from tropical geometry to formulate a more efficient method that is incomplete in theory but has worked very well for the attempted high-dimensional models involving more than 20 chemical species. The instability of reaction networks may lead to oscillatory behaviour. Therefore, we investigate some criteria for their stability using convex coordinates and quantifier-elimination techniques.
We also study Muldowney's extension of the classical Bendixson-Dulac criterion for excluding periodic orbits to higher dimensions for polynomial vector fields, and we discuss the use of simple conservation constraints and parametric constraints for describing simple convex polytopes on which periodic orbits can be excluded by Muldowney's criteria. All developed algorithms have been integrated into a common software framework called PoCaB (platform to explore biochemical reaction networks by algebraic methods), allowing for automated computation workflows from the problem descriptions. PoCaB also contains a database for the algebraic entities computed from the models of chemical reaction networks.
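The core criterion behind Hopf bifurcation detection can be illustrated on a textbook example (this is a minimal sketch of the underlying mathematics, not the HoCoQ/HoCaT algorithms): for the classic Brusselator x' = a - (b+1)x + x²y, y' = bx - x²y, the Jacobian at the fixed point (x, y) = (a, b/a) has trace b - 1 - a², so a pair of eigenvalues crosses the imaginary axis, signalling a Hopf bifurcation, at b = 1 + a².

```python
# Numerical illustration of a Hopf condition for the Brusselator
# (textbook example; not the symbolic HoCoQ/HoCaT methods of the thesis).

def jacobian_trace(a, b):
    """Trace of the Brusselator Jacobian at its fixed point (a, b/a)."""
    x, y = a, b / a
    dfdx = -(b + 1) + 2 * x * y   # d(a - (b+1)x + x^2 y)/dx
    dgdy = -x**2                  # d(bx - x^2 y)/dy
    return dfdx + dgdy            # = b - 1 - a^2

def hopf_b(a, lo=0.0, hi=100.0, tol=1e-10):
    """Locate the trace zero-crossing in b by bisection."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if jacobian_trace(a, mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(round(hopf_b(1.5), 6))  # analytic Hopf value: 1 + 1.5^2 = 3.25
```

The symbolic methods in the thesis answer the same kind of question, but for parametric high-dimensional networks where such closed-form fixed points and traces are not available.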
Abstract:
Since no physical system can ever be completely isolated from its environment, the study of open quantum systems is pivotal to reliably and accurately controlling complex quantum systems. In practice, the reliability of the control field needs to be confirmed via certification of the target evolution, while accuracy requires the derivation of high-fidelity control schemes in the presence of decoherence. In the first part of this thesis, an algebraic framework is presented that makes it possible to determine the minimal requirements for the unique characterisation of arbitrary unitary gates in open quantum systems, independent of the particular physical implementation of the employed quantum device. To this end, a set of theorems is devised that can be used to assess whether a given set of input states on a quantum channel is sufficient to judge whether a desired unitary gate is realised. This makes it possible to determine the minimal input for such a task, which proves to be, quite remarkably, independent of system size. These results elucidate the fundamental limits regarding certification and tomography of open quantum systems. Combining these insights with state-of-the-art Monte Carlo process certification techniques permits a significant improvement in scaling when certifying arbitrary unitary gates. This improvement is not restricted to quantum information devices where the basic information carrier is the qubit: it also extends to systems where the fundamental informational entities can be of arbitrary dimensionality, the so-called qudits. The second part of this thesis concerns the impact of these findings from the point of view of Optimal Control Theory (OCT). OCT for quantum systems utilises concepts from engineering, such as feedback and optimisation, to engineer constructive and destructive interferences in order to steer a physical process in a desired direction.
It turns out that the aforementioned mathematical findings allow the derivation of novel optimisation functionals that significantly reduce not only the memory required by numerical control algorithms but also the total CPU time required to obtain a given fidelity for the optimised process. The thesis concludes by discussing two problems of fundamental interest in quantum information processing from the point of view of optimal control: the preparation of pure states and the implementation of unitary gates in open quantum systems. For both cases, specific physical examples are considered: for the former, the vibrational cooling of molecules via optical pumping, and for the latter, a superconducting phase qudit implementation. In particular, it is illustrated how features of the environment can be exploited to reach the desired targets.
Abstract:
This thesis describes a methodology, a representation, and an implemented program for troubleshooting digital circuit boards at roughly the level of expertise one might expect in a human novice. Existing methods for model-based troubleshooting have not scaled up to deal with complex circuits, in part because traditional circuit models do not explicitly represent aspects of the device that troubleshooters would consider important. For complex devices, the model of the target device should be constructed with the goal of troubleshooting explicitly in mind. Given that methodology, the principal contributions of the thesis are ways of representing complex circuits to help make troubleshooting feasible. Temporally coarse behavior descriptions are a particularly powerful simplification. Instantiating this idea for the circuit domain produces a vocabulary for describing digital signals. The vocabulary has a level of temporal detail sufficient to make useful predictions about the response of the circuit, while it remains coarse enough to make those predictions computationally tractable. Other contributions are principles for using these representations. Although not embodied in a program, these principles are sufficiently concrete that models can be constructed manually from existing circuit descriptions such as schematics, part specifications, and state diagrams. One such principle is that if there are components with particularly likely failure modes, or failure modes in which their behavior is drastically simplified, this knowledge should be incorporated into the model. Further contributions include the solution of technical problems resulting from the use of explicit temporal representations and design descriptions with tangled hierarchies.
Abstract:
Expert supervision systems are software applications specially designed to automate process monitoring. The goal is to reduce dependency on human operators to assure the correct operation of a process, including faulty situations. Construction of this kind of application involves an important design and development task in order to represent and manipulate process data and behaviour at different degrees of abstraction, and to interface with the data acquisition systems connected to the process. This is an open problem that becomes more complex with the number of variables, parameters and relations needed to account for the complexity of the process. Multiple specialised modules, each tuned to solve a simpler task and operating under coordination, provide a solution. A modular architecture based on concepts of software agents, taking advantage of the integration of diverse knowledge-based techniques, is proposed for this purpose. The components (software agents, communication mechanisms and perception/action mechanisms) are based on ICa (Intelligent Control architecture), a software middleware supporting the construction of applications with software-agent features.
Abstract:
The Networks and Complexity in Social Systems course commences with an overview of the nascent field of complex networks, dividing it into three related but distinct strands: Statistical description of large scale networks, viewed as static objects; the dynamic evolution of networks, where now the structure of the network is understood in terms of a growth process; and dynamical processes that take place on fixed networks; that is, "networked dynamical systems". (A fourth area of potential research ties all the previous three strands together under the rubric of co-evolution of networks and dynamics, but very little research has been done in this vein and so it is omitted.) The remainder of the course treats each of the three strands in greater detail, introducing technical knowledge as required, summarizing the research papers that have introduced the principal ideas, and pointing out directions for future development. With regard to networked dynamical systems, the course treats in detail the more specific topic of information propagation in networks, in part because this topic is of great relevance to social science, and in part because it has received the most attention in the literature to date.
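The information-propagation topic above is often introduced through threshold cascade models. A minimal sketch of a deterministic linear-threshold cascade on a fixed network (an illustrative toy model with a hypothetical graph, not course material itself): a node adopts once the fraction of its neighbours that have adopted reaches its threshold.

```python
# Deterministic linear-threshold cascade on a fixed graph (toy illustration).
# A node adopts once the adopted fraction of its neighbours reaches theta.

def threshold_cascade(adj, seeds, theta=0.5):
    """Iterate adoption to a fixed point; returns the final adopter set."""
    adopted = set(seeds)
    changed = True
    while changed:
        changed = False
        for node, nbrs in adj.items():
            if node in adopted:
                continue
            if nbrs and sum(n in adopted for n in nbrs) / len(nbrs) >= theta:
                adopted.add(node)
                changed = True
    return adopted

# hypothetical 5-node network
adj = {
    "a": ["b", "c"],
    "b": ["a", "c", "d"],
    "c": ["a", "b", "d"],
    "d": ["b", "c", "e"],
    "e": ["d"],
}
print(sorted(threshold_cascade(adj, {"a"})))       # ['a']  (cascade dies out)
print(sorted(threshold_cascade(adj, {"a", "b"})))  # ['a', 'b', 'c', 'd', 'e']
```

The contrast between the two runs captures the qualitative point studied in the cascade literature: whether a seed set triggers a global cascade depends jointly on thresholds and network structure.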
Abstract:
The Integrated Mass Transit Systems are an initiative of the Colombian Government to replicate the experience of Bogota’s Bus Rapid Transit System —Transmilenio— in large urban areas of the country, most of them over municipal perimeters to provide transportation services to areas undergoing a metropolization process. Management of these large scale metropolitan infrastructure projects involves complex setups that present new challenges in the interaction between stakeholders and interests between municipalities, tiers of government and public and private sectors. This article presents a compilation of the management process of these projects from the national context, based on a document review of the regulatory framework, complemented by interviews with key stakeholders at the national level. Research suggests that the implementation of large-scale metropolitan projects requires a management framework orientated to overcome the traditional tensions between centralism and municipal autonomy.
Abstract:
This thesis falls within the CICYT project TAP 1999-0443-C05-01. The goal of this project is the design, implementation and evaluation of mobile robots with a distributed control system, sensing systems and a communication network to carry out surveillance tasks. The robots must be able to move through an environment, recognising the position and orientation of the objects surrounding them. This information must allow the robot to localise itself within its environment in order to move while avoiding obstacles and to carry out its assigned task. The robot must generate a dynamic map of the environment, which will be used to localise its position. The main goal of this project is for a robot to explore and build a map of the environment without the need to modify the environment itself. This thesis focuses on the study of the geometry of stereoscopic vision systems formed by two cameras, with the aim of obtaining 3D geometric information about the environment of a vehicle. This objective involves the study of camera modelling and calibration, and the understanding of epipolar geometry. This geometry is contained in what is called the fundamental matrix. A study of the computation of the fundamental matrix of a stereoscopic system is needed in order to reduce the correspondence problem between two image planes. Another objective is to study motion estimation methods based on differential epipolar geometry, in order to perceive the motion of the robot and obtain its position. These studies of the geometry surrounding stereoscopic vision systems allow us to present a computer vision system mounted on a mobile robot that navigates an unknown environment. The system enables the robot to generate a dynamic map of the environment as it moves and to determine its own motion so as to localise itself within the map.
The thesis presents a comparative study of the camera calibration methods most widely used in recent decades. These techniques cover a broad range of classical calibration methods, which estimate the camera parameters from a set of 3D points and their corresponding 2D projections in an image. The study thus describes a total of five different calibration techniques, covering implicit versus explicit calibration and linear versus non-linear calibration. It should be noted that a great effort has been made to use the same nomenclature and to standardise the notation across all the techniques presented; this is one of the main difficulties in comparing calibration techniques, since each author defines different coordinate systems and different sets of parameters. The reader is introduced to camera calibration with the linear, implicit technique proposed by Hall and the linear, explicit technique proposed by Faugeras-Toscani. The method of Faugeras, including modelling of radial lens distortion, is then described, followed by the well-known method proposed by Tsai and, finally, a detailed description of the calibration method proposed by Weng. All the methods are compared both in terms of the camera model used and of calibration accuracy. All of these methods have been implemented, and their accuracy has been analysed, presenting results obtained with both synthetic data and real cameras. By calibrating each camera of the stereoscopic system, a set of geometric constraints between the two images can be established. These relations are what is called epipolar geometry, and they are contained in the fundamental matrix.
Knowing the epipolar geometry, one can: simplify the correspondence problem by reducing the search space to an epipolar line; estimate the motion of a camera mounted on a mobile robot for tracking or navigation tasks; and reconstruct a scene for inspection, prototyping or mould-making applications. The fundamental matrix is estimated from a set of points in one image and their correspondences in a second image. The thesis presents a state of the art of fundamental-matrix estimation techniques, starting with linear methods such as the seven-point and eight-point methods, moving on to iterative methods such as the gradient-based method and CFNS, and finishing with robust methods such as M-Estimators, LMedS and RANSAC. In this work, up to 15 methods with 19 different implementations are described. These techniques are compared both algorithmically and in terms of the accuracy they achieve, presenting results obtained with both real images and synthetic images with different noise levels and different proportions of false correspondences. Traditionally, camera motion estimation has been based on applying epipolar geometry between each pair of consecutive images. However, the traditional case of epipolar geometry has some limitations for a camera mounted on a mobile robot: the differences between two consecutive images are very small, which causes inaccuracies in the computation of the fundamental matrix, and the correspondence problem must be solved, a process that is computationally expensive and not very effective for real-time applications. In these circumstances, camera motion estimation techniques are usually based on optical flow and on differential epipolar geometry. The thesis gathers all these techniques, duly classified.
These methods are described with a unified notation, highlighting the similarities and differences between the discrete and differential cases of epipolar geometry. In order to apply these methods to the motion estimation of a mobile robot, these general methods, which estimate the motion of a camera with six degrees of freedom, have been adapted to the case of a mobile robot moving on a planar surface. Results obtained both with the general six-degrees-of-freedom methods and with those adapted to a mobile robot are presented, using synthetic data and sequences of real images. The thesis concludes with a proposal for a localisation and map-building system using a stereoscopic system mounted on a mobile robot. Various mobile robotics applications require a localisation system to facilitate the navigation of the vehicle and the execution of planned trajectories. Localisation is always relative to the map of the environment in which the robot is moving, and map building in an unknown environment is an important task for future generations of mobile robots. The system presented performs localisation and builds the map of the environment simultaneously. The thesis describes the mobile robot GRILL, which has been the working platform for this application, together with the stereoscopic vision system that has been designed and mounted on the robot, and all the processes involved in the localisation and map-building system. The implementation of these processes has been possible thanks to the studies presented previously (camera calibration, fundamental-matrix estimation and motion estimation), without which this system could not have been devised. Finally, the maps obtained along several trajectories carried out by the GRILL robot in the laboratory are presented.
The main contributions of this work are: · A state of the art on camera calibration methods, in which the methods are compared both in terms of the camera model used and of their accuracy. · A study of fundamental-matrix estimation methods, in which all the techniques studied are classified and described from an algorithmic point of view. · A survey of camera motion estimation techniques, focused on methods based on differential epipolar geometry; these techniques have been adapted to estimate the motion of a mobile robot. · A mobile robotics application that builds a dynamic map of the environment and localises the robot by means of a stereoscopic system; the application is described in terms of both the hardware and the software that have been designed and implemented.
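The epipolar constraint that underpins the fundamental-matrix work above can be shown in its simplest form. For a rectified stereo pair (pure horizontal translation between cameras), the fundamental matrix reduces to F = [0 0 0; 0 0 -1; 0 1 0], so the constraint x2ᵀ F x1 = 0 says exactly that matching points share the same image row. A small sketch with hypothetical pixel coordinates (not code from the thesis):

```python
# Epipolar constraint check for a rectified stereo pair.
# F below is the standard fundamental matrix of a pure horizontal translation;
# the point coordinates are hypothetical.

F = [[0, 0, 0],
     [0, 0, -1],
     [0, 1, 0]]

def epipolar_residual(x1, x2, F):
    """Compute x2^T F x1 for homogeneous image points x1, x2 (zero for true matches)."""
    Fx1 = [sum(F[i][j] * x1[j] for j in range(3)) for i in range(3)]
    return sum(x2[i] * Fx1[i] for i in range(3))

# a match on the same scanline satisfies the constraint...
print(epipolar_residual([120, 85, 1], [95, 85, 1], F))  # 0
# ...while a vertically displaced (false) match does not
print(epipolar_residual([120, 85, 1], [95, 90, 1], F))  # -5
```

Estimating F for an arbitrary camera pair (seven-point, eight-point, robust methods) generalises exactly this residual: the estimators minimise x2ᵀ F x1 over many correspondences.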
Abstract:
The evaluation of long-term care (LTC) systems carried out in Work Package 7 of the ANCIEN project shows which performance criteria are important and, based on the available information, how European countries score on those criteria. This paper summarises the results and discusses the policy implications. An overall evaluation was carried out for four representative countries: Germany, the Netherlands, Spain and Poland. Of the four countries, the Dutch system has the highest scores on quality of life of LTC users, quality of care and equity of the LTC system, and it performs second-best after Poland in terms of the total burden of care (consisting of the financial burden and the burden of informal caregiving). The German system has somewhat lower scores than the Dutch on all four dimensions. The Polish system excels in having a low total burden of care, but it scores the lowest on quality of care and equity. The Spanish system has few extreme scores. Some important lessons are the following. The performance of an LTC system is a complex concept in which many dimensions have to be included. Specifically, the impact of informal caregiving on the caregivers and on society should not be forgotten. The role of the state in funding and organising LTC versus individual responsibilities is one of the most important differences among countries. Choices concerning private funding and the role of informal care have a large effect not only on public expenditures but also on the fairness of the system. International research into the relative preferences for the different performance criteria could produce a sound basis for the weights used in the overall evaluation.
Abstract:
Biodiversity-ecosystem functioning theory would predict that increasing natural enemy richness should enhance prey consumption rate due to functional complementarity of enemy species. However, several studies show that ecological interactions among natural enemies may result in complex effects of enemy diversity on prey consumption. Therefore, the challenge in understanding natural enemy diversity effects is to predict consumption rates of multiple enemies taking into account effects arising from patterns of prey use together with species interactions. Here, we show how complementary and redundant prey use patterns result in additive and saturating effects, respectively, and how ecological interactions such as phenotypic niche shifts, synergy and intraguild predation enlarge the range of outcomes to include null, synergistic and antagonistic effects. This study provides a simple theoretical framework that can be applied to experimental studies to infer the biological mechanisms underlying natural enemy diversity effects on prey.
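The contrast between additive and saturating effects described above can be made concrete with a toy model (my own illustration, not the paper's framework): if each enemy alone removes a fraction of the prey, complementary prey use lets those fractions add, while redundant prey use means the enemies' risks overlap and the combined effect saturates.

```python
# Toy model of combined prey consumption by two natural enemies
# (illustrative sketch; the consumption fractions are hypothetical).

def combined_complementary(c1, c2):
    """Enemies attack disjoint prey fractions: effects add."""
    return c1 + c2

def combined_redundant(c1, c2):
    """Enemies attack the same prey pool: risks overlap, effect saturates."""
    return 1 - (1 - c1) * (1 - c2)

c1, c2 = 0.4, 0.5
print(round(combined_complementary(c1, c2), 2))  # 0.9
print(round(combined_redundant(c1, c2), 2))      # 0.7
```

The ecological interactions discussed in the abstract (phenotypic niche shifts, synergy, intraguild predation) would push the combined value above the additive prediction or below the saturating one, giving the synergistic and antagonistic outcomes.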
Abstract:
Two polymeric azido-bridged complexes, [Ni2L2(N3)3]n(ClO4)n (1) and [Cu(bpds)2(N3)]n(ClO4)n(H2O)2.5n (2) [L = Schiff base obtained from the condensation of pyridine-2-aldehyde with N,N,2,2-tetramethyl-1,3-propanediamine; bpds = 4,4'-bipyridyl disulfide], have been synthesized and their crystal structures determined. Complex 1, C26H42ClN15Ni2O4, crystallizes in a triclinic system, space group P1, with a = 8.089(13), b = 9.392(14), c = 12.267(18) Å, α = 107.28(1), β = 95.95(1), γ = 96.92(1)° and Z = 2; complex 2, C20H21ClCuN7O6.5S4, crystallizes in an orthorhombic system, space group Pnna, with a = 10.839(14), b = 13.208(17), c = 19.75(2) Å and Z = 4. The crystal structure of 1 consists of 1D polymers of Ni(L) units, alternately connected by single and double bridging μ-(1,3-N3) ligands, with isolated perchlorate anions. Variable-temperature magnetic susceptibility data of the complex have been measured, and the fitting of the magnetic data was carried out applying the Borrás-Almenar formula for such alternating one-dimensional S = 1 systems, based on the Hamiltonian H = -J Σ(S2i·S2i-1 + α S2i·S2i+1). The best-fit parameters obtained are J = -106.7 ± 2 cm-1, α = 0.82 ± 0.02 and g = 2.21 ± 0.02. Complex 2 is a 2D network of 4,4 topology with the nodes occupied by the Cu(II) ions and the edges formed by single azide and double bpds connectors; the perchlorate anions are located between pairs of bpds. The magnetic data have been fitted considering the complex as a pseudo-one-dimensional system, with all copper(II) atoms linked by μ-(1,3-azido) bridging ligands at axial positions (long Cu···N3 distances), since the coupling through the long bpds is almost nil. The best-fit parameters obtained with this model are J = -1.21 ± 0.2 cm-1 and g = 2.14 ± 0.02. © Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim, 2005.
Abstract:
This paper describes the development and validation of a novel web-based interface for gathering feedback from building occupants about their environmental discomfort, including signs of Sick Building Syndrome (SBS). Gathering such feedback may enable better targeting of environmental discomfort down to the individual, as well as the early detection and subsequent resolution by building services of more complex issues such as SBS. The occupant's discomfort is interpreted and converted to air-conditioning system set points using fuzzy logic. Experimental results from a multi-zone air-conditioning test rig are included in this paper.
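The discomfort-to-set-point conversion can be sketched with a minimal fuzzy-logic example (my own illustration under assumed membership functions and rule outputs, not the paper's controller): an occupant's thermal vote on a cold-to-hot scale is fuzzified with triangular memberships, and the rule outputs are combined by a weighted average (a centroid-style defuzzification).

```python
# Minimal fuzzy-logic sketch: map a thermal-comfort vote (-3 cold ... +3 hot)
# to a temperature set-point adjustment. Memberships and rule outputs are
# assumed for illustration, not taken from the paper.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def setpoint_offset(vote):
    """Weighted-average defuzzification over three comfort rules."""
    rules = [
        (tri(vote, -3.5, -2.0, -0.5), +2.0),  # "cold"    -> raise set point 2 °C
        (tri(vote, -1.5,  0.0,  1.5),  0.0),  # "neutral" -> no change
        (tri(vote,  0.5,  2.0,  3.5), -2.0),  # "hot"     -> lower set point 2 °C
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0

print(setpoint_offset(-2))  # 2.0  (clearly cold -> warm the zone)
print(setpoint_offset(0))   # 0.0  (comfortable -> leave set point alone)
```

Intermediate votes fall between rules, so the output changes smoothly rather than in steps; this graceful interpolation is the usual motivation for a fuzzy front end on occupant feedback.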
Abstract:
Semiotics is the study of signs. The application of semiotics in information systems design is based on the notion that information systems are organizations within which agents deploy signs in the form of actions according to a set of norms. An analysis of the relationships among the agents, their actions and the norms gives a better specification of the system. Distributed multimedia systems (DMMS) can be viewed as systems consisting of many dynamic, self-controlled normative agents engaging in complex interaction and in the processing of multimedia information. This paper reports the work of applying the semiotic approach to the design and modeling of DMMS, with emphasis on semantic analysis under the semiotic framework. A semantic model of DMMS describing various components and their ontological dependencies is presented, which then serves as a design model and is implemented in a semantic database. The benefits of using the semantic database are discussed with reference to various design scenarios.