939 results for "Distance between Experts' Statements"
Abstract:
During the design process, the architect transposes his ideas into the real, concrete realm. The various modes of expression and representation serve to mediate this interaction, reducing the distance between these two fields. We are currently living through a period of intense transformation of design strategies, driven by the new digital media. This research aims to study the use of three-dimensional representations, specifically physical and digital models. It seeks to capture the moments in which models contribute to the design process, and the characteristics intrinsic to them. The discussion seeks not only to highlight the importance of this tool, but also to draw a brief comparison between digital technology and manual making. For this work, several significant architects from the São Paulo architecture scene whose projects make use of models were selected. The case studies are the residence of the architect Marcos Acayaba and the winning entry of the competition for the Instituto Moreira Salles/SP, by the office Andrade Morettin Arquitetos. In addition to these objectives, the use of physical and digital models in a didactic design experience is presented.
Abstract:
Pulmonary crackling and the formation of liquid bridges are problems that have attracted the attention of scientists for centuries. To study these phenomena, a canonical cubic lattice-gas-like model was developed to explain the rupture of liquid bridges in lung airways [A. Alencar et al., 2006, PRE]. Here, we develop this model further and add entropy analysis to study thermodynamic properties, such as free energy and force. The simulations were performed using the Monte Carlo method with the Metropolis algorithm. Exchanges between gas and liquid particles were performed randomly according to Kawasaki dynamics and weighted by the Boltzmann factor. Each particle, which can be solid (s), liquid (l) or gas (g), has 26 neighbors: 6 + 12 + 8, at distances 1, √2 and √3, respectively. The energy of a lattice site m is calculated by the expression E_m = Σ_{k=1}^{26} J_{i(m)j(k)}, where i, j ∈ {g, l, s}. Specifically, we studied the surface free energy of a liquid bridge, trapped between two planes, as its height is changed. Two methods were considered: first, only the internal energy was calculated; then the entropy was also taken into account. No difference in the surface free energy was found between these two methods. We calculated the force the liquid bridge exerts between the two planes from the numerical surface free energy. This force is strong for small heights and decreases as the distance between the two planes (the height) is increased. The liquid-gas system was also characterized by studying the variation of internal energy and heat capacity with temperature. For that, simulations were performed with the same proportion of liquid and gas particles but different lattice sizes. The scaling of the liquid-gas system was also studied, at low temperature, using different values of the interaction J_ij.
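As a rough illustration of the sampling scheme described above (not the authors' code), a minimal Kawasaki-exchange Metropolis step might look like the following; the lattice layout, neighbor lists, and the interaction values in `J` are placeholder assumptions:

```python
import math
import random

# Hypothetical interaction matrix J[(i, j)] for particle types 'l' (liquid)
# and 'g' (gas); values chosen for illustration only.
J = {('l', 'l'): -1.0, ('g', 'g'): 0.0, ('l', 'g'): 0.0, ('g', 'l'): 0.0}

def site_energy(lattice, m, neighbors):
    """E_m = sum over the neighbors k of m of J[type(m), type(k)]."""
    return sum(J[(lattice[m], lattice[k])] for k in neighbors[m])

def kawasaki_step(lattice, neighbors, T):
    """One Kawasaki exchange: pick a random site and one of its neighbors,
    swap their (unequal) particle types, and accept with the Metropolis
    probability min(1, exp(-dE/T)); particle numbers are conserved."""
    m = random.choice(list(lattice))
    k = random.choice(neighbors[m])
    if lattice[m] == lattice[k]:
        return False  # identical types: nothing to exchange
    e_before = site_energy(lattice, m, neighbors) + site_energy(lattice, k, neighbors)
    lattice[m], lattice[k] = lattice[k], lattice[m]
    e_after = site_energy(lattice, m, neighbors) + site_energy(lattice, k, neighbors)
    dE = e_after - e_before
    if dE <= 0 or random.random() < math.exp(-dE / T):
        return True  # accept the exchange
    lattice[m], lattice[k] = lattice[k], lattice[m]  # reject: swap back
    return False
```

On a real 3D lattice the neighbor lists would hold the 26 sites at distances 1, √2 and √3; the 1D toy neighborhood below only demonstrates the mechanics.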
Abstract:
In this work, we considered the flow around two circular cylinders of equal diameter placed in tandem with respect to the incident uniform flow. The upstream cylinder was fixed and the downstream cylinder was completely free to move in the cross-stream direction, with no spring or damper attached to it. The centre-to-centre distance between the cylinders was four diameters, and the Reynolds number was varied from 100 to 645. We performed two- and three-dimensional simulations of this flow using a spectral/hp element method to discretise the flow equations, coupled to a simple Newmark integration routine that solves the equation of motion of the cylinder. The differences between the behaviours observed in the two- and three-dimensional simulations are highlighted, and the data are analysed in the light of previously published experimental results obtained at higher Reynolds numbers.
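The Newmark routine is not detailed in the abstract; for a freely moving cylinder (no spring or damper, so the stiffness and damping terms vanish), a single Newmark-beta update reduces to the sketch below. The function name and parameters are illustrative assumptions, not the thesis implementation:

```python
def newmark_step(x, v, a, F_new, m, dt, beta=0.25, gamma=0.5):
    """One Newmark-beta step for m * a = F (free body: no spring, no damper).
    Returns the updated displacement, velocity and acceleration.
    beta=1/4, gamma=1/2 is the unconditionally stable average-acceleration
    variant commonly used in structural dynamics."""
    a_new = F_new / m  # with stiffness and damping absent, a_new is explicit
    x_new = x + dt * v + dt**2 * ((0.5 - beta) * a + beta * a_new)
    v_new = v + dt * ((1 - gamma) * a + gamma * a_new)
    return x_new, v_new, a_new
```

In the coupled scheme, `F_new` would be the cross-stream fluid force computed by the flow solver at each time step.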
Abstract:
A complete laser cooling setup was built, with focus on three-dimensional near-resonant optical lattices for cesium. These consist of regularly ordered micropotentials, created by the interference of four laser beams. One key feature of optical lattices is an inherent "Sisyphus cooling" process. It efficiently extracts kinetic energy from the atoms, leading to equilibrium temperatures of a few µK. The corresponding kinetic energy is lower than the depth of the potential wells, so that atoms can be trapped. We performed detailed studies of the cooling processes in optical lattices using the time-of-flight and absorption-imaging techniques. We investigated the dependence of the equilibrium temperature on the optical lattice parameters, such as detuning, optical potential and lattice geometry. The presence of neighbouring transitions in the cesium hyperfine level structure was used to break symmetries in order to identify which roles "red" and "blue" transitions play in the cooling. We also examined the limits of the cooling process in optical lattices, and the possible difference in steady-state velocity distributions for different directions. Moreover, in collaboration with the École Normale Supérieure in Paris, numerical simulations were performed in order to gain more insight into the cooling dynamics of optical lattices. Optical lattices can keep atoms almost perfectly isolated from the environment and have therefore been suggested as a platform for a host of possible experiments aimed at coherent quantum manipulations, such as spin-squeezing and the implementation of quantum logic gates. We developed a novel way to trap two different cesium ground states in two distinct, interpenetrating optical lattices, and to change the distance between sites of one lattice relative to sites of the other lattice. This is a first step towards the implementation of quantum simulation schemes in optical lattices.
Abstract:
Master's degree in Oceanography
Abstract:
[EN] In this paper we present a variational technique for the reconstruction of 3D cylindrical surfaces. Roughly speaking, by a cylindrical surface we mean a surface that can be parameterized, using its projection onto a cylinder, in terms of two coordinates representing the displacement along and the angle around the cylinder axis, respectively. The starting point for our method is a set of different views of a cylindrical surface, as well as a precomputed disparity map estimation between pairs of images. The proposed variational technique is based on an energy minimization which balances, on the one hand, the regularity of the cylindrical function given by the distance of the surface points to the cylinder axis and, on the other hand, the distance between the projection of the surface points on the images and the expected location following the precomputed disparity map estimation between pairs of images. One interesting advantage of this approach is that we regularize the 3D surface by means of a bi-dimensional minimization problem. We show some experimental results for large stereo sequences.
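A one-dimensional toy version of this energy balance (fidelity to measured radii versus regularity of the radius function) can be minimized by plain gradient descent; this is only a sketch of the variational idea under simplified assumptions, not the authors' image-based, bi-dimensional formulation:

```python
def energy(r, data, lam):
    """Toy energy: data fidelity plus lam-weighted first-difference smoothness."""
    return (sum((ri - di) ** 2 for ri, di in zip(r, data))
            + lam * sum((r[i + 1] - r[i]) ** 2 for i in range(len(r) - 1)))

def smooth_radius(data, lam=1.0, steps=500, lr=0.1):
    """Gradient descent on E(r) = sum_i (r_i - d_i)^2 + lam * sum_i (r_{i+1} - r_i)^2,
    starting from the noisy measured radii `data`."""
    r = list(data)
    n = len(r)
    for _ in range(steps):
        g = [2 * (ri - di) for ri, di in zip(r, data)]  # fidelity gradient
        for i in range(n - 1):                          # smoothness gradient
            d = 2 * lam * (r[i + 1] - r[i])
            g[i + 1] += d
            g[i] -= d
        r = [ri - lr * gi for ri, gi in zip(r, g)]
    return r
```

Increasing `lam` trades fidelity for smoothness, the same knob the full method turns when balancing the reprojection term against surface regularity.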
Abstract:
University Master's in Intelligent Systems and Numerical Applications in Engineering (SIANI)
Abstract:
Since the first underground nuclear explosion, carried out in 1958, the analysis of seismic signals generated by these sources has allowed seismologists to refine the travel times of seismic waves through the Earth and to verify the accuracy of location algorithms (the ground truth for these sources was often known). Long international negotiations have been devoted to limiting the proliferation and testing of nuclear weapons. In particular, the Comprehensive Nuclear-Test-Ban Treaty (CTBT) was opened for signature in 1996; however, even though it has been signed by 178 States, it has not yet entered into force. The Treaty underlines the fundamental role of seismological observations in verifying its compliance, by detecting and locating seismic events and identifying the nature of their sources. A precise definition of the hypocentral parameters represents the first step in discriminating whether a given seismic event is natural or not. In case a specific event is deemed suspicious by the majority of the States Parties, the Treaty contains provisions for conducting an on-site inspection (OSI) in the area surrounding the epicenter of the event, located through the International Monitoring System (IMS) of the CTBT Organization. An OSI is supposed to include the use of passive seismic techniques in the area of the suspected clandestine underground nuclear test. In fact, high-quality seismological systems are thought to be capable of detecting and locating very weak aftershocks triggered by underground nuclear explosions in the first days or weeks following the test. This PhD thesis deals with the development of two different seismic location techniques. The first, known as the double-difference joint hypocenter determination (DDJHD) technique, is aimed at locating closely spaced events at a global scale. The locations obtained by this method are characterized by a high relative accuracy, although the absolute location of the whole cluster remains uncertain. 
We eliminate this problem by introducing a priori information: the known location of a selected event. The second technique concerns reliable estimates of the back azimuth and apparent velocity of seismic waves from local events of very low magnitude recorded by a tripartite array at a very local scale. For both techniques, we have used cross-correlation among digital waveforms in order to minimize the errors linked with incorrect phase picking. The cross-correlation method relies on the similarity between the waveforms of a pair of events at the same station, at the global scale, and on the similarity between the waveforms of the same event at two different sensors of the tripartite array, at the local scale. After preliminary tests of the reliability of our location techniques based on simulations, we applied both methodologies to real seismic events. The DDJHD technique was applied to a seismic sequence that occurred in the Turkey-Iran border region, using data recorded by the IMS. At first, the algorithm was applied to the differences among the original arrival times of the P phases, so cross-correlation was not used. We found that the considerable geometrical spreading noticeable in the standard locations (namely the locations produced by the analysts of the International Data Center (IDC) of the CTBT Organization, assumed as our reference) was substantially reduced by the application of our technique. This is what we expected, since the methodology was applied to a sequence of events for which a real closeness among the hypocenters can be supposed, as they belong to the same seismic structure. Our results point out the main advantage of this methodology: the systematic errors affecting the arrival times have been removed, or at least reduced. 
The introduction of cross-correlation did not bring evident improvements to our results: the two sets of locations (without and with the application of the cross-correlation technique) are very similar to each other. This suggests that the use of cross-correlation did not substantially improve the precision of the manual pickings. Probably the pickings reported by the IDC are good enough to make the random picking error less important than the systematic error on travel times. As a further explanation for the poor quality of the results given by cross-correlation, it should be remarked that the events included in our data set generally do not have a good signal-to-noise ratio (SNR): the selected sequence is composed of weak events (magnitude 4 or smaller) and the signals are strongly attenuated because of the large distance between the stations and the hypocentral area. At the local scale, in addition to cross-correlation, we performed a signal interpolation in order to improve the time resolution. The algorithm so developed was applied to the data collected during an experiment carried out in Israel between 1998 and 1999. The results pointed out the following relevant conclusions: a) it is necessary to correlate waveform segments corresponding to the same seismic phases; b) it is not essential to select the exact first arrivals; and c) relevant information can also be obtained from the maximum-amplitude wavelet of the waveforms (particularly in bad SNR conditions). Another remarkable point of our procedure is that its application does not demand a long time to process the data, so the user can immediately check the results. During a field survey, this feature will make possible a quasi-real-time check, allowing the immediate optimization of the array geometry if so suggested by the results at an early stage.
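The core cross-correlation operation used at both scales (finding the lag that maximizes waveform similarity, to refine a relative arrival time) can be sketched as follows; this is a generic illustration, not the thesis code, and the waveforms and sampling interval are placeholders:

```python
def delay_by_cross_correlation(w1, w2, dt):
    """Estimate the time shift of waveform w2 relative to w1 (equal-length
    traces sampled every dt seconds) from the peak of their discrete
    cross-correlation; positive result means w2 lags w1."""
    n = len(w1)
    best_lag, best_cc = 0, float('-inf')
    for lag in range(-(n - 1), n):
        # correlate w1[i] against w2 shifted back by `lag` samples
        cc = sum(w1[i] * w2[i + lag]
                 for i in range(n) if 0 <= i + lag < n)
        if cc > best_cc:
            best_lag, best_cc = lag, cc
    return best_lag * dt
```

In practice one would correlate windowed segments around the same seismic phase (conclusion (a) above) and interpolate around the peak for sub-sample resolution, as the thesis does to improve the time resolution.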
Abstract:
Providing support for multimedia applications on low-power mobile devices remains a significant research challenge. This is primarily due to two reasons:
• Portable mobile devices have modest sizes and weights, and therefore inadequate resources: low CPU processing power, reduced display capabilities, and limited memory and battery lifetimes compared to desktop and laptop systems.
• On the other hand, multimedia applications tend to have distinctive QoS and processing requirements which make them extremely resource-demanding.
This innate conflict introduces key research challenges in the design of multimedia applications and device-level power optimization. Energy efficiency on this kind of platform can be achieved only via a synergistic hardware and software approach. In fact, while Systems-on-Chip are more and more programmable, thus providing functional flexibility, hardware-only power reduction techniques cannot keep consumption under acceptable bounds. It is well understood, both in research and in industry, that system configuration and management cannot be controlled efficiently by relying only on low-level firmware and hardware drivers. In fact, at this level there is a lack of information about user application activity, and consequently about the impact of power management decisions on QoS. Even though operating system support and integration is a requirement for effective performance and energy management, more effective and QoS-sensitive power management is possible if power awareness and hardware configuration control strategies are tightly integrated with domain-specific middleware services. The main objective of this PhD research has been the exploration and integration of a middleware-centric energy management with applications and the operating system. We chose to focus on the CPU-memory and video subsystems, since they are the most power-hungry components of an embedded system. 
A second main objective has been the definition and implementation of software facilities (such as toolkits, APIs, and run-time engines) in order to improve the programmability and performance efficiency of such platforms. Enhancing energy efficiency and programmability of modern Multi-Processor Systems-on-Chip (MPSoCs): Consumer applications are characterized by tight time-to-market constraints and extreme cost sensitivity. The software that runs on modern embedded systems must be high performance, real time, and, even more important, low power. Although much progress has been made on these problems, much remains to be done. Multi-Processor Systems-on-Chip (MPSoCs) are increasingly popular platforms for high-performance embedded applications. This leads to interesting challenges in software development, since efficient software development is a major issue for MPSoC designers. An important step in deploying applications on multiprocessors is to allocate and schedule concurrent tasks to the processing and communication resources of the platform. The problem of allocating and scheduling precedence-constrained tasks on processors in a distributed real-time system is NP-hard. There is a clear need for deployment technology that addresses these multiprocessing issues. This problem can be tackled by means of specific middleware which takes care of allocating and scheduling tasks on the different processing elements and which also tries to optimize the power consumption of the entire multiprocessor platform. This dissertation is an attempt to develop insight into efficient, flexible and optimal methods for allocating and scheduling concurrent applications to multiprocessor architectures. It is a well-known problem in the literature: optimization problems of this kind are very complex even in much simplified variants, therefore most authors propose simplified models and heuristic approaches to solve them in reasonable time. 
Model simplification is often achieved by abstracting away platform implementation "details". As a result, optimization problems become more tractable, even reaching polynomial time complexity. Unfortunately, this approach creates an abstraction gap between the optimization model and the real HW-SW platform. The main issue with heuristic or, more generally, incomplete search is that it introduces an optimality gap of unknown size: it provides very limited or no information on the distance between the best computed solution and the optimal one. The goal of this work is to address both the abstraction and the optimality gaps, formulating accurate models which account for a number of "non-idealities" in real-life hardware platforms, developing novel mapping algorithms that deterministically find optimal solutions, and implementing the software infrastructures required by developers to deploy applications on the target MPSoC platforms. Energy-efficient LCD backlight autoregulation on a real-life multimedia application processor: Despite the ever increasing advances in Liquid Crystal Display (LCD) technology, LCD power consumption is still one of the major limitations to the battery life of mobile appliances such as smartphones, portable media players, gaming and navigation devices. There is a clear trend towards increasing LCD size to exploit the multimedia capabilities of portable devices that can receive and render high-definition video and pictures. Multimedia applications running on these devices require LCD screen sizes of 2.2 to 3.5 inches and more to display video sequences and pictures with the required quality. LCD power consumption depends on the backlight and pixel matrix driving circuits and is typically proportional to the panel area. As a result, its contribution is also likely to be considerable in future mobile appliances. 
To address this issue, companies are proposing low-power technologies suitable for mobile applications, supporting low-power states and image control techniques. On the research side, several power saving schemes and algorithms can be found in the literature. Some of them exploit software-only techniques to change the image content in order to reduce the power associated with the crystal polarization; others aim at decreasing the backlight level while compensating for the luminance reduction, and hence for the user-perceived quality degradation, using pixel-by-pixel image processing algorithms. The major limitation of these techniques is that they rely on the CPU to perform pixel-based manipulations, and their impact on CPU utilization and power consumption has not been assessed. This PhD dissertation presents an alternative approach that exploits, in a smart and efficient way, the hardware image processing unit integrated in almost every current multimedia application processor to implement a hardware-assisted image compensation that allows dynamic scaling of the backlight with a negligible impact on QoS. The proposed approach overcomes CPU-intensive techniques by saving system power without requiring either a dedicated display technology or hardware modification. Thesis overview: The remainder of the thesis is organized as follows. The first part is focused on enhancing the energy efficiency and programmability of modern Multi-Processor Systems-on-Chip (MPSoCs). Chapter 2 gives an overview of architectural trends in embedded systems, illustrating the principal features of new technologies and the key challenges still open. Chapter 3 presents a QoS-driven methodology for optimal allocation and frequency selection for MPSoCs; the methodology is based on functional simulation and full-system power estimation. Chapter 4 targets allocation and scheduling of pipelined stream-oriented applications on top of distributed memory architectures with messaging support. 
We tackled the complexity of the problem by means of decomposition and no-good generation, and prove the increased computational efficiency of this approach with respect to traditional ones. Chapter 5 presents a cooperative framework to solve the allocation, scheduling and voltage/frequency selection problem to optimality for energy-efficient MPSoCs, while in Chapter 6 applications with conditional task graphs are taken into account. Finally, Chapter 7 proposes a complete framework, called Cellflow, to help programmers with efficient software implementation on a real architecture, the Cell Broadband Engine processor. The second part is focused on energy-efficient software techniques for LCD displays. Chapter 8 gives an overview of portable device display technologies, illustrating the principal features of LCD video systems and the key challenges still open. Chapter 9 reviews several energy-efficient software techniques from the literature, while Chapter 10 illustrates in detail our method for saving significant power in an LCD panel. Finally, conclusions are drawn, reporting the main research contributions discussed throughout this dissertation.
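The backlight-dimming-with-compensation idea discussed in the second part can be sketched, in a purely software form, as boosting pixel values by the inverse of the backlight dimming factor and clipping at the 8-bit maximum; the clipping is precisely the quality loss that a careful compensation scheme must keep negligible. The function name and the linear-luminance assumption are illustrative, not the dissertation's hardware-assisted implementation:

```python
def compensate_frame(pixels, dimming):
    """Dim the backlight to fraction `dimming` (0 < dimming <= 1) and
    compensate perceived luminance by scaling 8-bit pixel values by
    1/dimming, clipping at 255 (the source of quality degradation)."""
    if not 0.0 < dimming <= 1.0:
        raise ValueError("dimming must be in (0, 1]")
    boost = 1.0 / dimming
    return [min(255, round(p * boost)) for p in pixels]
```

Backlight power scales roughly with the dimming factor, so halving the backlight while compensating bright-enough content can save substantial panel power at the cost of clipping the brightest pixels.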
Abstract:
Poxviruses are a family of double-stranded DNA (dsDNA) viruses that cause disease in many species, both vertebrate and invertebrate. Their genomes range in size from 135 to 365 kbp and show conservation in both organization and content. In particular, the central genomic regions of the chordopoxvirus subfamily (those capable of infecting vertebrates) contain 88 genes which are present in all the virus species characterised to date and which mostly occur in the same order and orientation. In contrast, the terminal regions of the genomes frequently contain genes that are species- or genus-specific and that are not essential for the growth of the virus in vitro, but instead often encode factors with important roles in vivo, including modulation of the host immune response to infection and determination of the host range of the virus. The parapoxviruses (PPV), of which orf virus is the prototypic species, represent a genus within the chordopoxvirus subfamily of Poxviridae and are characterised by their ability to infect ruminants and humans. The genus currently contains four recognised species of virus: bovine papular stomatitis virus (BPSV) and pseudocowpox virus (PCPV), both of which infect cattle; orf virus (OV), which infects sheep and goats; and parapoxvirus of red deer in New Zealand (PVNZ). The ORFV genome has been fully sequenced, as has that of BPSV, and is ~138 kb in length, encoding ~132 genes. The vast majority of these genes allow the virus to replicate in the cytoplasm of the infected host cell and therefore encode proteins involved in replication, transcription and metabolism of nucleic acids. These genes are well conserved between all known genera of poxviruses. There is, however, another class of genes, located at either end of the linear dsDNA genome, that encode proteins which are non-essential for replication and generally dictate the host range and virulence of the virus. 
The non-essential genes are often the most variable within and between species of virus and are therefore potentially useful for diagnostic purposes. Given their role in subverting the host immune response to infection, they are also targets for novel therapeutics. The function of only a relatively small number of these proteins has been elucidated, and there are several genes whose function still remains obscure, principally because there is little similarity between them and proteins of known function in current sequence databases. It is thought that by selectively removing some of the virulence genes, or at least neutralising the proteins in some way, current vaccines could be improved. The evolution of poxviruses has been proposed to be an adaptive process involving frequent events of gene gain and loss, such that the virus co-evolves with its specific host. Gene capture, or horizontal gene transfer from the host to the virus, is considered an important source of new viral genes, including those likely to be involved in host range and those enabling the virus to interfere with the host immune response to infection. Given the low rate of nucleotide substitution, recombination can be seen as an essential evolutionary driving force, although it is likely underestimated. Recombination in poxviruses is intimately linked to DNA replication, with both viral and cellular proteins participating in this recombination-dependent replication. It has been shown, in other poxvirus genera, that recombination between isolates and perhaps even between species does occur, thereby providing another mechanism for the acquisition of new genes and for the rapid evolution of viruses. Such events may result in viruses that have a selective advantage over others, for example in re-infections (a characteristic of the PPV), or in viruses that are able to jump the species barrier and infect new hosts. 
Sequence data related to viral strains isolated from goats suggest that possible recombination events may have occurred between OV and PCPV (Ueda et al. 2003). Recombination events are frequent during poxvirus replication, and comparative genomic analysis of several poxvirus species has revealed that recombinations occur frequently in the right terminal region. Intraspecific recombination can occur between strains of the same PPV species, but interspecific recombination can also happen, provided there is enough sequence similarity to enable recombination between distinct PPV species. The most important prerequisite for a successful recombination is the co-infection of the individual host by different virus strains or species. Consequently, the following factors affecting the distribution of different viruses to shared target cells need to be considered: the dose of inoculated virus, the time interval between inoculation of the first and the second virus, the distance between the marker mutations, and the genetic homology. At present there are no available data on the replication dynamics of PPV in permissive and non-permissive hosts and, regarding co-infections, there is no information on the interference mechanisms occurring during the simultaneous replication of viruses of different species. This work has been carried out to set up permissive substrates allowing the replication of different PPV species, in particular keratinocyte monolayers and organotypic skin cultures. Furthermore, a method to isolate and expand ovine skin stem cells has been set up to investigate further aspects of viral cellular tropism during natural infection. The study produced important data to elucidate the replication dynamics of OV and PCPV in vitro, as well as the mechanisms of interference that can arise during co-infection with different viral species. 
Moreover, the analysis carried out on the right terminal genomic region of PCPV 1303/05 contributed to a better knowledge of the viral genes involved in host interaction and pathogenesis, as well as to locating recombination breakpoints and genetic homologies between PPV species. Taken together, these data fill several crucial gaps in the study of interspecific recombinations of PPVs, which are thought to be important for a better understanding of viral evolution and for improving the biosafety of antiviral therapy and PPV-based vectors.
Abstract:
Experimental investigations of visible Smith-Purcell radiation with a micro-focused, highly relativistic electron beam (E = 855 MeV) are presented in the near region, in which the electron beam grazes the surface of the grating. The radiation intensity was measured as a function of the angle of observation and of the distance between the electron beam axis and the surface of the grating, simultaneously for two different wavelengths (360 nm, 546 nm). In the experiments, Smith-Purcell radiation was identified by the measured angular distribution fulfilling the characteristic coherence condition. From the observed distance dependence of the intensity, two components of Smith-Purcell radiation could be separated: one component with the theoretically predicted interaction length h_int, which is produced by electrons passing over the surface of the grating, and an additional component in the near region leading to a strong enhancement of the intensity, which is produced by electrons hitting the surface. To describe the intensity of the observed additional radiation component, a simple model for optical grating transition radiation, caused by the electrons passing through the grating structure, is presented. Taking this simple scalar model into account, the results of a Monte Carlo calculation show that the additional radiation component can be explained by optical grating transition radiation.
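For reference, the characteristic coherence condition mentioned above is the standard Smith-Purcell relation linking the emitted wavelength to the observation angle:

```latex
n\lambda = d\left(\frac{1}{\beta} - \cos\theta\right)
```

where d is the grating period, n the diffraction order, β = v/c the normalized electron velocity, and θ the observation angle measured from the beam axis.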
Abstract:
We investigate a chain consisting of two coupled worm-like chains with constant distance between the strands. The effects due to the double-strandedness of the chain are studied. In a previous analytical study of this system, an intrinsic twist-stretch coupling and a tendency to kinking were predicted. Even though a local twist structure is observed, the predicted features are not recovered. A new model for DNA at the base-pair level is presented. The base-pairs are treated as flat rigid ellipsoids, and the sugar-phosphate backbones are represented as stiff harmonic springs. The base-pair stacking interaction is modeled by a variant of the Gay-Berne potential. It is shown by systematic coarse-graining how the elastic constants of a worm-like chain are related to the local fluctuations of the base-pair step parameters. Even though many microscopic details of the base-pair geometry are neglected, the model can be optimized to obtain a B-DNA conformation as the ground state and reasonable elastic properties. Moreover, the model allows one to simulate much larger length scales than is possible with atomistic simulations, due to the simplification of the force field and, in particular, to the possibility of non-local Monte Carlo moves. As a first application, the behavior under stretching is investigated. In agreement with micromanipulation experiments on single DNA molecules, one observes a force plateau in the force-extension curves corresponding to an overstretching transition from B-DNA to a so-called S-DNA state. The model suggests a structure for S-DNA with highly inclined base-pairs that enables at least partial base-pair stacking. Finally, a simple model for chromatin is introduced to study its structural and elastic properties. The underlying geometry of the modeled fiber is based on a crossed-linker model. The chromatosomes are treated as disk-like objects. Excluded volume and short-range nucleosomal interactions are taken into account by a variant of the Gay-Berne potential. 
It is found that the bending rigidity and the stretching modulus of the fiber increase for more compact fibers. For a reasonable parameterization of the fiber under physiological conditions and sufficiently high attraction between the nucleosomes, a force-extension curve is found similar to stretching experiments on single chromatin fibers. For very small stretching forces, a kinked fiber forming a loop is observed. If larger forces are applied, the loop formation is stretched out and a decondensation of the fiber takes place.
Abstract:
The hierarchical organisation of biological systems plays a crucial role in the pattern formation of gene expression resulting from morphogenetic processes, where the autonomous internal dynamics of cells, as well as cell-to-cell interactions through membranes, are responsible for the emergent peculiar structures of the individual phenotype. Being able to reproduce the system dynamics at different levels of such a hierarchy might be very useful for studying such a complex phenomenon of self-organisation. The idea is to model the phenomenon in terms of a large and dynamic network of compartments, where the interplay between inter-compartment and intra-compartment events determines the emergent behaviour resulting in the formation of spatial patterns. According to these premises, the thesis proposes a review of the different approaches already developed for modelling developmental biology problems, as well as the main models and infrastructures available in the literature for modelling biological systems, analysing their capabilities in tackling multi-compartment / multi-level models. The thesis then introduces a practical framework, MS-BioNET, for modelling and simulating these scenarios by exploiting the potential of multi-level dynamics. This is based on (i) a computational model featuring networks of compartments and an enhanced model of chemical reactions addressing molecule transfer, (ii) a logic-oriented language to flexibly specify complex simulation scenarios, and (iii) a simulation engine based on the many-species/many-channels optimised version of Gillespie's direct method. The thesis finally proposes the adoption of the agent-based model as an approach capable of capturing multi-level dynamics. To overcome the problem of parameter tuning in the model, the simulators are supplied with a module for parameter optimisation. 
The task is defined as an optimisation problem over the parameter space, in which the objective function to be minimised is the distance between the output of the simulator and a target output. The problem is tackled with a metaheuristic algorithm. As an example of application of the MS-BioNET framework and of the agent-based model, a model of the first stages of Drosophila melanogaster development is realised. The model's goal is to generate the early spatial pattern of gap gene expression. The correctness of the models is shown by comparing the simulation results with real gene-expression data with spatial and temporal resolution, acquired from free on-line sources.
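The simulation engine described above is based on Gillespie's direct method (the stochastic simulation algorithm). A minimal, self-contained sketch of that algorithm is shown below; the toy reaction set and rate constant are illustrative assumptions, not part of MS-BioNET's actual API.

```python
import math
import random

def gillespie_direct(state, reactions, t_end, seed=42):
    """Minimal Gillespie direct method (SSA).

    state     -- dict: species name -> molecule count
    reactions -- list of (rate_fn, stoichiometry) pairs, where
                 rate_fn(state) returns the propensity and
                 stoichiometry maps species -> count change
    """
    rng = random.Random(seed)
    t = 0.0
    trajectory = [(t, dict(state))]
    while t < t_end:
        props = [rate(state) for rate, _ in reactions]
        total = sum(props)
        if total == 0:
            break  # no reaction can fire any more
        # time to next event: exponential with mean 1/total
        t += -math.log(1.0 - rng.random()) / total
        # choose which reaction fires, weighted by propensity
        r = rng.random() * total
        acc = 0.0
        for p, (_, stoich) in zip(props, reactions):
            acc += p
            if r < acc:
                for species, change in stoich.items():
                    state[species] += change
                break
        trajectory.append((t, dict(state)))
    return trajectory

# toy example: unimolecular conversion A -> B at rate k * [A]
k = 0.5
rxns = [(lambda s: k * s["A"], {"A": -1, "B": +1})]
traj = gillespie_direct({"A": 100, "B": 0}, rxns, t_end=50.0)
```

The many-species/many-channels optimisation mentioned in the abstract improves on this naive version by avoiding the full recomputation of all propensities at every step.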
Abstract:
The Gulf of Aqaba represents a small-scale, easy-to-access regional analogue of larger oligotrophic oceanic systems. In this Gulf, the seasonal cycle of stratification and mixing drives the seasonal phytoplankton dynamics. In summer and fall, when nutrient concentrations are very low, Prochlorococcus and Synechococcus are more abundant in the surface water. These two populations are exposed to phosphate limitation. During winter mixing, when nutrient concentrations are high, Chlorophyceae and Cryptophyceae are dominant, but they are scarce or absent during summer. This study set out to develop a simulation model, based on historical data, to predict the phytoplankton dynamics in the northern Gulf of Aqaba. The purpose is to understand which forces operate, and how, to determine the phytoplankton dynamics in this Gulf. To build the models, data sampled at two different sampling stations (Fish Farm Station and Station A) were used. The data on chemical, biological and physical factors are available from 14th January 2007 to 28th December 2009. The Fish Farm Station was located near a fish farm that was operational until 17th June 2008, the date of its complete closure, about halfway through the total sampling period. The Station A sampling point is about 13 km away from the Fish Farm Station. To build the model, the MATLAB software was used (version 7.6.0.324, R2008a), in particular a tool named Simulink. The Fish Farm Station models show that the fish farm activity altered the nutrient concentrations and, as a consequence, the normal phytoplankton dynamics. Despite the distance between the two sampling stations, there might also be an influence from the fish farm activities on the Station A ecosystem. The models for this sampling station show that the fish farm impact appears to be much lower than at the Fish Farm Station, because the phytoplankton dynamics appear to be driven mainly by the seasonal mixing cycle.
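The mechanism the abstract describes, in which winter mixing replenishes surface nutrients and fuels a bloom while summer stratification starves it, is the kind of dynamic such a Simulink model encodes. A toy nutrient-phytoplankton (NP) box model in that spirit is sketched below; every parameter value is a hypothetical placeholder, not fitted to the Gulf of Aqaba data.

```python
import math

def simulate_np(days=365, dt=0.1, mu_max=1.0, k_n=0.5,
                m=0.1, mix_amp=0.8, n_deep=2.0):
    """Toy NP model with a seasonal mixing term (Euler integration).

    N, P are nutrient and phytoplankton concentrations in arbitrary
    units; all parameters are illustrative assumptions only.
    """
    N, P = 2.0, 0.1
    out = []
    for i in range(int(days / dt)):
        t = i * dt
        # seasonal mixing intensity: strongest in winter (t near 0 and 365)
        mix = mix_amp * (1 + math.cos(2 * math.pi * t / 365)) / 2
        uptake = mu_max * N / (k_n + N) * P       # Michaelis-Menten uptake
        dN = -uptake + m * P + mix * (n_deep - N)  # mixing replenishes N
        dP = uptake - m * P - mix * P              # mixing dilutes P
        N = max(N + dN * dt, 0.0)
        P = max(P + dP * dt, 0.0)
        out.append((t, N, P))
    return out

traj = simulate_np()
```

The real model additionally distinguishes taxa (Prochlorococcus, Synechococcus, Chlorophyceae, Cryptophyceae) and is forced with the measured time series rather than an idealised cosine.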
Abstract:
In this study, discotic liquid crystalline HBCs and conjugated polymers based on 2,7-carbazole were investigated in detail as donor materials in organic bulk-heterojunction solar cells. It has been shown that they perform efficiently in photovoltaic devices in combination with suitable acceptors. The efficiency was found to depend strongly on the morphology of the film. By investigating a series of donor materials with similar molecular structures, based on both discotic molecules and conjugated polymers, a structure-performance relation was established, which is not only instructive for these materials but also serves as a guideline for improved molecular design. For the series of HBCs used in this study, it was found that the device efficiency decreases with increasing length of the alkyl substituents on the HBC. Thus, the derivative with the smallest alkyl mantle, being more crystalline than the HBCs with longer alkyl chains, gave the highest EQE of 12%. A large interfacial separation was found in the blend of HBC-C6,2 and PDI, since the crystallization of the acceptor occurred in a solid matrix of HBC. This led to small, dispersed, organized domains and benefited charge transport. In contrast, blends of HBC-C10,6/PDI or HBC-C14,10/PDI revealed a rather homogeneous film in which a mixed phase limited the percolation pathways. For the first time, poly(2,7-carbazole) was incorporated as a donor material in solar cells using PDI as an electron acceptor. The good match of orbital energy levels and absorption spectra led to high efficiency. This result indicates that conjugated polymers with a high band gap can also be used to build efficient solar cells if appropriate electron acceptors are chosen. In order to enhance the light-absorption ability, new ladder-type polymers based on pentaphenylene and hexaphenylene, with one and three nitrogen bridges per repeat unit, were synthesized and characterized.
The polymer 2, with three nitrogen bridges, showed a more red-shifted absorbance and emission and better packing in the solid state than the analogous polymer 3, with only one nitrogen bridge per monomer unit. An overall efficiency as high as 1.3% under solar light was obtained for the device based on 1 and PDI, compared with 0.7% for the PCz-based device. Therefore, the device performance correlates to a large extent with the solar-light absorption ability and the lateral distance between conjugated polymer chains. Since the lateral distance is determined by the length and number of attached alkyl side chains, it is reasonable to assume that these substituents insulate the charge-carrier pathways and decrease the device performance. As an additional consequence, the active semiconductor is diluted in an insulating matrix, leading to lower light absorption. This work suggests ways to improve device performance by molecular design, viz. maintaining the HOMO level while bathochromically shifting the absorption by adopting a more rigid ladder-type structure. A high ratio of nitrogen bridges with small alkyl substituents also proved desirable, both for adjusting the absorption and for maintaining a low lateral inter-chain separation, which was necessary for obtaining high current and efficiency values.