922 results for binary logic


Relevance:

20.00%

Publisher:

Abstract:

[EN] In this work, focus measures based on local binary patterns (LBP) are presented. LBP have been introduced in computer vision tasks such as texture classification and face recognition. In applications where recognition is based on LBP, a computational saving can be achieved by also using LBP in the focus measures. The behavior of the proposed measures is studied to test whether they fulfill the properties of focus measures, and a comparison with some well-known focus measures is then carried out in different scenarios.
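As a concrete illustration of the idea, here is a minimal Python sketch (the abstract does not specify the exact LBP variant or focus measure, so the details below are invented for illustration) that computes the basic 8-neighbour LBP codes and derives a focus measure from the entropy of their histogram: a blurred image concentrates on a few flat patterns, while a sharp one spreads over many codes.

```python
import numpy as np

def lbp_codes(img):
    """8-neighbour local binary pattern code for every interior pixel
    of a 2-D grayscale image given as a numpy array."""
    c = img[1:-1, 1:-1]  # centre pixels
    # the 8 neighbours, clockwise from the top-left corner
    neighbours = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:],
                  img[1:-1, 2:], img[2:, 2:],    img[2:, 1:-1],
                  img[2:, :-2],  img[1:-1, :-2]]
    codes = np.zeros(c.shape, dtype=np.int32)
    for bit, n in enumerate(neighbours):
        # set the bit when the neighbour is at least as bright as the centre
        codes += (n >= c).astype(np.int32) << bit
    return codes

def lbp_focus_measure(img):
    """Illustrative focus measure: Shannon entropy of the LBP histogram.
    Sharper images produce a richer variety of patterns, hence higher entropy."""
    hist, _ = np.histogram(lbp_codes(img), bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```

A focus sweep would then evaluate `lbp_focus_measure` on each frame and pick the maximum; if recognition is already LBP-based, the same codes can be reused, which is where the computational saving mentioned above comes from.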

Relevance:

20.00%

Publisher:

Abstract:

The elusive fiction of J. M. Coetzee is not a body of work in which fixed ethical stances can be read. I suggest testing the potential of a logic based on frames and double binds in Coetzee's novels. A double bind is a communicative dilemma consisting of two conflicting messages, with the result that one cannot successfully respond to either. Jacques Derrida highlighted the strategic value of a way of thinking based on the double bind (and on frames as well), which makes it possible to escape binary thinking and so opens an ethical space, in which one can make a choice outside a set of fixed rules and take responsibility for it. In Coetzee's fiction the author himself can be considered caught in a double bind, since he is a white South African writer who feels that his "task" cannot be as simple as choosing to represent faithfully the violence and racism of apartheid, or choosing to give a voice to the oppressed. Good intentions alone do not ensure protection against entering unwittingly into complicity with the dominant discourse, which is why it is important to make the frame in which one is always situated clearly visible and explicit. The logic of the double bind also becomes the way in which moral problems are staged in Coetzee's fiction: the opportunity to give a voice to the oppressed through the very language that has been co-opted to serve the cause of oppression, a never-completed relation with otherness, and the representability of evil in literature, of the secret, and of the paradoxical implications of confession and forgiveness.

Relevance:

20.00%

Publisher:

Abstract:

Several activities were carried out during my PhD. For the NEMO experiment, a collaboration between the INFN/University groups of Catania and Bologna led to the development and production of a mixed-signal acquisition board for the NEMO Km3 telescope. The research concerned a feasibility study of an acquisition technique quite different from that adopted in the NEMO Phase 1 telescope. The DAQ board that we realized exploits the LIRA06 front-end chip for the analog acquisition of the anodic and dynodic sources of a PMT (Photo-Multiplier Tube). The low-power analog acquisition makes it possible to sample multiple PMT channels simultaneously at different gain factors, in order to increase the linearity of the signal response over a wider dynamic range. The auto-triggering and self-event-classification features also help to improve the acquisition performance and the knowledge of the neutrino event. A fully functional interface towards the first-level data concentrator, the Floor Control Module, has also been integrated on the board, and specific firmware has been developed to comply with the present communication protocols. This stage of the project foresees the use of an FPGA, a high-speed configurable device, to provide the board with a flexible digital logic control core. After the validation of the whole front-end architecture, this feature would probably be integrated into a common mixed-signal ASIC (Application Specific Integrated Circuit). The volatile nature of the configuration memory of the FPGA implied the integration of a flash ISP (In System Programming) memory and a smart architecture for its safe remote reconfiguration. All the integrated features of the board have been tested. At the Catania laboratory, the behavior of the LIRA chip was investigated in the digital environment of the DAQ board, and we succeeded in driving the acquisition with the FPGA. PMT pulses generated with an arbitrary waveform generator were correctly triggered and acquired by the analog chip, and subsequently digitized by the on-board ADC under the supervision of the FPGA. For the communication towards the data concentrator, a test bench was set up in Bologna where, thanks to a loan from the Roma University and INFN, a full readout chain equivalent to that of NEMO Phase 1 was installed. These tests showed good behavior of the digital electronics, which were able to receive and execute commands issued by the PC console and to answer back with a reply. The remotely configurable logic behaved well too, and demonstrated, at least in principle, the validity of this technique. A new prototype board is now under development at the Catania laboratory as an evolution of the one described above. This board will be deployed within the NEMO Phase 2 tower, in one of the floors dedicated to new front-end proposals. It will integrate a new analog acquisition chip called SAS (Smart Auto-triggering Sampler), thus introducing a new analog front-end while inheriting most of the digital logic of the current DAQ board discussed in this thesis. As for the activity on high-resolution vertex detectors, I worked within the SLIM5 collaboration on the characterization of a MAPS (Monolithic Active Pixel Sensor) device called APSEL-4D. The chip is a matrix of 4096 active pixel sensors with deep N-well implantations meant to collect charge and to shield the analog electronics from digital noise.
The chip integrates the full-custom sensor matrix and the sparsification/readout logic, realized with standard cells in 130 nm STM CMOS technology. For the chip characterization, a test beam was set up on the 12 GeV PS (Proton Synchrotron) beam line at CERN, Geneva (CH). The collaboration prepared a silicon strip telescope and a DAQ system (hardware and software) for data acquisition and control of the telescope, which made it possible to store about 90 million events in 7 equivalent days of beam live-time. My activities mainly concerned the realization of a firmware interface to and from the MAPS chip, in order to integrate it into the general DAQ system. Thereafter I worked on the DAQ software to implement a proper Slow Control interface for the APSEL4D. Several APSEL4D chips with different thinning were tested during the test beam. Those thinned to 100 and 300 um presented an overall efficiency of about 90% at a threshold of 450 electrons. The test beam also made it possible to estimate the resolution of the pixel sensor, providing good results consistent with the pitch/sqrt(12) formula. The MAPS intrinsic resolution was extracted from the width of the residual plot, taking the multiple scattering effect into account.
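For reference, the quoted pitch/sqrt(12) formula gives the expected resolution of a binary-readout pixel as the RMS of a uniform distribution over one pitch. A minimal sketch follows; the 50 um pitch is a placeholder, since the abstract does not state the APSEL-4D pitch.

```python
import math

def theoretical_resolution(pitch_um):
    """Expected resolution of a binary-readout pixel of the given pitch:
    the RMS of a uniform distribution over the pitch, i.e. pitch/sqrt(12)."""
    return pitch_um / math.sqrt(12)

# Placeholder pitch: a 50 um pixel gives an expected resolution of ~14.4 um.
print(theoretical_resolution(50.0))
```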

Relevance:

20.00%

Publisher:

Abstract:

The advent of distributed and heterogeneous systems has laid the foundation for the birth of new architectural paradigms, in which many separate and autonomous entities collaborate and interact with the aim of achieving complex strategic goals that would be impossible to accomplish on their own. A non-exhaustive list of systems targeted by such paradigms includes Business Process Management, Clinical Guidelines and Careflow Protocols, and Service-Oriented and Multi-Agent Systems. It is largely recognized that engineering these systems requires novel modeling techniques. In particular, many authors claim that an open, declarative perspective is needed to complement the closed, procedural nature of state-of-the-art specification languages. For example, the ConDec language has recently been proposed to target the declarative and open specification of Business Processes, overcoming the over-specification and over-constraining issues of classical procedural approaches. On the one hand, the success of such novel modeling languages strongly depends on their usability by non-IT-savvy users: they must provide an appealing, intuitive graphical front-end. On the other hand, they must be amenable to verification, in order to guarantee the trustworthiness and reliability of the developed model, as well as to ensure that the actual executions of the system effectively comply with it. In this dissertation, we claim that Computational Logic is a suitable framework for dealing with the specification, verification, execution, monitoring, and analysis of these systems. We propose to adopt an extended version of the ConDec language for specifying interaction models with a declarative, open flavor. We show how all the (extended) ConDec constructs can be automatically translated to the CLIMB Computational Logic-based language, and illustrate how its corresponding reasoning techniques can be successfully exploited to provide support and verification capabilities along the whole life cycle of the targeted systems.
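To give a flavour of the declarative style, here is a minimal Python sketch of two typical ConDec-style constraints checked a posteriori over a finished execution trace. The activity names are invented, and real ConDec constraints have LTL-based semantics with far richer tooling; this only illustrates how constraints forbid or require behaviour without prescribing a procedural flow.

```python
def existence(trace, activity):
    """existence(a): activity a must occur at least once in the trace."""
    return activity in trace

def response(trace, a, b):
    """response(a, b): every occurrence of a must eventually be
    followed by an occurrence of b later in the trace."""
    return all(b in trace[i + 1:] for i, e in enumerate(trace) if e == a)

# Invented example trace: any ordering is allowed as long as the
# declared constraints hold, which is the "open" flavor of ConDec.
trace = ["register", "check", "pay", "check", "archive"]
print(existence(trace, "register"))         # True
print(response(trace, "check", "archive"))  # True: every check is later archived
```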

Relevance:

20.00%

Publisher:

Abstract:

Human reasoning is a fascinating and complex cognitive process that can be applied in different research areas such as philosophy, psychology, law, and finance. Unfortunately, developing supporting software (for those different areas) able to cope with such complex reasoning is difficult, and requires a suitable abstract logical formalism. In this thesis we aim to develop a program whose job is to evaluate a theory (a set of rules) with respect to a goal, and to provide results such as "the goal is derivable from the KB (of the theory)". In order to achieve this goal we need to analyse different logics and choose the one that best meets our needs. In logic, we usually try to determine whether a given conclusion is logically implied by a set of assumptions T (the theory). However, when we deal with logic programming we need an efficient algorithm to find such implications. In this work we use a logic rather similar to human reasoning. Indeed, human reasoning requires an extension of first-order logic able to reach conclusions from premises that are not definitely true and belong to an incomplete set of knowledge. Thus, we implemented a defeasible logic framework able to manipulate defeasible rules. Defeasible logic is a non-monotonic logic designed by Nute for efficient defeasible reasoning (see Chapter 2). Such applications are useful in the legal domain, especially if they offer an implementation of an argumentation framework that provides a formal modelling of games. Roughly speaking, if the theory is the set of laws, a key claim is the conclusion that one party wants to prove (and the other wants to defeat), and we add dynamic assertion of rules, namely facts put forward by the parties, then we can play out an argumentative challenge between the two players and decide whether the conclusion is provable or not, depending on the strategies adopted by the players. Implementing a game model requires one more meta-interpreter able to evaluate the defeasible logic framework; indeed, according to Gödel's theorem (see page 127), we cannot evaluate the meaning of a language using the tools provided by the language itself: we need a meta-language able to manipulate the object language. Thus, rather than a simple meta-interpreter, we propose a meta-level containing different meta-evaluators. The first has been explained above, the second is needed to run the game model, and the last is used to change the game execution and tree derivation strategies.
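A toy sketch of the core defeasible idea follows, in Python. The rules, priorities, and the simplified conflict resolution are invented for illustration; Nute's logic also distinguishes strict rules and defeaters. The point is that a defeasible rule yields its conclusion only if no applicable rule for the opposite conclusion has higher priority.

```python
# Each rule: (name, antecedents, conclusion, priority).
rules = [
    ("r1", {"bird"},    "flies",     1),
    ("r2", {"penguin"}, "not flies", 2),  # more specific, so higher priority
]

def negate(lit):
    """Flip a literal between 'x' and 'not x'."""
    return lit[4:] if lit.startswith("not ") else "not " + lit

def derivable(facts, goal):
    """Simplified defeasible derivation: some applicable rule for the goal
    must out-rank every applicable rule for the opposite conclusion."""
    applicable = [r for r in rules if r[1] <= facts]  # antecedents satisfied
    pro = [r for r in applicable if r[2] == goal]
    con = [r for r in applicable if r[2] == negate(goal)]
    return any(all(p[3] > c[3] for c in con) for p in pro)

print(derivable({"bird"}, "flies"))             # True: r1 fires unopposed
print(derivable({"bird", "penguin"}, "flies"))  # False: r2 defeats r1
```

Adding a fact ("penguin") withdraws a previously derivable conclusion, which is exactly the non-monotonic behaviour the abstract describes.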

Relevance:

20.00%

Publisher:

Abstract:

Traditional logic gates are rapidly reaching the limits of miniaturization, and the overheating of these components is no longer negligible. A new physical approach to computation, molecular quantum-dot cellular automata, was proposed by Prof. C. S. Lent. Indeed, the quantum-dot cellular automata (QCA) approach offers an attractive alternative to diode or transistor devices. The units encode binary information in two polarizations, without current flow. The units of QCA theory are called QCA cells and can be realized in several ways; molecules can act as QCA cells at room temperature. In collaboration with STMicroelectronics, the electrochemistry group of Prof. Paolucci, and the nanotechnology laboratory in Lecce, we synthesized and studied with many techniques surface-active chiral bis-ferrocenes, conveniently designed to act as prototypical units for molecular computing devices. The chemistry of ferrocene was studied thoroughly, and we found it possible to promote substitution reactions of ferrocenyl alcohols with various nucleophiles without the aid of Lewis acids as catalysts. The only interaction between water and the two reagents is involved in the formation of a carbocation species, which is the true reactive species. We generalized this concept to other benzyl alcohols which generate stabilized carbocations. Carbocations described on Mayr's scale were fundamental for our research. Finally, we used these alcohols to alkylate aldehydes enantioselectively via organocatalysis.
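For context on how QCA cells compute, the sketch below illustrates the standard QCA logic primitive, the three-input majority gate, on bits encoded as polarizations +1/-1; fixing one input yields AND or OR. This gate belongs to Lent's QCA scheme in general and is not specific to the bis-ferrocene molecules studied here.

```python
def majority(a, b, c):
    """Three-input QCA majority vote on polarizations +1 / -1."""
    return 1 if (a + b + c) > 0 else -1

def AND(a, b):
    return majority(a, b, -1)  # one input pinned to logic 0

def OR(a, b):
    return majority(a, b, +1)  # one input pinned to logic 1

print(AND(+1, +1), AND(+1, -1))  # +1 -1
print(OR(-1, -1), OR(+1, -1))    # -1 +1
```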

Relevance:

20.00%

Publisher:

Abstract:

Recently, an ever-increasing degree of automation has been observed in most industrial processes. This increase is motivated by the demand for systems with high performance in terms of the quality of the products/services generated, productivity, efficiency, and low costs in design, realization, and maintenance. This trend in the growth of complex automation systems is rapidly spreading over automated manufacturing systems (AMS), where the integration of mechanical and electronic technology, typical of mechatronics, is merging with other technologies such as informatics and communication networks. An AMS is a very complex system that can be thought of as constituted by a set of flexible working stations and one or more transportation systems. To understand how important these machines are in our society, consider that every day most of us use bottles of water or soda, buy boxed products such as food or cigarettes, and so on. Another indication of their complexity comes from the fact that the consortium of machine producers has estimated that there are around 350 types of manufacturing machines. A large number of manufacturing machine industries are present in Italy, notably the packaging machine industry; a great concentration of this kind of industry is located in the Bologna area, which for this reason is called the "packaging valley". Usually, the various parts of an AMS interact in a concurrent and asynchronous way, and coordinating the parts of the machine to obtain a desired overall behaviour is a hard task. Often this is the case in large-scale systems, organized in a modular and distributed manner. Even if the success of a modern AMS from a functional and behavioural point of view is still to be attributed to the design choices made in the definition of the mechanical structure and the electrical/electronic architecture, the system that governs the control of the plant is becoming crucial because of the large number of duties assigned to it. Apart from the automation of the machine cycles, the supervisory system is called on to perform other main functions, such as: emulating the behaviour of traditional mechanical members, thus allowing a drastic constructive simplification of the machine and a crucial functional flexibility; dynamically adapting the control strategies according to the different productive needs and operational scenarios; obtaining a high quality of the final product through the verification of the correctness of the processing; guiding the machine operator to promptly and carefully take the actions needed to establish or restore the optimal operating conditions; and managing diagnostic information in real time, as a support to the maintenance operations of the machine. The facilities that designers can directly find on the market, in terms of software component libraries, in fact provide adequate support for the implementation of either top-level or bottom-level functionalities, typically pertaining to the domains of user-friendly HMIs, closed-loop regulation and motion control, and fieldbus-based interconnection of remote smart devices.
What is still lacking is a reference framework comprising a comprehensive set of highly reusable logic control components that, focusing on the cross-cutting functionalities characterizing the automation domain, may help designers in the process of modelling and structuring their applications according to their specific needs. Historically, the design and verification process for complex automated industrial systems has been performed in an empirical way, without a clear distinction between functional and technological-implementation concepts and without a systematic method to organically deal with the complete system. Traditionally, in the field of analog and digital control, design and verification through formal and simulation tools have long been adopted, at least for multivariable and/or nonlinear controllers for complex time-driven dynamics, as in the fields of vehicles, aircraft, robots, electric drives, and complex power electronics equipment. Moving to the field of logic control, typical of industrial manufacturing automation, the design and verification process is approached in a completely different way, usually very "unstructured". No clear distinction between functions and implementations, or between functional architectures and technological architectures and platforms, is considered. Probably this difference is due to the different "dynamical framework" of logic control with respect to analog/digital control. As a matter of fact, in logic control discrete-event dynamics replace time-driven dynamics; hence most of the formal and mathematical tools of analog/digital control cannot be directly migrated to logic control to clarify the distinction between functions and implementations. In addition, in the common view of application technicians, logic control design is strictly connected to the adopted implementation technology (relays in the past, software nowadays), leading again to a deep confusion between the functional view and the technological view. In industrial automation software engineering, concepts such as modularity, encapsulation, composability, and reusability are strongly emphasized and profitably realized in the so-called object-oriented methodologies. Industrial automation has lately been adopting this approach, as testified by the IEC standards IEC 61131-3 and IEC 61499, which have been considered in commercial products only recently. On the other hand, in the scientific and technical literature many contributions have already been proposed to establish a suitable modelling framework for industrial automation. In recent years there has been considerable growth in the exploitation of innovative concepts and technologies from the ICT world in industrial automation systems. As for logic control design, Model Based Design (MBD) is being imported into industrial automation from the software engineering field. Another key point in industrial automated systems is the growth of requirements in terms of availability, reliability, and safety for technological systems. In other words, the control system should not only deal with the nominal behaviour, but should also deal with other important duties, such as diagnosis and fault isolation, recovery, and safety management. Indeed, in complex systems, fault occurrences increase together with performance.
This is a consequence of the fact that, as typically occurs in mechatronic systems, in complex systems such as AMS an increasing number of electronic devices are present alongside reliable mechanical elements, and electronic devices are more vulnerable by their very nature. The problem of diagnosis and fault isolation in a generic dynamical system consists in designing a processing unit that, by appropriately processing the inputs and outputs of the dynamical system, is capable of detecting incipient faults on the plant devices and reconfiguring the control system so as to guarantee satisfactory performance. The designer should be able to formally verify the product, certifying that, in its final implementation, it will perform its required function while guaranteeing the desired level of reliability and safety; the next step is to prevent faults and, when necessary, reconfigure the control system so that faults are tolerated. On this topic, important results on the formal verification of logic control, fault diagnosis, and fault tolerant control derive from Discrete Event Systems theory. The aim of this work is to define a design pattern and a control architecture to help the designer of control logic in industrial automated systems. The work starts with a brief discussion of the main characteristics of industrial automated systems in Chapter 1. Chapter 2 surveys the state of the software engineering paradigm as applied to industrial automation. Chapter 3 presents an architecture for industrial automated systems based on the new concept of Generalized Actuator, showing its benefits, while in Chapter 4 this architecture is refined using a novel entity, the Generalized Device, in order to achieve better reusability and modularity of the control logic. In Chapter 5 a new approach based on Discrete Event Systems is presented for the problem of formal software verification, together with an active fault tolerant control architecture using online diagnostics. Finally, concluding remarks and some ideas on new directions to explore are given. Appendix A briefly reports some concepts and results about Discrete Event Systems which should help the reader understand some crucial points in Chapter 5, while Appendix B gives an overview of the experimental testbed of the Laboratory of Automation of the University of Bologna, used to validate the approach presented in Chapters 3, 4, and 5. Appendix C reports some component models used in Chapter 5 for formal verification.
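To make the Discrete Event Systems viewpoint concrete, here is a minimal Python sketch of a plant component modelled as states, events, and transitions, together with the kind of reachability check that underlies formal verification and fault diagnosis. The actuator model itself is invented for illustration, not taken from the thesis.

```python
# Transition function of a tiny DES model of an actuator:
# (source state, event) -> destination state.
transitions = {
    ("idle",   "start"): "moving",
    ("moving", "done"):  "idle",
    ("moving", "jam"):   "fault",
    ("fault",  "reset"): "idle",
}

def reachable(initial):
    """All states reachable from `initial` via any sequence of events."""
    seen, frontier = {initial}, [initial]
    while frontier:
        s = frontier.pop()
        for (src, _event), dst in transitions.items():
            if src == s and dst not in seen:
                seen.add(dst)
                frontier.append(dst)
    return seen

print(reachable("idle"))             # {'idle', 'moving', 'fault'}
print("fault" in reachable("idle"))  # True: a jam can drive the plant to fault
```

Verification questions ("can the plant reach a fault state?", "is every fault recoverable?") become graph queries on such models, which is what makes the DES formulation amenable to automated checking.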

Relevance:

20.00%

Publisher:

Abstract:

Synthetic biology has recently undergone great development: many papers have been published and many applications presented, spanning from the production of biopharmaceuticals to the synthesis of bioenergetic substrates or industrial catalysts. But despite these advances, most of the applications are quite simple and do not fully exploit the potential of this discipline. This limitation in complexity has many causes, such as the incomplete characterization of some components or the intrinsic variability of biological systems, but one of the most important is the inability of the cell to sustain the additional metabolic burden introduced by a complex circuit. The objective of the project of which this work is part is to solve this problem by engineering a multicellular behaviour in prokaryotic cells. This system will introduce a cooperative behaviour that makes it possible to implement complex functionalities that cannot be obtained with a single cell. In particular, the goal is to implement Leader Election, a procedure first devised in the field of distributed computing to identify a single process as organizer and coordinator of a series of tasks assigned to the whole population. The election of the leader greatly simplifies the computation by providing centralized control. Furthermore, this system may even be useful for evolutionary studies that aim to explain how complex organisms evolved from unicellular systems. The work presented here describes, in particular, the design and the experimental characterization of a component of the circuit that solves the Leader Election problem. This module, composed of a hybrid promoter and a gene, is activated in the non-leader cells after receiving the signal that a leader is present in the colony. The most important element in this case is the hybrid promoter; it was realized in different versions, applying the heuristic rules stated in [22], and their activity was experimentally tested. The objective of the experimental characterization was to test the response of the genetic circuit to the introduction, in the cellular environment, of particular molecules, the inducers, which can be considered the inputs of the system. The desired behaviour is similar to that of a logic AND gate, in which the output, represented by the luminous signal produced by a fluorescent protein, is high only in the presence of both inducers. The robustness and stability of this behaviour were tested by changing the concentrations of the input signals and building dose-response curves. From these data it is possible to conclude that the analysed constructs have an AND-like behaviour over a wide range of inducer concentrations, even if many differences can be identified in the expression profiles of the different constructs. This variability reflects the fact that the input and output signals are continuous, so a binary representation cannot capture the full complexity of the behaviour. The module of the circuit considered in this analysis has a fundamental role in the realization of the intercellular communication system that is necessary for the cooperative behaviour to take place. For this reason, the second phase of the characterization focused on the analysis of signal transmission. In particular, the interaction between this element and the one responsible for emitting the chemical signal was tested.
The desired behaviour is still similar to a logic AND, since, even in this case, the output signal is determined by the hybrid promoter activity. The experimental results demonstrated that the systems behave correctly, even if there is still substantial variability between them. The dose-response curves highlighted that stricter constraints on the inducer concentrations need to be imposed in order to obtain a clear separation between the two expression levels. In the concluding chapter, the DNA sequences of the hybrid promoters are analysed in an attempt to identify the regulatory elements most important for determining gene expression. Given the available data, it was not possible to draw definitive conclusions. Finally, a few considerations on promoter engineering and the realization of complex circuits are presented. This section briefly recalls some of the problems outlined in the introduction and offers a few possible solutions.
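As an illustration of why a binary description misses part of the picture, the sketch below models an AND-like gate as the product of two Hill activation terms, one per inducer. All parameter values (K, n) are invented placeholders, not values fitted to the data discussed here.

```python
def hill(x, K, n):
    """Hill activation term: 0 with no inducer, saturating towards 1."""
    return x**n / (K**n + x**n)

def and_gate_output(ind1, ind2, K1=10.0, K2=5.0, n1=2.0, n2=2.0):
    """Normalized fluorescent output: high only when both inducers
    are present, but graded rather than strictly binary."""
    return hill(ind1, K1, n1) * hill(ind2, K2, n2)

# Sweep the four logical corners (concentrations in arbitrary units).
for c1, c2 in [(0, 0), (100, 0), (0, 50), (100, 50)]:
    print(c1, c2, round(and_gate_output(c1, c2), 3))
# Only the (100, 50) corner approaches 1; intermediate concentrations
# give intermediate outputs, which is the continuous behaviour that a
# binary representation cannot capture.
```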

Relevance:

20.00%

Publisher:

Abstract:

Tectonic and aesthetic implications of integrated monocoque logics and stress-line analysis in architecture.

Relevance:

20.00%

Publisher:

Abstract:

Millisecond Pulsars (MSPs) are fast-rotating, highly magnetized neutron stars. According to the "canonical recycling scenario", MSPs form in binary systems containing a neutron star which is spun up through mass accretion from the evolving companion. The final stage therefore consists of a binary made of an MSP and the core of the deeply peeled companion. In recent years, however, an increasing number of systems deviating from these expectations has been discovered, strongly indicating that our understanding of MSPs is far from complete. The identification of the optical companions to binary MSPs is crucial to constrain the formation and evolution of these objects. In dense environments such as Globular Clusters (GCs), it also allows us to gain insight into the cluster's internal dynamics. Using deep photometric data acquired with both space- and ground-based telescopes, we identified 5 new companions to MSPs: three located in GCs and two in the Galactic field. The three new identifications in GCs increased by 50% the number of such objects known before this thesis. They are all non-degenerate stars, at odds with the expectations of the "canonical recycling scenario". These results therefore suggest either that transitory phases should also be taken into account, or that dynamical processes, such as exchange interactions, play a crucial role in the evolution of MSPs. We also performed a spectroscopic follow-up of the companion to PSRJ1740-5340A in the GC NGC 6397, confirming that it is a deeply peeled star descending from a ~0.8 Msun progenitor. This nicely confirms the theoretical expectations about the formation and evolution of MSPs.

Relevance:

20.00%

Publisher:

Abstract:

This work focused mainly on two aspects of the kinetics of phase separation in binary mixtures. In the first part, we studied the interplay of hydrodynamics and the phase separation of binary mixtures. A considerably flat container (a laterally extended geometry) with an aspect ratio of 14:1 (diameter:height) was chosen, so that any hydrodynamic instabilities, if they arose, could be tracked. Two binary mixtures were studied. One was a mixture of methanol and hexane, doped with 5% ethanol, which phase separated on cooling. The second was a mixture of butoxyethanol and water, doped with 2% decane, which phase separated on heating. The dopants were added to bring the phase transition temperature down to around room temperature.

Although much work has already been done on classical hydrodynamic instabilities, little has been done on the coupling between phase separation and hydrodynamic instabilities. This work aimed at understanding the influence of phase separation in initiating a hydrodynamic instability, and vice versa. Another aim was to understand the influence of the applied temperature protocol on the emergence of patterns characteristic of hydrodynamic instabilities.

On slowly and continuously cooling the system at specific cooling rates, patterns were observed in the first mixture at the start of phase separation. They resembled the patterns observed in the classical Rayleigh-Bénard instability, which arises when a liquid is continuously heated from below. To suppress this classical convection, the cooling setup was tuned so that the lower side of the sample always remained cooler than the top by a few millikelvins. We found that the nature of the patterns changed with the cooling rate, with stable patterns appearing at a specific cooling rate (1 K/h). On the basis of the cooling protocol, we estimated a modified Rayleigh number for our system. We found that the estimated modified Rayleigh number is near the critical value for instability at cooling rates between 0.5 K/h and 1 K/h. This is consistent with our experimental findings.

The origin of the patterns, in spite of the lower side being relatively colder than the top, points to two possible causes. 1) During phase separation, droplets of either phase are formed, which releases latent heat. Our microcalorimetry measurements show that the rise in temperature during the first phase separation is of the order of 10-20 millikelvins, which in some cases is enough to reverse the applied temperature bias; thus phase separation in itself initiates a hydrodynamic instability. 2) The second cause comes from the cooling protocol itself. The sample was cooled from above and below. At sufficiently high cooling rates, there are situations where the interior of the sample is relatively hotter than both the top and the bottom, which is sufficient to create an instability within the cell. Our experiments at higher cooling rates (5 K/h and above) show complex patterns, which hints that there is enough convection even before phase separation occurs. In fact, theoretical work by Dr. Hayase shows that patterns can arise in a system without latent heat, with symmetrical cooling from top and bottom. The simulations also show that the patterns do not span the entire height of the sample cell. This is again consistent with the cell sizes measured in our experiment.

The second mixture also showed patterns at specific heating rates when it was continuously heated to induce phase separation. In this case, though, the sample was turbid for a long time before patterns appeared, and a meniscus was most probably formed before they emerged. We attribute the patterns in this case to Marangoni convection, which is present in systems with an interface, where local differences in surface tension give rise to an instability. Our estimates of the Rayleigh number also give a value significantly lower than that required for an RB-type instability.

In the first part of the work, therefore, we identify two different kinds of hydrodynamic instabilities in two different mixtures. Both are observed during, or after, the first phase separation. Our patterns resemble classical convection patterns, but here they originate from the phase separation and the cooling protocol.

In the second part of the work, we focused on the kinetics of phase separation in a polymer solution (polystyrene and methylcyclohexane) cooled continuously far down into the two-phase region. Oscillations in turbidity, denoting material exchange between the phases, are seen. Three processes contribute to the phase separation: nucleation of droplets, their growth and coalescence, and their subsequent sedimentation. Experiments on low-molecular binary mixtures had led to models of oscillation [43] which considered sedimentation time scales much faster than the time scales of nucleation and growth. The size and shape of the sample therefore did not matter in such situations; the oscillations in turbidity were volume-dominated. The present work aimed at understanding the influence of sedimentation time scales in polymer mixtures. Three sample heights with the same composition were studied side by side. We found that the period increased with the sample height, showing that the sedimentation time determines the period of oscillation in polymer solutions. We experimented with different cooling rates and different compositions of the mixture, and found that the periods are still determined by the sample height, and therefore by the sedimentation time.

We also see that turbidity emerges in two ways: either from the interface, or throughout the sample. We suggest that oscillations starting from the interface are due to satellite droplets that are formed on droplet coalescence at the interface. These satellite droplets are then advected to the top of the sample, where they grow, coalesce, and sediment. This type of oscillation would not require the system to pass the energy barrier required for homogeneous nucleation throughout the sample, and the mechanism works best in samples where the droplets can be effectively advected throughout. In our experiments, we see more interface-dominated oscillations in the smaller cells and at lower cooling rates, where droplet advection is favourable. In larger samples and at higher cooling rates, we mostly see the whole sample become turbid homogeneously, which requires the system to pass the energy barrier for homogeneous nucleation.

Oscillations, in principle, occur because the system needs to pass an energy barrier for nucleation. The height of the barrier decreases with increasing supersaturation, which in turn results from the applied temperature ramp. This gives rise to a period in which the system is clear, in between the turbid periods. At certain specific cooling rates, the system can follow a path such that the start of a turbid period coincides with the vanishing of the last turbid period, thus eliminating the clear periods. This means the suppression of oscillations altogether. In fact, we experimentally present a case where, at a certain cooling rate, oscillations indeed vanish.

Thus we find through this work that the kinetics of phase separation in a polymer solution differs from that of a low-molecular system: sedimentation time scales become relevant, and therefore so do the shape and size of the sample. The role of the interface in initiating turbid periods also becomes much more prominent in this system compared to low-molecular mixtures.

In summary, some fundamental properties of the kinetics of phase separation in binary mixtures were studied. While the first part of the work described the close interplay of the first phase separation with hydrodynamic instabilities, the second part investigated the nature and determining factors of the oscillations observed when the system is cooled deep into the two-phase region. Both cases show how the geometry of the cell can affect the kinetics of phase separation. This study leads to a deeper fundamental understanding of the factors contributing to the kinetics of phase separation, and of what can be controlled and tuned in practical cases.
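For reference, the Rayleigh number that decides whether buoyancy-driven convection can set in is the standard dimensionless group sketched below. The property values are generic placeholders for an organic liquid, not the measured parameters of the mixtures or the "modified" number estimated in this work.

```python
# Classical Rayleigh number Ra = g * beta * dT * h^3 / (nu * kappa);
# convection sets in when Ra exceeds a critical value (~1708 for
# rigid-rigid boundaries). All values below are placeholder estimates.
g     = 9.81   # gravitational acceleration, m/s^2
beta  = 1e-3   # thermal expansion coefficient, 1/K
dT    = 0.02   # effective temperature difference, K (tens of mK, as measured)
h     = 1e-3   # fluid layer height, m
nu    = 1e-6   # kinematic viscosity, m^2/s
kappa = 1e-7   # thermal diffusivity, m^2/s

Ra = g * beta * dT * h**3 / (nu * kappa)
print(f"Ra = {Ra:.2f}")  # compare against the critical value for instability
```

With millikelvin-scale temperature differences, Ra is extremely sensitive to the layer height (it scales as h cubed), which is one reason the cell geometry matters so much in these experiments.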
