977 results for ORDER OF REASONS
Abstract:
One way to approach the question of whether non-derivative partial reasons of any kind exist is to explain what partial reasons are and then to ask whether there are any reasons of that kind. If such reasons exist, then it is at least possible that there are partial reasons of friendship. This is the approach I adopt here, and it yields interesting results. The first concerns the structure of partial reasons. It is at least a necessary condition for a reason to be partial that it have an explicit relational component. Technically speaking, this component is a relatum in the reason relation, which is itself a relation between the person to whom the reason applies and the person concerned by the action for which there is a reason. The second conclusion of this paper is that this relational component is also required by many kinds of reasons granted to be impartial. To avoid trivializing the distinction between partial and impartial reasons, we must apply an additional sufficient condition. Finally, although it may prove possible to distinguish impartial reasons that have a relational component from partial reasons, this approach suggests that the question of whether ethics is partial or impartial will have to be settled at the level of normative ethics, or at the very least, that it cannot be settled at the level of discourse about the nature of reasons for action.
Abstract:
The study of variable stars is an important topic in modern astrophysics. Since the advent of powerful telescopes and high-resolution CCDs, variable star data have been accumulating on the order of petabytes. Such a huge amount of data requires automated methods as well as human experts. This thesis is devoted to the analysis of variable stars' astronomical time series data and hence belongs to the interdisciplinary field of Astrostatistics. For an observer on Earth, stars whose apparent brightness changes over time are called variable stars. The variation in brightness may be regular (periodic), quasi-periodic (semi-periodic) or irregular (aperiodic), and is caused by various mechanisms. In some cases the variation is due to internal thermonuclear processes; such stars are known as intrinsic variables. In other cases it is due to external processes, such as eclipses or rotation; these are known as extrinsic variables. Intrinsic variables can be further grouped into pulsating variables, eruptive variables and flare stars. Extrinsic variables are grouped into eclipsing binary stars and chromospherically active stars. Pulsating variables can again be classified into Cepheid, RR Lyrae, RV Tauri, Delta Scuti, Mira, etc. The eruptive or cataclysmic variables are novae, supernovae, etc., which occur rarely and are not periodic phenomena. Most of the other variations are periodic in nature. Variable stars can be observed in many ways, such as photometry, spectrophotometry and spectroscopy. A sequence of photometric observations of a variable star produces time series data containing time, magnitude and error. The plot of a variable star's apparent magnitude against time is known as the light curve. If the time series data are folded on a period, the plot of apparent magnitude against phase is known as the phased light curve. The unique shape of the phased light curve is a characteristic of each type of variable star.
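Phase folding as described above can be sketched in a few lines. The sinusoidal light curve, the 2.5-day period and the magnitudes below are illustrative toy data, not survey observations:

```python
import numpy as np

def fold_light_curve(t, mag, period):
    """Fold observation times on a trial period.

    Returns phases in [0, 1) paired with the original magnitudes,
    sorted by phase, so that plotting (phase, mag) gives the
    phased light curve described above.
    """
    phase = (t % period) / period          # fractional phase in [0, 1)
    order = np.argsort(phase)
    return phase[order], mag[order]

# toy example: a noiseless sinusoidal variable with a 2.5-day period
t = np.linspace(0.0, 50.0, 200)
mag = 12.0 + 0.3 * np.sin(2 * np.pi * t / 2.5)
phase, folded_mag = fold_light_curve(t, mag, 2.5)
```

Folding on the correct period lines all cycles up on a single curve; folding on a wrong period scatters the points, which is why period determination is so central to classification.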
One way to identify the type of a variable star and classify it is to have an expert visually inspect the phased light curve. For the last several years, automated algorithms have been used to classify groups of variable stars with the help of computers. Research on variable stars can be divided into stages such as observation, data reduction, data analysis, modeling and classification. Modeling of variable stars helps to determine short-term and long-term behaviour, to construct theoretical models (e.g., the Wilson-Devinney model for eclipsing binaries) and to derive stellar properties such as mass, radius, luminosity, temperature, internal and external structure, chemical composition and evolution. Classification requires the determination of basic parameters such as period, amplitude and phase, as well as some derived parameters. Of these, the period is the most important, since a wrong period can lead to sparse light curves and misleading information. Time series analysis applies mathematical and statistical tests to data in order to quantify the variation, understand the nature of time-varying phenomena, gain physical understanding of the system and predict its future behavior. Astronomical time series usually suffer from unevenly spaced time instants, varying error conditions and the possibility of large gaps. For ground-based observations this is due to daylight and varying weather conditions, while observations from space may suffer from the impact of cosmic ray particles. Many large-scale astronomical surveys, such as MACHO, OGLE, EROS, ROTSE, PLANET, Hipparcos, MISAO, NSVS, ASAS, Pan-STARRS, Kepler, ESA, Gaia, LSST and CRTS, provide variable star time series data, even though their primary intention is not variable star observation.
The Center for Astrostatistics at Pennsylvania State University was established to help the astronomical community with statistical tools for harvesting and analysing archival data. Most of these surveys release their data to the public for further analysis. There exist many period search algorithms for astronomical time series analysis, which can be classified into parametric methods (which assume some underlying distribution for the data) and non-parametric methods (which do not assume any statistical model, such as a Gaussian). Many of the parametric methods are based on variations of the discrete Fourier transform, such as the Generalised Lomb-Scargle periodogram (GLSP) by Zechmeister (2009) and Significant Spectrum (SigSpec) by Reegen (2007). Non-parametric methods include Phase Dispersion Minimisation (PDM) by Stellingwerf (1978) and the cubic spline method by Akerlof (1994). Even though most of these methods can be automated, none of them can fully recover the true periods. Wrong period detections can have several causes, such as power leakage to other frequencies, which is due to the finite total interval, finite sampling interval and finite amount of data. Another problem is aliasing, which is due to the influence of regular sampling. Spurious periods also appear due to long gaps, and power flow to harmonic frequencies is an inherent problem of Fourier methods. Hence obtaining the exact period of a variable star from its time series data is still a difficult problem for huge databases subjected to automation. As Matthew Templeton, AAVSO, states, "Variable star data analysis is not always straightforward; large-scale, automated analysis design is non-trivial". Derekas et al. (2007) and Deb et al. (2010) state, "The processing of huge amounts of data in these databases is quite challenging, even when looking at seemingly small issues such as period determination and classification".
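Of the parametric methods mentioned above, a basic Lomb-Scargle period search over unevenly sampled data can be sketched with SciPy. The synthetic light curve, noise level and trial period grid below are assumptions for illustration, not the thesis's survey data:

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)
# unevenly sampled times, as is typical for ground-based surveys
t = np.sort(rng.uniform(0.0, 40.0, 300))
true_period = 3.7
mag = 14.0 + 0.4 * np.sin(2 * np.pi * t / true_period) \
      + 0.05 * rng.normal(size=t.size)

# trial angular frequencies; the best period maximizes the periodogram
periods = np.linspace(1.0, 10.0, 5000)
omega = 2 * np.pi / periods
power = lombscargle(t, mag - mag.mean(), omega)  # pre-centred magnitudes
best_period = periods[np.argmax(power)]
```

Note that `lombscargle` takes angular frequencies, and that aliasing and harmonic peaks (the failure modes discussed above) show up as secondary maxima in `power`.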
It will be beneficial for the variable star astronomical community if basic parameters such as period, amplitude and phase can be obtained more accurately when huge time series databases are subjected to automation. In the present thesis work, the theories of four popular period search methods are studied, the strengths and weaknesses of these methods are evaluated by applying them to two survey databases, and finally a modified form of the cubic spline method is introduced to confirm the exact period of a variable star. For the classification of newly discovered variable stars and their entry in the "General Catalogue of Variable Stars" or other databases such as the "Variable Star Index", the characteristics of the variability have to be quantified in terms of variable star parameters.
Abstract:
The structural, electronic and magnetic properties of one-dimensional 3d transition-metal (TM) monoatomic chains having linear, zigzag and ladder geometries are investigated in the framework of first-principles density-functional theory. The stability of long-range magnetic order along the nanowires is determined by computing the corresponding frozen-magnon dispersion relations as a function of the 'spin-wave' vector q. First, we show that the ground-state magnetic orders of V, Mn and Fe linear chains at the equilibrium interatomic distances are non-collinear (NC) spin-density waves (SDWs) with characteristic equilibrium wave vectors q that depend on the composition and interatomic distance. The electronic and magnetic properties of these novel spin-spiral structures are discussed from a local perspective by analyzing the spin-polarized electronic densities of states, the local magnetic moments and the spin-density distributions for representative values of q. Second, we investigate the stability of NC spin arrangements in Fe zigzag chains and ladders. We find that the non-collinear SDWs are remarkably stable in the biatomic chains (square ladder), whereas ferromagnetic order (q = 0) dominates in zigzag chains (triangular ladders). The different magnetic structures are interpreted in terms of the corresponding effective exchange interactions J(ij) between the local magnetic moments μ(i) and μ(j) at atoms i and j. The effective couplings are derived by fitting a classical Heisenberg model to the ab initio magnon dispersion relations. In addition, they are analyzed in the framework of general magnetic phase diagrams having arbitrary first, second and third nearest-neighbor (NN) interactions J(ij). The effect of external electric fields (EFs) on the stability of NC magnetic order has been quantified for representative monoatomic free-standing and deposited chains.
We find that an external EF, applied perpendicular to the chains, favors non-collinear order in V chains, whereas it stabilizes the ferromagnetic (FM) order in Fe chains. Moreover, our calculations reveal a change in the magnetic order of V chains deposited on the Cu(110) surface in the presence of external EFs. In this case the NC spiral order, which was unstable in the absence of an EF, becomes the most favorable one when perpendicular fields of the order of 0.1 V/Å are applied. As a final application of the theory, we study the magnetic interactions within monoatomic TM chains deposited on graphene sheets. One observes that even weak chain-substrate hybridizations can modify the magnetic order. Mn and Fe chains show incommensurate NC spin configurations. Remarkably, V chains show a transition from a spiral magnetic order in the freestanding geometry to FM order when they are deposited on a graphene sheet. Some TM-terminated zigzag graphene nanoribbons, for example V- and Fe-terminated nanoribbons, also show NC spin configurations. Finally, the magnetic anisotropy energies (MAEs) of TM chains on graphene are investigated. It is shown that Co and Fe chains exhibit significant MAEs and orbital magnetic moments with an in-plane easy magnetization axis. The remarkable changes in the magnetic properties of chains on graphene are correlated with charge transfer from the TMs to NN carbon atoms. The goals and limitations of this study and the resulting perspectives for future investigations are discussed.
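The fit of a classical Heisenberg model to dispersion data is linear in the couplings, so it reduces to ordinary least squares. A minimal sketch for a 1D chain with up to third-nearest-neighbor couplings, using the common convention E(q) = -Σₙ Jₙ cos(n q a); the J values below are illustrative placeholders, not the ab initio numbers of this work:

```python
import numpy as np

# hypothetical spin-spiral energies E(q) (eV per atom) on a q-grid,
# here generated from known couplings so the fit can be checked
a = 1.0                               # interatomic distance (arbitrary units)
q = np.linspace(0.0, np.pi / a, 40)   # spin-wave vectors in the 1D Brillouin zone
J_true = np.array([-0.020, 0.006, -0.001])   # J1, J2, J3 in eV (illustrative)
E = -sum(J * np.cos(n * q * a) for n, J in enumerate(J_true, start=1))

# classical Heisenberg model: E(q) = -sum_n J_n cos(n q a); linear in J_n,
# so the couplings follow from ordinary least squares
A = np.column_stack([-np.cos(n * q * a) for n in (1, 2, 3)])
J_fit, *_ = np.linalg.lstsq(A, E, rcond=None)
```

In practice E(q) comes from the frozen-magnon calculations, and the recovered signs and magnitudes of J1, J2, J3 place the system on the magnetic phase diagram mentioned above.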
Abstract:
The challenge of reducing carbon emissions and meeting emission targets by 2050 has become a key element of energy strategy in every country. The automotive industry, as an important part of implementing these energy requirements, is conducting research to meet both energy requirements and customer requirements. Modern energy requirements demand that energy be clean, green and renewable. Customer requirements demand that products be economical, reliable and long-lived. Given increasing market requirements and a growing customer base, EVs and PHEVs are more and more important for automotive manufacturers. EVs and PHEVs normally have two key subsystems: the battery package and the power electronics, composed of critical components. A rechargeable battery is an important element for achieving cost competitiveness; it is mainly used to store energy and continuously supply energy to drive an electric motor. In order to recharge the battery and drive the electric motor, the power electronics group is an essential bridge that converts between the different energy forms required by each. In modern power electronics there are many different topologies, such as non-isolated and isolated power converters, which can be used for battery charging. One of the most widely used topologies is the multiphase interleaved power converter, primarily because of its prominent advantages: it is frequently employed to obtain good dynamic response, high efficiency and compact converter size. Concerning its usage, many detailed investigations regarding topology, control strategy and devices have been carried out. In this thesis, the core research investigates several related topics in terms of issue analysis and optimization approaches for building the magnetic components. The work starts with an introduction to the reasons for developing EVs and PHEVs and an overview of different possible topologies with regard to specific application requirements.
Because of its low component count, high reliability, high efficiency and lack of special safety requirements, the non-isolated multiphase interleaved converter is selected as the basic research topology of the W-charge project, in order to investigate its advantages and its potential when using optimized magnetic components. All proposed aspects and approaches are then investigated and analyzed in detail in order to verify the constraints and advantages of using integrated coupled inductors. Furthermore, a digital controller concept and a novel tapped-inductor topology are proposed for multiphase power converters in electric vehicle applications.
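The ripple cancellation that motivates interleaving can be illustrated numerically: summing N identical triangular inductor-current ripples, each shifted by 1/N of a switching period, shrinks the total peak-to-peak ripple. The unit-amplitude triangular model and the duty cycle below are modeling assumptions, not the converter studied in this thesis:

```python
import numpy as np

def interleaved_ripple(n_phases, duty, samples=10000):
    """Peak-to-peak total ripple of n interleaved phases, modeled as
    identical unit-amplitude triangular inductor-current ripples
    shifted by 1/n of a switching period."""
    t = np.linspace(0.0, 1.0, samples, endpoint=False)

    def tri(x):
        # unit-amplitude triangular ripple: rises during the duty interval,
        # falls during the rest of the period, zero mean
        x = x % 1.0
        up = x < duty
        y = np.where(up, x / duty, (1.0 - x) / (1.0 - duty))
        return y - y.mean()

    total = sum(tri(t - k / n_phases) for k in range(n_phases))
    return total.max() - total.min()

single = interleaved_ripple(1, 0.35)
four = interleaved_ripple(4, 0.35)   # total ripple shrinks with interleaving
```

The summed ripple also switches at N times the per-phase frequency, which is one reason interleaving allows a more compact filter and magnetic design.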
Abstract:
Recent developments in the UK concerning the reception of Digital Terrestrial Television (DTT) have indicated that, as it currently stands, DVB-T receivers may not be sufficient to maintain adequate quality of digital picture information for the consumer. There are many possible reasons why such large errors are being introduced into the system, causing reception failure. It has been suggested that one possibility is that the assumptions concerning the immunity to multipath that Coded Orthogonal Frequency Division Multiplex (COFDM) is expected to have may not be entirely accurate. Previous research has shown that multipath can indeed have an impact on DVB-T receiver performance. In the UK, proposals have been made to change the modulation from 64-QAM to 16-QAM to improve the immunity to multipath, but this paper demonstrates that the 16-QAM performance may again not be sufficient. To this end, this paper presents a deterministic approach to equalization such that a 64-QAM receiver with the simple equalizer presented here has the same order of MPEG-2 BER performance as a 16-QAM receiver without equalization, thus alleviating the requirement for broadcasters to migrate from 64-QAM to 16-QAM. Of course, adding the equalizer to a 16-QAM receiver further improves the BER as well, creating one more step towards satisfying the consumer (1).
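The raw performance gap between 16-QAM and 64-QAM can be illustrated with the standard Gray-coded square M-QAM BER approximation for an AWGN channel; this is a textbook nearest-neighbour approximation, not the multipath-channel analysis of the paper:

```python
import math

def qam_ber(M, ebn0_db):
    """Approximate bit error rate of Gray-coded square M-QAM in AWGN
    (nearest-neighbour union-bound approximation)."""
    k = math.log2(M)                       # bits per symbol
    ebn0 = 10 ** (ebn0_db / 10.0)          # linear Eb/N0
    arg = math.sqrt(3 * k * ebn0 / (M - 1))
    q = 0.5 * math.erfc(arg / math.sqrt(2))  # Gaussian Q-function
    return (4.0 / k) * (1.0 - 1.0 / math.sqrt(M)) * q

# at the same Eb/N0, 64-QAM suffers a markedly worse raw BER than 16-QAM
ber16 = qam_ber(16, 14.0)
ber64 = qam_ber(64, 14.0)
```

This orders-of-magnitude gap at equal Eb/N0 is why the 64-QAM mode needs the extra equalization described above to match unequalized 16-QAM performance.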
Abstract:
Films of isotropic nanocrystalline Pd(80)Co(20) alloys were obtained by electrodeposition onto brass substrates in plating baths maintained at different pH values. Increasing the pH of the plating bath led to an increase in mean grain size without inducing significant changes in the composition of the alloy. The magnetocrystalline anisotropy constant was estimated, and its value was of the same order of magnitude as that reported for samples with perpendicular magnetic anisotropy. First-order reversal curve (FORC) analysis revealed the presence of an important component of reversible magnetization. Also, FORC diagrams obtained at different sweep rates of the applied magnetic field revealed that this reversible component is strongly affected by kinetic effects. The slight bias observed in the irreversible part of the FORC distribution suggested the dominance of magnetizing intergrain exchange coupling over demagnetizing dipolar interactions and microstructural disorder. (c) 2009 Elsevier B.V. All rights reserved.
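The FORC distribution referred to above is conventionally defined as the mixed second derivative ρ(Ha, Hb) = -(1/2) ∂²M/(∂Ha ∂Hb) of the magnetization over reversal field Ha and applied field Hb. A minimal numerical sketch on a hypothetical magnetization surface; the smooth tanh model below is a toy, not the Pd-Co data:

```python
import numpy as np

# hypothetical magnetization surface M(Ha, Hb) on a regular grid of
# reversal fields Ha and return-branch fields Hb (arbitrary units)
ha = np.linspace(-1.0, 0.0, 60)    # reversal fields
hb = np.linspace(-1.0, 1.0, 120)   # fields along the return branch
HA, HB = np.meshgrid(ha, hb, indexing="ij")
M = np.tanh(4.0 * (HB - 0.5 * HA))  # smooth toy hysteresis surface

# FORC distribution: rho = -(1/2) * d^2 M / (dHa dHb)
dM_dHb = np.gradient(M, hb, axis=1)
rho = -0.5 * np.gradient(dM_dHb, ha, axis=0)
```

Plotted in the usual (Hc, Hu) rotated coordinates, asymmetries of ρ about Hu = 0 are what reveal the bias toward magnetizing or demagnetizing interactions discussed in the abstract.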
Abstract:
Ionic liquids, ILs, carrying long-chain alkyl groups are surface active, SAILs. We investigated the micellar properties of the SAIL 1-hexadecyl-3-methylimidazolium chloride, C(16)MeImCl, and compared the data with 1-hexadecylpyridinium chloride, C(16)PyCl, and benzyl (3-hexadecanoylaminoethyl)dimethylammonium chloride, C(15)AEtBzMe(2)Cl. The properties compared include the critical micelle concentration, cmc; the thermodynamic parameters of micellization; and the empirical polarity and water concentrations in the interfacial regions. In the temperature range from 15 to 75 degrees C, the order of cmc in H(2)O and in D(2)O is C(16)PyCl > C(16)MeImCl > C(15)AEtBzMe(2)Cl. The enthalpies of micellization, Delta H(mic)(degrees), were calculated indirectly by use of the van't Hoff treatment and directly by isothermal titration calorimetry, ITC. Calculation of the degree of counter-ion dissociation, alpha(mic), from conductivity measurements by use of the Evans equation requires knowledge of the aggregation numbers, N(agg), at different temperatures. We have introduced a reliable method for carrying out this calculation, based on the volume and length of the monomer and the dependence of N(agg) on temperature. The N(agg) calculated for C(16)PyCl and C(16)MeImCl were corroborated by light scattering measurements. Conductivity- and ITC-based Delta H(mic)(degrees) do not agree; reasons for this discrepancy are discussed. Micelle formation is entropy driven: at all studied temperatures for C(16)MeImCl; only up to 65 degrees C for C(16)PyCl; and up to 55 degrees C for C(15)AEtBzMe(2)Cl. All these data can be rationalized by considering hydrogen bonding between the head-ions of the monomers in the micellar aggregate. The empirical polarities and concentrations of interfacial water were found to be independent of the nature of the head-group. (C) 2010 Elsevier Inc. All rights reserved.
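The indirect van't Hoff route for an ionic surfactant is often written as ΔH°mic ≈ -(1 + β) R T² d(ln x_cmc)/dT, with β the degree of counter-ion binding. A sketch with a numerical derivative; the cmc values and β below are hypothetical illustrations, not the measured data of this study:

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

# hypothetical cmc data (mole fraction) vs temperature for an ionic surfactant;
# the shallow minimum in cmc(T) is the typical experimental shape
T = np.array([288.15, 298.15, 308.15, 318.15, 328.15])        # K
x_cmc = np.array([1.75e-5, 1.62e-5, 1.66e-5, 1.84e-5, 2.15e-5])
beta = 0.7   # assumed degree of counter-ion binding (1 - alpha_mic)

# van't Hoff treatment for ionic surfactants:
# Delta H_mic ~= -(1 + beta) * R * T^2 * d(ln x_cmc)/dT
dlnx_dT = np.gradient(np.log(x_cmc), T)
dH_mic = -(1.0 + beta) * R * T**2 * dlnx_dT   # J/mol at each temperature
```

With data of this shape, ΔH°mic changes sign near the cmc minimum: micellization is endothermic at low temperature and exothermic at high temperature, consistent with micelle formation being entropy driven at low T. Disagreement with direct ITC values, as noted above, is a well-known feature of the van't Hoff route.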
Abstract:
Paper presented at the Congresso Nacional de Matemática Aplicada à Indústria, November 18-21, 2014, Caldas Novas, Goiás.
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Objectives: This text presents an anatomical study of the normal bony orbital structure of a sample of different bird species belonging to the order Psittaciformes. Procedures: The bony anatomy of Psittaciformes' skulls was examined and described using cadavers of birds that were presented already dead to the Federal University of Paraná, Brazil, or had been euthanized for humane reasons. Dissections of the orbital cavity were performed under 2-4× magnification, and descriptions of the orbital bones were made from observations of macerated skulls that had been boiled and cleaned. The present paper discusses the main features of the bony orbit of psittaciform birds, describing known anatomical information but also bringing new information, mainly concerning species differences, that might help not only veterinary anatomists but also zoologists, clinicians, researchers and students of veterinary ophthalmology to better comprehend this order of birds. Results and conclusions: Variations in the anatomical conformation of the bony elements of the orbit were observed in different species of Psittaciformes. Based on these differences, Psittaciformes were classified into two different groups. The first group shows an enclosed (complete) bony orbit formed by the junction of the orbital and postorbital processes, creating a suborbital arch. The second group essentially lacked a suborbital arch, presenting an open (incomplete) bony orbit, typical of most modern birds. In the latter group, orbital and postorbital processes are present.
Abstract:
Includes bibliography
Abstract:
The purpose of this study was to investigate the influence of exercise order on one-repetition maximum (1-RM) and ten-repetition maximum (10-RM) strength gains after 6 weeks of resistance training (RT) in trained men. Sixteen men were randomly assigned to two groups based on the order of exercises performed during training sessions: one group performed large muscle group exercises first and progressed to small muscle group exercises (LG-SM), while the second group performed the opposite sequence, starting with small muscle group exercises and progressing to large muscle group exercises (SM-LG). Four RT sessions were conducted per week; all exercises were performed for three sets of 8-12 repetitions with 1-min rest intervals between sets. Maximal and submaximal strength were assessed at baseline and after 6 weeks of RT with 1-RM and 10-RM testing for the bench press (BP), lat pulldown (LPD), triceps pulley extension (TE) and biceps curl (BC). Two-way ANOVA for the 1-RM and 10-RM tests indicated a significant group × time interaction. The 1-RM values significantly increased for all exercises in both groups (P < 0.05), but were not significantly different between groups. However, effect size (ES) data indicated that the LG-SM group exhibited a greater magnitude of gains (1-RM and 10-RM) for the BP and LPD exercises. Conversely, ES indicated that the SM-LG group exhibited a greater magnitude of gains (1-RM and 10-RM) for the TE and BC exercises. In conclusion, the results suggest that upper body movements should be prioritized and performed according to individual needs to maximize maximal and submaximal strength. © 2013 Scandinavian Society of Clinical Physiology and Nuclear Medicine.
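One common effect-size statistic for such pre/post strength comparisons is Cohen's d. The abstract does not state which formula was used, so the pooled-SD version and the strength values below are illustrative assumptions:

```python
import math

def effect_size(pre, post):
    """Cohen's d for pre/post strength scores, using the pooled SD
    (a common choice; the abstract does not specify its exact formula)."""
    n1, n2 = len(pre), len(post)
    m1 = sum(pre) / n1
    m2 = sum(post) / n2
    v1 = sum((x - m1) ** 2 for x in pre) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in post) / (n2 - 1)
    sd_pooled = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m2 - m1) / sd_pooled

# illustrative 1-RM bench press values (kg) before and after training
d = effect_size([100, 105, 95, 110], [108, 112, 101, 118])
```

Unlike the p-value from the ANOVA, d quantifies the magnitude of the gain, which is how the study distinguished the two groups despite non-significant between-group differences.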
Abstract:
Supported by the Functional Discourse Grammar theoretical model, as proposed by Hengeveld (2005), this paper aims to show that the order of modifiers at the Representational Level in spoken Brazilian Portuguese is determined by scope relations according to the layers of property, state-of-affairs and propositional content. This kind of distribution indicates that, far from being freely ordered as suggested by traditional grammarians, modifiers have a preferred position determined by semantic relations, which may be changed only for pragmatic and structural reasons.
Abstract:
In Performance-Based Earthquake Engineering (PBEE), evaluating the seismic performance (or seismic risk) of a structure at a designed site has gained major attention, especially in the past decade. One of the objectives in PBEE is to quantify the seismic reliability of a structure (due to future random earthquakes) at a site. For that purpose, Probabilistic Seismic Demand Analysis (PSDA) is utilized as a tool to estimate the Mean Annual Frequency (MAF) of exceeding a specified value of a structural Engineering Demand Parameter (EDP). This dissertation focuses mainly on applying the average of a certain number of spectral acceleration ordinates over a certain interval of periods, Sa,avg(T1,...,Tn), as a scalar ground motion Intensity Measure (IM) when assessing the seismic performance of inelastic structures. Since the interval of periods over which Sa,avg is computed is related to the greater or lesser influence of higher vibration modes on the inelastic response, it is appropriate to speak of improved IMs. The results using these improved IMs are compared with conventional elastic-based scalar IMs (e.g., pseudo-spectral acceleration, Sa(T1), or peak ground acceleration, PGA) and the advanced inelastic-based scalar IM (i.e., inelastic spectral displacement, Sdi). The advantages of applying improved IMs are: (i) "computability" of the seismic hazard according to traditional Probabilistic Seismic Hazard Analysis (PSHA), because ground motion prediction models are already available for Sa(Ti), and hence it is possible to employ existing models to assess hazard in terms of Sa,avg; and (ii) "efficiency", or smaller variability of structural response, which was minimized to assess the optimal range over which to compute Sa,avg. More work is needed to also assess the desirable properties of "sufficiency" and "scaling robustness", which are disregarded in this dissertation.
However, for ordinary records (i.e., with no pulse-like effects), using the improved IMs is found to be more accurate than using the elastic- and inelastic-based IMs. For structural demands that are dominated by the first mode of vibration, the advantage of using Sa,avg can be negligible relative to the conventionally used Sa(T1) and the advanced Sdi. For structural demands with significant higher-mode contribution, an improved scalar IM that incorporates higher modes needs to be utilized. In order to fully understand the influence of the IM on the seismic risk, a simplified closed-form expression for the probability of exceeding a limit-state capacity was chosen as a reliability measure under seismic excitations and implemented for Reinforced Concrete (RC) frame structures. This closed-form expression is particularly useful for the seismic assessment and design of structures, taking into account the uncertainty in the generic variables, structural "demand" and "capacity", as well as the uncertainty in seismic excitations. The assumed framework employs nonlinear Incremental Dynamic Analysis (IDA) procedures in order to estimate the variability in the response of the structure (demand) to seismic excitations, conditioned on the IM. The estimation of the seismic risk using the simplified closed-form expression is affected by the IM, because the final seismic risk is not constant, although it remains of the same order of magnitude. Possible reasons concern the assumed non-linear model, or the insufficiency of the selected IM. Since it is impossible to state what the "real" probability of exceeding a limit state is by looking at the total risk, the only way forward is the optimization of the desirable properties of an IM.
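Given a response spectrum, Sa,avg(T1,...,Tn) is commonly computed as the geometric mean of spectral ordinates over the chosen period range. The spectrum shape and period interval below are illustrative assumptions, not those of the dissertation:

```python
import numpy as np

def sa_avg(periods, sa, t_low, t_high, n=10):
    """Sa,avg(T1,...,Tn): geometric mean of spectral-acceleration ordinates
    at n periods spaced over [t_low, t_high], interpolated from a response
    spectrum (periods, sa). The geometric mean is one common definition."""
    ts = np.linspace(t_low, t_high, n)
    vals = np.interp(ts, periods, sa)
    return np.exp(np.mean(np.log(vals)))   # geometric mean

# illustrative smooth spectrum: Sa constant up to 0.5 s, then ~1/T decay
periods = np.linspace(0.05, 4.0, 200)
sa = np.where(periods < 0.5, 1.0, 0.5 / periods)
im = sa_avg(periods, sa, 0.2, 2.0)
```

Because the geometric mean of Sa(Ti) is log-linear in the individual ordinates, existing ground motion prediction models for Sa(Ti) can be combined to give the hazard in terms of Sa,avg, which is the "computability" advantage cited above.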
Abstract:
This work focused mainly on two aspects of the kinetics of phase separation in binary mixtures. In the first part, we studied the interplay of hydrodynamics and the phase separation of binary mixtures. A considerably flat container (a laterally extended geometry) with an aspect ratio of 14:1 (diameter:height) was chosen, so that any hydrodynamic instabilities, if they arose, could be tracked. Two binary mixtures were studied. One was a mixture of methanol and hexane, doped with 5% ethanol, which phase separated under cooling. The second was a mixture of butoxyethanol and water, doped with 2% decane, which phase separated under heating. The dopants were added to bring the phase transition temperature down to around room temperature.

Although much work has already been done on classical hydrodynamic instabilities, not much has been done on understanding the coupling between phase separation and hydrodynamic instabilities. This work aimed at understanding the influence of phase separation in initiating a hydrodynamic instability, and vice versa. Another aim was to understand the influence of the applied temperature protocol on the emergence of patterns characteristic of hydrodynamic instabilities.

On slowly and continuously cooling the system at specific cooling rates, patterns were observed in the first mixture at the start of phase separation. They resembled the patterns observed in the classical Rayleigh-Bénard instability, which arises when a liquid is continuously heated from below. To suppress this classical convection, the cooling setup was tuned such that the lower side of the sample always remained cooler, by a few millikelvins, relative to the top. We found that the nature of the patterns changed with different cooling rates, with stable patterns appearing for a specific cooling rate (1 K/h). On the basis of the cooling protocol, we estimated a modified Rayleigh number for our system.
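A classical Rayleigh number estimate of the kind mentioned above takes the form Ra = g·α·ΔT·h³/(ν·κ), with thermal expansion coefficient α, kinematic viscosity ν and thermal diffusivity κ. The fluid properties below are placeholder values for illustration, not the measured properties of the mixtures studied:

```python
def rayleigh_number(delta_t, height, g=9.81,
                    alpha=1.2e-3, nu=6.0e-7, kappa=9.0e-8):
    """Classical Rayleigh number Ra = g * alpha * dT * h^3 / (nu * kappa).
    alpha (1/K), nu (m^2/s) and kappa (m^2/s) are illustrative placeholders,
    not the measured values for the methanol/hexane mixture."""
    return g * alpha * delta_t * height**3 / (nu * kappa)

# a millikelvin-scale temperature bias across a millimetre-scale layer
ra = rayleigh_number(delta_t=0.015, height=1.0e-3)
```

Convection sets in when Ra exceeds a critical value (about 1708 for rigid boundaries), which is why millikelvin-scale latent-heat releases in thin layers can sit near the instability threshold.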
We found that the estimated modified Rayleigh number is near the critical value for instability for cooling rates between 0.5 K/h and 1 K/h. This is consistent with our experimental findings.

The origin of the patterns, in spite of the lower side being relatively colder than the top, points to two possible reasons. 1) During phase separation, droplets of either phase are formed, which releases latent heat. Our microcalorimetry measurements show that the rise in temperature during the first phase separation is on the order of 10-20 millikelvins, which in some cases is enough to reverse the applied temperature bias. Thus phase separation in itself initiates a hydrodynamic instability. 2) The second reason comes from the cooling protocol itself. The sample was cooled from above and below. At sufficiently high cooling rates, there are situations where the interior of the sample is relatively hotter than both the top and the bottom of the sample. This is sufficient to create an instability within the cell. Our experiments at higher cooling rates (5 K/h and above) show complex patterns, which hints that there is enough convection even before phase separation occurs. In fact, theoretical work done by Dr. Hayase shows that patterns can arise in a system without latent heat, with symmetrical cooling from top and bottom. The simulations also show that the patterns do not span the entire height of the sample cell. This is again consistent with the cell sizes measured in our experiment.

The second mixture also showed patterns at specific heating rates, when it was continuously heated to induce phase separation. In this case, though, the sample was turbid for a long time before patterns appeared. A meniscus was most probably formed before the patterns emerged. We attribute the patterns in this case to Marangoni convection, which is present in systems with an interface, where local differences in surface tension give rise to an instability.
Our estimates of the Rayleigh number also give a value significantly lower than that required for an RB-type instability.

In the first part of the work, therefore, we identify two different kinds of hydrodynamic instabilities in two different mixtures. Both are observed during, or after, the first phase separation. Our patterns compare with the classical convection patterns, but here they originate from phase separation and the cooling protocol.

In the second part of the work, we focused on the kinetics of phase separation in a polymer solution (polystyrene and methylcyclohexane), which is cooled continuously far down into the two-phase region. Oscillations in turbidity, denoting material exchange between the phases, are seen. Three processes contribute to the phase separation: nucleation of droplets, their growth and coalescence, and their subsequent sedimentation. Experiments in low-molecular binary mixtures had led to models of oscillation [43] which considered sedimentation time scales much faster than the time scales of nucleation and growth. The size and shape of the sample therefore did not matter in such situations; the oscillations in turbidity were volume-dominated. The present work aimed at understanding the influence of sedimentation time scales for polymer mixtures. Three sample heights with the same composition were studied side by side. We found that the periods increased with the sample height, showing that sedimentation time determines the period of oscillations in the polymer solutions. We experimented with different cooling rates and different compositions of the mixture, and found that the periods are still determined by the sample height, and therefore by sedimentation time.

We also see that turbidity emerges in two ways: either from the interface, or throughout the sample. We suggest that oscillations starting from the interface are due to satellite droplets that are formed on droplet coalescence at the interface.
These satellite droplets are then advected to the top of the sample, where they grow, coalesce and sediment. This type of oscillation would not require the system to pass the energy barrier required for homogeneous nucleation throughout the sample. This mechanism works best in samples where the droplets can be effectively advected throughout the sample. In our experiments, we see more interface-dominated oscillations in the smaller cells and at lower cooling rates, where droplet advection is favourable. In larger samples and at higher cooling rates, we mostly see the whole sample become turbid homogeneously, which requires the system to pass the energy barrier for homogeneous nucleation.

Oscillations, in principle, occur because the system needs to pass an energy barrier for nucleation. The height of the barrier decreases with increasing supersaturation, which in turn results from the applied temperature ramp. This gives rise to a period in which the system is clear, in between the turbid periods. At certain specific cooling rates, the system can follow a path such that the start of a turbid period coincides with the vanishing of the last turbid period, thus eliminating the clear periods. This means suppression of the oscillations altogether. In fact, we experimentally present a case where, at a certain cooling rate, the oscillations indeed vanish.

Thus we find through this work that the kinetics of phase separation in a polymer solution is different from that of a low-molecular system: sedimentation time scales become relevant, and therefore so do the shape and size of the sample. The role of the interface in initiating turbid periods also becomes much more prominent in this system compared with low-molecular mixtures.

In summary, some fundamental properties of the kinetics of phase separation in binary mixtures were studied.
While the first part of the work described the close interplay of the first phase separation with hydrodynamic instabilities, the second part investigated the nature and determining factors of the oscillations when the system was cooled deep into the two-phase region. Both cases show how the geometry of the cell can affect the kinetics of phase separation. This study leads to a further fundamental understanding of the factors contributing to the kinetics of phase separation, and of what can be controlled and tuned in practical cases.