966 results for Time complexity
Abstract:
This study investigates the effects of a short-term pedagogic intervention on the development of L2 fluency among learners studying English for Academic Purposes (EAP) at a university in the UK. It also examines the interaction between the development of fluency and that of complexity and accuracy. Using a pre-test/post-test design, data were collected over a period of four weeks from learners performing monologic tasks. While the Control Group (CG) focused on developing general speaking and listening skills, the Experimental Group (EG) received awareness-raising activities and fluency strategy training in addition to general speaking and listening practice, i.e. following the syllabus. The data, coded in terms of a range of measures of fluency, accuracy and complexity, were subjected to repeated-measures MANOVA, t-tests and correlations. The results indicate that after the intervention, while the CG achieved some fluency gains, the EG produced significantly more fluent language, demonstrating faster speech and articulation rates, longer runs and higher phonation-time ratios. The significant correlations obtained between measures of accuracy and learners’ pauses in the CG suggest that pausing opportunities may have been linked to accuracy. The findings of the study have significant implications for L2 pedagogy, highlighting the positive impact of instruction on the development of fluency.
Abstract:
Electrical methods of geophysical survey are known to produce results that are hard to predict at different times of the year and under differing weather conditions. This is a problem that can lead to misinterpretation of the archaeological features under investigation. The dynamic relationship between a ‘natural’ soil matrix and an archaeological feature is a complex one, which greatly affects the success of the feature’s detection when using active electrical methods of geophysical survey. This study monitored the gradual variation of measured resistivity over a selection of study areas. By targeting difficult-to-find, and often ‘missing’, electrical anomalies of known archaeological features, this study has increased the understanding of both the detection and interpretation capabilities of such geophysical surveys. A 16-month time-lapse study over four archaeological features was carried out to investigate the aforementioned detection problem across different soils and environments. In addition to the commonly used Twin-Probe earth resistance survey, electrical resistivity imaging (ERI) and quadrature electromagnetic induction (EMI) were also utilised to explore the problem. Statistical analyses have provided a novel interpretation, yielding new insights into how the detection of archaeological features is influenced by the relationship between the target feature and the surrounding ‘natural’ soils. The study has highlighted both the complexity and the previous misconceptions around the predictability of the electrical methods. The analysis has confirmed that each site presents an individual and nuanced situation, the variation clearly relating to the composition of the soils (particularly pore size) and the local weather history. The wide range of reasons behind survey success at each specific study site has been revealed.
The outcomes have shown that a simplistic model of seasonality is not universally applicable to the electrical detection of archaeological features. This has led to the development of a method for quantifying survey success, enabling a deeper understanding of the unique way in which each site is affected by the interaction of local environmental and geological conditions.
Abstract:
With the increase in e-commerce and the digitisation of design data and information, the construction sector has become reliant upon IT infrastructure and systems. The design and production process is more complex, more interconnected, and reliant upon greater information mobility, with seamless exchange of data and information in real time. Construction small and medium-sized enterprises (CSMEs), in particular the speciality contractors, can effectively utilise cost-effective collaboration-enabling technologies, such as cloud computing, to help in the effective transfer of information and data to improve productivity. The system dynamics (SD) approach offers a perspective and tools to enable a better understanding of the dynamics of complex systems. This research focuses upon the SD methodology as a modelling and analysis tool in order to understand and identify the key drivers in the absorption of cloud computing by CSMEs. The aim of this paper is to determine how the use of SD can improve the management of information flow through collaborative technologies, leading to improved productivity. The data supporting the use of SD were obtained through a pilot study consisting of questionnaires and interviews with five CSMEs in the UK house-building sector.
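The stock-and-flow logic behind an SD model of technology absorption can be sketched in a few lines. The sketch below is illustrative only: a Bass-style diffusion of cloud adoption among a hypothetical population of firms, with made-up parameters, not the paper's actual model.

```python
# Minimal system-dynamics sketch: a single stock ("CSMEs using cloud
# collaboration tools") filled by an adoption flow, Euler-integrated.
# All parameter values are illustrative, not taken from the paper.

def simulate_adoption(total_firms=100, p=0.03, q=0.4, years=10, dt=0.25):
    """Bass-style diffusion: adoption flow = (p + q * adopters/total) * potential."""
    adopters = 0.0
    trajectory = [adopters]
    for _ in range(int(years / dt)):
        potential = total_firms - adopters
        flow = (p + q * adopters / total_firms) * potential  # adoption flow
        adopters += flow * dt                                # stock accumulates flow
        trajectory.append(adopters)
    return trajectory

traj = simulate_adoption()
```

A full SD model would add feedback loops (e.g. productivity gains reinforcing further investment), but the same accumulate-the-flows structure applies.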
Abstract:
The goal of this work is the efficient solution of the heat equation with Dirichlet or Neumann boundary conditions using the Boundary Element Method (BEM). Efficiently solving the heat equation is useful, as it is a simple model problem for other types of parabolic problems. For complicated spatial domains, as often found in engineering, BEM can be beneficial since only the boundary of the domain has to be discretised. This makes BEM easier to apply than domain methods such as finite elements and finite differences, which are conventionally combined with time-stepping schemes to solve this problem. The contribution of this work is to further decrease the complexity of solving the heat equation, leading both to speed gains (in CPU time) and to smaller memory requirements for solving the same problem. To do this, we combine the complexity gains of boundary reduction by integral equation formulations with a discretisation using wavelet bases. This reduces the total work to O(h
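For contrast, the conventional time-stepping domain approach the abstract argues against can be sketched with explicit finite differences. This is purely illustrative of that baseline, not the paper's BEM/wavelet method, and the grid and step sizes are arbitrary.

```python
import numpy as np

# Baseline mentioned above: explicit finite differences in space with
# forward-Euler time stepping for u_t = u_xx on [0, 1], with Dirichlet
# boundary conditions u(0) = u(1) = 0. Note the whole interior must be
# discretised, which is exactly what BEM avoids.

def heat_fd(n=50, steps=200, dt=1e-4):
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    u = np.sin(np.pi * x)          # initial condition
    for _ in range(steps):
        lap = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / h**2
        u = u + dt * lap           # forward Euler step (stable: dt/h^2 < 0.5)
        u[0] = u[-1] = 0.0         # enforce Dirichlet boundary
    return x, u

x, u = heat_fd()
```

The interior unknowns scale with the volume of the domain, whereas a boundary-only discretisation scales with its surface, which is the source of the complexity gains claimed above.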
Abstract:
This paper applies the concepts and methods of complex networks to the development of models and simulations of master-slave distributed real-time systems, introducing an upper bound on the allowable delivery time of the packets carrying computation results. Two representative interconnection models are taken into account: uniformly random and scale-free (Barabasi-Albert), including the presence of background packet traffic. The results identify the uniformly random interconnectivity scheme as substantially more efficient than its scale-free counterpart. Moreover, increased latency tolerance of the application provides no help under congestion.
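The two interconnection models can be sketched with their standard generative procedures: preferential attachment for Barabasi-Albert, uniform edge sampling for the random case. Sizes and parameters below are illustrative and unrelated to the paper's simulations.

```python
import random

# Sketch of the two interconnection models compared above.
# Parameters are illustrative, not the paper's.

def barabasi_albert(n, m, rng):
    """Grow a scale-free graph: each new node attaches to m existing
    nodes chosen with probability proportional to their degree."""
    targets, repeated, edges = list(range(m)), [], []
    for new in range(m, n):
        for t in set(targets):
            edges.append((new, t))
        repeated.extend(targets)
        repeated.extend([new] * m)          # degree-weighted node pool
        targets = [rng.choice(repeated) for _ in range(m)]
    return edges

def uniform_random(n, n_edges, rng):
    """Uniformly random graph with a fixed number of edges."""
    edges = set()
    while len(edges) < n_edges:
        u, v = rng.randrange(n), rng.randrange(n)
        if u != v:
            edges.add((min(u, v), max(u, v)))
    return list(edges)

def max_degree(n, edges):
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return max(deg)

rng = random.Random(1)
n = 500
ba_edges = barabasi_albert(n, 3, rng)
er_edges = uniform_random(n, len(ba_edges), rng)
```

The scale-free graph concentrates connectivity in a few hubs (much higher maximum degree for the same edge count), one plausible mechanism behind the congestion sensitivity reported above.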
Abstract:
Climate model projections show that climate change will further increase the risk of flooding in many regions of the world. There is a need for climate adaptation, but building new infrastructure or additional retention basins has its limits, especially in densely populated areas where open spaces are limited. Another solution is the more efficient use of existing infrastructure. This research investigates a method for real-time flood control by means of existing gated weirs and retention basins. The method was tested for the specific study area of the Demer basin in Belgium but is generally applicable. Today, retention basins along the Demer River are controlled by means of adjustable gated weirs based on fixed logic rules. However, because of the high complexity of the system, these rules achieve only suboptimal results. By making use of precipitation forecasts and combined hydrological-hydraulic river models, the state of the river network can be predicted. To speed up the calculations, a conceptual river model was used. The conceptual model was combined with a Model Predictive Control (MPC) algorithm and a Genetic Algorithm (GA). The MPC algorithm predicts the state of the river network depending on the positions of the adjustable weirs in the basin. The GA generates these positions in a semi-random way. Cost functions based on water levels were introduced to evaluate the efficiency of each generation in terms of flood damage minimization. In the final phase of this research, the influence of the most important MPC and GA parameters was investigated by means of a sensitivity study. The results show that the MPC-GA algorithm reduces the total flood volume during the historical event of September 1998 by 46% in comparison with the current regulation. Based on the MPC-GA results, some recommendations could be formulated to improve the logic rules.
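The GA component of such a scheme can be sketched generically. The cost function below is a stand-in (squared distance to a known optimum) for the water-level-based flood-damage cost, and all names and parameters are illustrative, not the paper's.

```python
import random

# Minimal genetic-algorithm sketch in the spirit of the MPC-GA scheme
# above: candidate "weir position" vectors are scored by a cost
# function, the best half survive, and children are produced by
# averaging crossover plus Gaussian mutation.

def cost(positions, target):
    return sum((p - t) ** 2 for p, t in zip(positions, target))

def genetic_search(target, pop_size=40, n_gen=60, seed=0):
    rng = random.Random(seed)
    dim = len(target)
    pop = [[rng.uniform(0, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(n_gen):
        pop.sort(key=lambda ind: cost(ind, target))
        survivors = pop[: pop_size // 2]                      # selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]       # crossover
            child[rng.randrange(dim)] += rng.gauss(0, 0.05)   # mutation
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda ind: cost(ind, target))

best = genetic_search([0.3, 0.7, 0.5])
```

In the scheme described above, evaluating each candidate means running the conceptual river model forward under the MPC horizon, which is why a fast conceptual model is essential.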
Abstract:
The minority game (MG) model introduced recently provides promising insights into the understanding of the evolution of prices, indices and rates in the financial markets. In this paper we perform a time series analysis of the model employing tools from statistics, dynamical systems theory and stochastic processes. Using benchmark systems and a financial index for comparison, several conclusions are obtained about the generating mechanism for this kind of evolution. The motion is deterministic, driven by occasional random external perturbations. When the interval between two successive perturbations is sufficiently large, low-dimensional chaos can be found in this regime. However, the full motion of the MG model is found to be similar to that of the first differences of the S&P 500 index: stochastic, nonlinear and (unit root) stationary. (C) 2002 Elsevier B.V. All rights reserved.
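A minimal minority-game simulation, in the standard setup of agents with fixed lookup-table strategies over a short outcome history, can be sketched as follows. This is illustrative of the model family, not necessarily the exact variant analysed in the paper, and all sizes are arbitrary.

```python
import random

# Minimal minority game: each round, N agents choose side 0 or 1 using
# their best-scoring strategy (a lookup table over the last M outcomes);
# agents on the minority side win, and the winning strategies gain a point.

def minority_game(n_agents=101, memory=3, n_strategies=2, rounds=200, seed=7):
    rng = random.Random(seed)
    n_hist = 2 ** memory
    # each strategy maps every possible history (encoded as an int) to an action
    agents = [[[rng.randrange(2) for _ in range(n_hist)]
               for _ in range(n_strategies)] for _ in range(n_agents)]
    scores = [[0] * n_strategies for _ in range(n_agents)]
    history = rng.randrange(n_hist)
    attendance = []                       # how many agents chose side 1
    for _ in range(rounds):
        choices = []
        for a in range(n_agents):
            best = max(range(n_strategies), key=lambda s: scores[a][s])
            choices.append(agents[a][best][history])
        ones = sum(choices)
        minority = 1 if ones < n_agents / 2 else 0
        for a in range(n_agents):
            for s in range(n_strategies):
                if agents[a][s][history] == minority:
                    scores[a][s] += 1
        history = ((history << 1) | minority) % n_hist  # keep last M outcomes
        attendance.append(ones)
    return attendance

att = minority_game()
```

The attendance series plays the role of the price-like signal whose first differences are compared against the index data in the analysis described above.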
Abstract:
Drosophila serido is considered to be a superspecies consisting of two species: D. serido, from Brazil, and D. koepferae, from Argentina and Bolivia. However, this probably does not express the entire evolutionary complexity of its populations. Isofemale lines A95F3 (from Brazil) and B20D2 (from Argentina), at present representing the first and second species, respectively, were analyzed for fertility and fecundity in pair-mating intracrosses and intercrosses, as well as for development time, banding patterns and asynapsis of polytene chromosomes in the isofemale lines and their hybrids. Although variations in experimental conditions resulted in some variability in the results, in general A95F3 fertility and fecundity were lower than in B20D2. Intercrosses of A95F3 females and B20D2 males showed lower fertility and fecundity than the reciprocal crosses, following more closely the characteristics of the mother strains. This is in contrast to the results obtained by Fontdevilla et al. (An. Entomol. Soc. Amer. 81: 380-385, 1988) and may be due to the different geographic origin of the D. serido strains they used in crosses to B20D2. This difference and others cited in the literature relative to aedeagus morphology, karyotype characteristics, inversion polymorphisms and reproductive isolation strongly indicate that A95F3 and the D. serido from the State of Bahia, Brazil are not a single evolutionary entity, reinforcing the idea that the superspecies D. serido is more complex than is known today. The reproductive isolation mechanisms found operating between A95F3 and B20D2 were prezygotic and postzygotic, the latter including mortality at the larval stage in both directions of crosses and sterility of male hybrids in intercrosses involving B20D2 females and A95F3 males.
The two isofemale lines differed in egg-adult development time, which was also differently affected by culture medium composition. A95F3 and B20D2 also showed differences in the banding patterns of the proximal regions of polytene chromosomes 2, 3 and X, a fixed inversion in chromosome 3 (here named 3t), apparently not described previously, and a high degree of asynapsis in hybrids. These observations, especially those related to reproductive isolation and chromosomal differentiation (including the karyotype, previously described, and the differentiation of banding patterns, described in this paper), as well as the extensive asynapsis observed in hybrids, reinforce the distinct species status of the A95F3 and B20D2 isofemale lines.
Abstract:
The structural complexity of the nitrogen source strongly affects both biomass and ethanol production by industrial strains of Saccharomyces cerevisiae during fermentation in media containing glucose or maltose, supplemented with a nitrogen source varying from a single ammonium salt (ammonium sulfate) to free amino acids (casamino acids) and peptides (peptone). Diauxie was observed at low glucose and maltose concentrations independent of nitrogen supplementation. At high sugar concentrations diauxie was not easily observed, and growth and ethanol production depended on the nature of the nitrogen source. This differed among baking and brewing (ale and lager) yeast strains. Sugar concentration had a strong effect on the shift from oxido-fermentative to oxidative metabolism. At low sugar concentrations, biomass production was similar under both peptone and casamino acid supplementation. Under casamino acid supplementation, the time for the metabolic shift increased with the glucose concentration, together with a decrease in biomass production. This drastic effect on glucose fermentation resulted in the extinction of the second growth phase, probably due to the loss of cell viability. Ammonium salts always induced poor yeast performance. In general, supplementation with a nitrogen source in the peptide form (peptone) was more favorable for yeast metabolism, inducing higher biomass and ethanol production and preserving yeast viability, in both glucose and maltose media, for baking and brewing ale and lager yeast strains. Determination of amino acid utilization showed that most free and peptide amino acids present in peptone and casamino acids were utilized by the yeast, suggesting that the results described in this work were not due to a nutritional status induced by nitrogen limitation.
Abstract:
In the last decade, distributed generation, with its various technologies, has increased its presence in the energy mix, presenting distribution networks with the challenge of evaluating technical impacts, which requires a wide range of network operational effects to be qualified and quantified. The inherent time-varying behavior of demand and distributed generation (particularly when renewable sources are used) needs to be taken into account, since considering only critical scenarios of loading and generation may mask the impacts. One means of dealing with such complexity is the use of indices that indicate the benefit or otherwise of connections at a given location and for a given horizon. This paper presents a multiobjective performance index for distribution networks with time-varying distributed generation that considers a number of technical issues. The approach has been applied to a medium voltage distribution network considering hourly demand and wind speeds. Results show that this proposal responds better to the natural behavior of loads and generation than considering a single operation scenario.
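The index idea can be sketched as a weighted combination of normalised per-issue impacts, averaged over a time-varying horizon rather than a single scenario. The issue names, weights, and values below are illustrative, not the paper's.

```python
# Sketch of a multiobjective performance index: per-issue impact indices
# (already normalised to [0, 1], lower = better) are combined with weights,
# then averaged over the hourly demand/generation horizon.
# All names and numbers here are made up for illustration.

def performance_index(impacts, weights):
    """Weighted sum of normalised per-issue impacts for one time step."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[k] * impacts[k] for k in impacts)

hourly = [
    {"losses": 0.2, "voltage": 0.1, "thermal": 0.3},   # hour 1
    {"losses": 0.5, "voltage": 0.4, "thermal": 0.2},   # hour 2
]
w = {"losses": 0.5, "voltage": 0.3, "thermal": 0.2}

# time-varying evaluation: average the index over the horizon
imd = sum(performance_index(h, w) for h in hourly) / len(hourly)
```

Averaging over the horizon is what lets the index reflect coincidence (or not) of wind output with demand, which a single critical snapshot would miss.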
A new method for real time computation of power quality indices based on instantaneous space phasors
Abstract:
One of the important issues in using renewable energy is the integration of dispersed generation in distribution networks. Previous experience has shown that integrating dispersed generation can improve the voltage profile in the network and decrease losses, but it can also create safety and technical problems. This work reports the application of instantaneous space phasors and instantaneous complex power in observing the performance of distribution networks with dispersed generators in steady state. The new IEEE apparent power definition, the so-called Buccholz-Goodhue apparent power, as well as a newly proposed power quality (oscillation) index for three-phase distribution systems with unbalanced loads and dispersed generators, are applied. Results obtained from several case studies using the IEEE 34-node test network are presented and discussed.
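The instantaneous space phasor itself has a standard definition via the complex operator a = exp(j2π/3). The snippet below sketches it for a balanced snapshot, where the phasor magnitude is constant; unbalance makes the magnitude oscillate, which is what an oscillation index of the kind described above would quantify. Values are illustrative.

```python
import cmath
import math

# Instantaneous space phasor: combine the three phase quantities with
# the complex operator a = exp(j*2*pi/3). For a balanced set the phasor
# rotates with constant magnitude; unbalance shows up as magnitude
# oscillation at twice the fundamental frequency.

A = cmath.exp(1j * 2 * math.pi / 3)

def space_phasor(va, vb, vc):
    return (2.0 / 3.0) * (va + A * vb + A * A * vc)

# balanced three-phase snapshot at electrical angle wt (illustrative)
wt = 0.4
va = math.cos(wt)
vb = math.cos(wt - 2 * math.pi / 3)
vc = math.cos(wt + 2 * math.pi / 3)
s = space_phasor(va, vb, vc)   # magnitude 1 for this balanced set
```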
Abstract:
An algorithm for real-time onboard orbit determination applying the Extended Kalman Filter (EKF) method is developed. Aiming at a very simple yet fairly accurate orbit determination, an analysis is performed to find an adequate trade-off between modeling complexity and accuracy. The minimum set of estimated states needed to reach an accuracy of tens of meters is found to comprise at least the position, velocity, and user clock offset components. The dynamical model is assessed through several tests, covering the force model, the numerical integration scheme and step size, and simplified variational equations. The measurement model includes only effects relevant to the order of meters. The EKF is chosen as the simplest real-time estimation algorithm, with adequate tuning of its parameters. With the developed procedure, the position and velocity errors over a day vary from 15 to 20 m and from 0.014 to 0.018 m/s, with standard deviations from 6 to 10 m and from 0.006 to 0.008 m/s, respectively, with the SA either on or off. The results, as well as an analysis of the final adopted models, are presented in this work. © 2013 Ana Paula Marins Chiaradia et al.
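The predict/update cycle at the core of such an estimator can be sketched on a toy linear position-velocity state. The real problem is nonlinear (the EKF linearises the orbit dynamics and GPS measurement model at each step), and the matrices and noise levels below are illustrative, not the paper's.

```python
import numpy as np

# Minimal Kalman predict/update cycle, the skeleton of the EKF described
# above, reduced to a 1-D constant-velocity state with position-only
# measurements so the structure is visible. All values are illustrative.

def kf_step(x, P, z, F, Q, H, R):
    # predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # update with measurement z
    y = z - H @ x_pred                       # innovation
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])        # position-velocity dynamics
Q = 1e-4 * np.eye(2)                         # process noise
H = np.array([[1.0, 0.0]])                   # measure position only
R = np.array([[0.25]])                       # measurement noise

x = np.array([0.0, 0.0])
P = np.eye(2)
for z in [1.0, 2.1, 2.9, 4.05, 5.0]:         # noisy unit-slope track
    x, P = kf_step(x, P, np.array([z]), F, Q, H, R)
```

In the orbit case, F comes from integrating the variational equations of the force model and H from the GPS pseudorange geometry; the tuning of Q and R is the "adequate tuning of its parameters" mentioned above.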