925 results for Dimensional analysis
Abstract:
Poverty is a multi-dimensional socio-economic problem in most sub-Saharan African countries. The purpose of this study is to analyse the relationship between household size and poverty in low-income communities. The Northern Free State region in South Africa was selected as the study region. A sample of approximately 2 900 households was randomly selected within 12 poor communities in the region. A poverty line was calculated, and 74% of all households were found to live below it. Pearson's chi-square test indicated a positive relationship between household size and poverty in eleven of the twelve low-income communities. Households below the poverty line were larger than households above the poverty line. This finding contradicts findings from some other African countries, likely because South Africa has higher levels of modernisation and less access to land for subsistence farming. Effective provision of basic needs, community facilities and access to assets such as land could help poor households achieve a better quality of life. Poor households also need to be granted access to economic opportunities, while also receiving adult education on financial management and reproductive health.
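As an illustration of the kind of association test reported above, a minimal sketch using a hypothetical contingency table of household-size category versus poverty status; the counts and category cut-offs are assumptions for illustration, not the study's data:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table: rows are household-size categories,
# columns are poverty status (below / above the poverty line).
# The counts are illustrative only, not the study's data.
observed = np.array([
    [120, 180],   # 1-2 members
    [260, 140],   # 3-4 members
    [340,  60],   # 5+ members
])

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p_value:.4g}")
# A small p-value suggests household size and poverty status are not independent.
```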
Abstract:
Nanotechnology has revolutionised humanity's capability in building microscopic systems by manipulating materials on a molecular and atomic scale. Nanosystems are becoming increasingly smaller and chemically more complex, which increases the demand for microscopic characterisation techniques. Among others, transmission electron microscopy (TEM) is an indispensable tool that is increasingly used to study the structures of nanosystems down to the molecular and atomic scale. However, despite the effectiveness of this tool, it can only provide two-dimensional projection (shadow) images of the 3D structure, leaving the three-dimensional information hidden, which can lead to incomplete or erroneous characterisation. One very promising inspection method is electron tomography (ET), which is rapidly becoming an important tool to explore the 3D nano-world. ET provides (sub-)nanometre resolution in all three dimensions of the sample under investigation. However, the fidelity of the ET tomogram achieved by current ET reconstruction procedures remains a major challenge. This thesis addresses the assessment and advancement of electron tomographic methods to enable high-fidelity three-dimensional investigations. A quality assessment investigation was conducted to provide a quantitative analysis of the main established ET reconstruction algorithms and to study the influence of the experimental conditions on the quality of the reconstructed ET tomogram. Regularly shaped nanoparticles were used as ground truth for this study. It is concluded that the fidelity of the post-reconstruction quantitative analysis and segmentation is limited mainly by the fidelity of the reconstructed ET tomogram. This motivates the development of an improved tomographic reconstruction process. In this thesis, a novel ET method is proposed, named dictionary learning electron tomography (DLET). DLET is based on the recent mathematical theory of compressed sensing (CS), which exploits the sparsity of ET tomograms to enable accurate reconstruction from undersampled (S)TEM tilt series. DLET learns the sparsifying transform (dictionary) in an adaptive way and simultaneously reconstructs the tomogram from highly undersampled tilt series. In this method, sparsity is enforced on overlapping image patches, favouring local structures. Furthermore, the dictionary is adapted to the specific tomogram instance, thereby favouring better sparsity and consequently higher-quality reconstructions. The reconstruction algorithm is based on an alternating procedure that learns the sparsifying dictionary and employs it to remove artifacts and noise in one step, and then restores the tomogram data in the other step. Simulated and real ET experiments on several morphologies are performed with a variety of setups. Reconstruction results validate the method's efficiency in both noiseless and noisy cases and show that it yields improved reconstruction quality with fast convergence. The proposed method enables the recovery of high-fidelity information without the need to worry about which sparsifying transform to select or whether the images used strictly follow the pre-conditions of a certain transform (e.g. strictly piecewise constant for Total Variation minimisation). This also avoids artifacts that can be introduced by specific sparsifying transforms (e.g. the staircase artifacts that may result when using Total Variation minimisation).
Moreover, this thesis shows how reliable, elementally sensitive tomography using electron energy loss spectroscopy (EELS) is possible with the aid of both appropriate use of dual electron energy loss spectroscopy (DualEELS) and the DLET compressed sensing algorithm, making the best use of the limited data volume and signal-to-noise ratio inherent in core-loss EELS from nanoparticles of an industrially important material. Taken together, the results presented in this thesis demonstrate how high-fidelity ET reconstructions can be achieved using a compressed sensing approach.
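A minimal sketch of the kind of alternating scheme described above (a patch-dictionary denoising step followed by a simple back-projection data-fidelity step) for a single 2D slice. The patch size, iteration counts, relaxation factor and the use of scikit-learn's MiniBatchDictionaryLearning and scikit-image's radon/iradon operators are illustrative assumptions, not the thesis implementation:

```python
import numpy as np
from skimage.transform import radon, iradon
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

def toy_dictionary_tomo(sinogram, theta, n_outer=3, patch_size=(8, 8), relax=0.1):
    """Toy alternating reconstruction for one 2D slice.
    sinogram: array of shape (detector_bins, n_angles); theta: projection angles in degrees.
    Illustrative only; not the DLET implementation described in the thesis."""
    recon = iradon(sinogram, theta=theta)          # initial estimate (filtered back-projection)
    rng = np.random.default_rng(0)
    for _ in range(n_outer):
        # Step 1: learn a dictionary on overlapping patches and sparse-code them (denoising).
        patches = extract_patches_2d(recon, patch_size)
        flat = patches.reshape(len(patches), -1)
        means = flat.mean(axis=1, keepdims=True)
        dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0,
                                           transform_algorithm='omp',
                                           transform_n_nonzero_coefs=4,
                                           random_state=0)
        subset = rng.choice(len(flat), size=min(2000, len(flat)), replace=False)
        dico.fit(flat[subset] - means[subset])
        codes = dico.transform(flat - means)
        denoised = (codes @ dico.components_ + means).reshape(patches.shape)
        recon = reconstruct_from_patches_2d(denoised, recon.shape)
        # Step 2: data-fidelity update against the measured (undersampled) tilt series.
        residual = sinogram - radon(recon, theta=theta)
        recon = recon + relax * iradon(residual, theta=theta, filter_name=None)
    return recon
```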
Abstract:
We consider a two-dimensional Fermi-Pasta-Ulam (FPU) lattice with hexagonal symmetry. Using asymptotic methods based on a small-amplitude ansatz, at third order we obtain a reduction to a cubic nonlinear Schrödinger equation (NLS) for the breather envelope. However, this does not support stable soliton solutions, so we pursue a higher-order analysis yielding a generalised NLS, which includes known stabilising terms. We present numerical results which suggest that long-lived stationary and moving breathers are supported by the lattice. We find breather solutions which move in an arbitrary direction, an ellipticity criterion for the wavenumbers of the carrier wave, asymptotic estimates for the breather energy, and a minimum threshold energy below which breathers cannot be found. This energy threshold is maximised for stationary breathers and becomes vanishingly small near the boundary of the elliptic domain, where breathers attain a maximum speed. Several of the results obtained are similar to those obtained for the square FPU lattice (Butt & Wattis, J Phys A 39, 4955, 2006), though we find that the square and hexagonal lattices exhibit different properties with regard to the generation of harmonics and the isotropy of the generalised NLS equation.
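For reference, the cubic NLS equation mentioned above can be written in the following generic (2+1)-dimensional form for the slowly varying breather envelope; the coefficient names are placeholders rather than the lattice-specific values derived in the paper:

```latex
% Generic (2+1)-dimensional cubic NLS for the envelope F(X, Y, T); the dispersion
% coefficients D_1, D_2 and the nonlinearity coefficient B depend on the lattice and
% on the carrier wavenumber, and are placeholders rather than the paper's derived values.
i \frac{\partial F}{\partial T}
  + D_1 \frac{\partial^{2} F}{\partial X^{2}}
  + D_2 \frac{\partial^{2} F}{\partial Y^{2}}
  + B \, |F|^{2} F = 0
```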
Abstract:
We find approximations to travelling breather solutions of the one-dimensional Fermi-Pasta-Ulam (FPU) lattice. Both bright breather and dark breather solutions are found. We find that the existence of localised (bright) solutions depends upon the coefficients of the cubic and quartic terms of the potential energy, generalising an earlier inequality derived by James [CR Acad Sci Paris 332, 581, (2001)]. We use the method of multiple scales to reduce the equations of motion for the lattice to a nonlinear Schrödinger equation at leading order and hence construct an asymptotic form for the breather. We show that in the absence of a cubic potential energy term, the lattice supports combined breathing-kink waveforms. The amplitude of breathing-kinks can be arbitrarily small, as opposed to traditional monotone kinks, which have a nonzero minimum amplitude in such systems. We also present numerical simulations of the lattice, verifying the shape and velocity of the travelling waveforms, and confirming the long-lived nature of all such modes.
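For context, the equations of motion of a one-dimensional FPU chain with cubic and quartic interaction terms are conventionally written as below; the notation (displacements u_n, coefficients a and b) is the standard one and is given only as a reminder, not as the paper's exact formulation:

```latex
% 1D FPU chain: u_n(t) is the displacement of particle n, and the nearest-neighbour
% interaction potential contains cubic (a) and quartic (b) anharmonic terms.
V(\phi) = \tfrac{1}{2}\phi^{2} + \tfrac{a}{3}\phi^{3} + \tfrac{b}{4}\phi^{4},
\qquad
\ddot{u}_n = V'(u_{n+1} - u_n) - V'(u_n - u_{n-1})
```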
Abstract:
Using asymptotic methods, we investigate whether discrete breathers are supported by a two-dimensional Fermi-Pasta-Ulam lattice. A scalar (one-component) two-dimensional Fermi-Pasta-Ulam lattice is shown to model the charge stored within an electrical transmission lattice. A third-order multiple-scale analysis in the semi-discrete limit fails, since at this order the lattice equations reduce to the (2+1)-dimensional cubic nonlinear Schrödinger (NLS) equation, which does not support stable soliton solutions for the breather envelope. We therefore extend the analysis to higher order and find a generalised (2+1)-dimensional NLS equation which incorporates higher-order dispersive and nonlinear terms as perturbations. We find an ellipticity criterion for the wavenumbers of the carrier wave. Numerical simulations suggest that both stationary and moving breathers are supported by the system. Calculations of the energy show the expected threshold behaviour whereby the energy of breathers does not go to zero with the amplitude; we find that the energy threshold is maximised by stationary breathers and becomes arbitrarily small as the boundary of the domain of ellipticity is approached.
Abstract:
Before the rise of Multidimensional Protein Identification Technology (MudPIT), protein and peptide mixtures were resolved using traditional proteomic technologies such as gel-based two-dimensional electrophoresis, which separates proteins by isoelectric point and molecular weight. This technique was tedious and limited, since the characterization of single proteins required isolation of protein gel spots, their subsequent proteolysis, and analysis by matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) mass spectrometry.
Abstract:
Rhizobium freirei PRF 81 is employed in common bean commercial inoculants in Brazil owing to its outstanding efficiency in fixing nitrogen, its competitiveness and its tolerance to abiotic stresses. Among the environmental conditions faced by rhizobia in soils, acidity is perhaps the most commonly encountered, especially in Brazil. We therefore used proteomics-based approaches to study the responses of PRF 81 to a low-pH condition. R. freirei PRF 81 was grown in TY medium until exponential phase under two treatments: pH 6.8 and pH 4.8. Whole-cell proteins were extracted and separated by two-dimensional gel electrophoresis, using IPG strips with a pH range of 4-7 and 12% polyacrylamide gels. The experiment was performed in triplicate. Protein spots were detected in the high-resolution digitised gel images and analysed with Image Master 2D Platinum v 5.0 software. Relative volumes (%vol) were compared between the two conditions tested and statistically evaluated (p ≤ 0.05). Although R. freirei PRF 81 can grow in more acidic conditions, pH 4.8 was chosen because it did not significantly affect bacterial growth kinetics, a factor that could otherwise compromise the analysis. Using a narrow pH range, the gel profiles displayed better resolution and reproducibility than with a broader pH range. Spots were mostly concentrated between pH 5-7 and at molecular masses between 17-95 kDa. Of the six hundred well-defined spots analysed, one hundred and sixty-three presented a significant change in %vol, indicating that pH led to substantial changes in the proteome of R. freirei PRF 81. Of these, sixty-one were up-regulated and one hundred and two were down-regulated under the pH 4.8 condition. In addition, fourteen spots were detected only in the acid condition, while seven spots were detected exclusively at pH 6.8. Ninety-five differentially expressed spots and two detected exclusively at pH 4.8 were selected for MALDI-TOF identification. Together with the genome sequence and the proteome analysis of heat stress, we will search for molecular determinants of PRF 81 related to its capacity to adapt to stressful tropical conditions.
Abstract:
We explore the recently developed snapshot-based dynamic mode decomposition (DMD) technique, a matrix-free Arnoldi-type method, to predict 3D linear global flow instabilities. We apply the DMD technique to flows confined in an L-shaped cavity and compare the resulting modes to their counterparts obtained from classic, matrix-forming linear instability analysis (i.e. the BiGlobal approach) and direct numerical simulations. Results show that the DMD technique, which uses snapshots generated by a 3D non-linear incompressible discontinuous Galerkin Navier-Stokes solver, provides very similar results to classical linear instability analysis techniques. In addition, we compare DMD results obtained from non-linear and linearised Navier-Stokes solvers, showing that linearisation is not necessary (i.e. a base flow is not required) to obtain linear modes, as long as the analysis is restricted to the exponential growth regime, that is, the flow regime governed by the linearised Navier-Stokes equations. This shows the potential of this type of snapshot-based analysis for general-purpose CFD codes, without the need for modifications. Finally, this work shows that the DMD technique can provide three-dimensional direct and adjoint modes through snapshots provided by the linearised and adjoint linearised Navier-Stokes equations advanced in time. Subsequently, these modes are used to provide structural sensitivity maps and sensitivity to base flow modification information for 3D flows and complex geometries, at an affordable computational cost. The information provided by the sensitivity study is used to modify the L-shaped geometry and control the most unstable 3D mode.
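A minimal sketch of the standard snapshot-based DMD algorithm referred to above (reduced SVD of the snapshot matrix, low-rank operator, eigendecomposition); the variable names and truncation rank are illustrative assumptions, not the solver interface used in the paper:

```python
import numpy as np

def dmd(snapshots, rank=10):
    """Exact DMD from a matrix of snapshots (one flow-field snapshot per column),
    equally spaced in time. Returns DMD eigenvalues and modes. Illustrative sketch."""
    X, Y = snapshots[:, :-1], snapshots[:, 1:]        # time-shifted snapshot pairs
    U, s, Vh = np.linalg.svd(X, full_matrices=False)  # reduced SVD of X
    U, s, Vh = U[:, :rank], s[:rank], Vh[:rank, :]
    # Low-rank approximation of the linear operator A satisfying Y ~ A X.
    A_tilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(A_tilde)               # Ritz values: growth/frequency content
    modes = Y @ Vh.conj().T @ np.diag(1.0 / s) @ W    # exact DMD modes
    return eigvals, modes

# Growth rates and frequencies follow from np.log(eigvals) / dt for snapshot spacing dt.
```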
Abstract:
Different types of sentences express sentiment in very different ways. Traditional sentence-level sentiment classification research focuses on a one-technique-fits-all solution or centers on only one special type of sentence. In this paper, we propose a divide-and-conquer approach which first classifies sentences into different types and then performs sentiment analysis separately on sentences of each type. Specifically, we find that sentences tend to be more complex if they contain more sentiment targets. We therefore propose to first apply a neural-network-based sequence model to classify opinionated sentences into three types according to the number of targets appearing in a sentence. Each group of sentences is then fed into a one-dimensional convolutional neural network separately for sentiment classification. Our approach has been evaluated on four sentiment classification datasets and compared with a wide range of baselines. Experimental results show that: (1) sentence type classification can improve the performance of sentence-level sentiment analysis; (2) the proposed approach achieves state-of-the-art results on several benchmark datasets.
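A minimal sketch of a one-dimensional convolutional sentence classifier of the kind described above, written with PyTorch; the vocabulary size, embedding dimension, filter widths and number of classes are illustrative assumptions rather than the paper's configuration:

```python
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    """1D CNN over word embeddings for sentence-level sentiment classification."""
    def __init__(self, vocab_size=20000, embed_dim=128, num_filters=100,
                 kernel_sizes=(3, 4, 5), num_classes=3):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, num_filters, k) for k in kernel_sizes)
        self.fc = nn.Linear(num_filters * len(kernel_sizes), num_classes)

    def forward(self, token_ids):                      # token_ids: (batch, seq_len)
        x = self.embedding(token_ids).transpose(1, 2)  # -> (batch, embed_dim, seq_len)
        # Convolve with several filter widths and max-pool over time.
        pooled = [conv(x).relu().max(dim=2).values for conv in self.convs]
        return self.fc(torch.cat(pooled, dim=1))       # class logits

# Usage: logits = TextCNN()(torch.randint(0, 20000, (8, 40)))
```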
Abstract:
The wave energy industry is entering a new phase of pre-commercial and commercial deployments of full-scale devices, so a better understanding of seaway variability is critical to the successful operation of devices. The response of Wave Energy Converters (WECs) to incident waves governs their operational performance and, for many devices, is highly dependent on spectral shape due to their resonant properties. Various methods of wave measurement are presented, along with analysis techniques and empirical models. Resource assessments, device performance predictions and monitoring of operational devices will often be based on summary statistics and assume a standard spectral shape such as Pierson-Moskowitz or JONSWAP. Furthermore, these are typically derived from the closest available wave data, frequently separated from the site on scales of the order of 1 km. Therefore, deviation of seaways from standard spectral shapes and spatial inconsistency between the measurement point and the device site will cause inaccuracies in the performance assessment. This thesis categorises time- and frequency-domain analysis techniques that can be used to identify changes in a sea state from record to record. Device-specific issues such as dimensional scaling of sea states and power output are discussed, along with potential differences that arise between estimated and actual output power of a WEC due to spectral shape variation. This is investigated using measured data from various phases of device development.
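As an illustration of the standard spectral shapes mentioned above, a minimal sketch of a Pierson-Moskowitz spectrum parameterised by significant wave height and peak period; this particular parameterisation is one common form and is given as an assumption rather than the thesis's formulation:

```python
import numpy as np

def pierson_moskowitz(f, hs, tp):
    """One common Pierson-Moskowitz parameterisation in terms of significant wave
    height hs [m] and peak period tp [s]; f is frequency [Hz]. Illustrative only."""
    fp = 1.0 / tp                                   # peak frequency [Hz]
    f = np.asarray(f, dtype=float)
    return (5.0 / 16.0) * hs**2 * fp**4 * f**-5 * np.exp(-1.25 * (fp / f)**4)

# Quick consistency check: hs should be recovered as 4 * sqrt(m0),
# where m0 is the zeroth spectral moment (area under the spectrum).
f = np.linspace(0.02, 1.0, 500)
s = pierson_moskowitz(f, hs=2.0, tp=8.0)
m0 = np.sum(s) * (f[1] - f[0])
print(4 * np.sqrt(m0))   # approximately 2.0
```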
Abstract:
This thesis is concerned with change point analysis for time series, i.e. with the detection of structural breaks in time-ordered, random data. This long-standing research field has regained popularity over the last few years and, like statistical analysis in general, is still undergoing a transformation to high-dimensional problems. We focus on the fundamental »change in the mean« problem and provide extensions of the classical non-parametric Darling-Erdős-type cumulative sum (CUSUM) testing and estimation theory within high-dimensional Hilbert space settings. In the first part we contribute to (long run) principal component based testing methods for Hilbert space valued time series under a rather broad (abrupt, epidemic, gradual, multiple) change setting and under dependence. For the dependence structure we consider either traditional m-dependence assumptions or more recently developed m-approximability conditions which cover, e.g., MA, AR and ARCH models. We derive Gumbel and Brownian bridge type approximations of the distribution of the test statistic under the null hypothesis of no change and consistency conditions under the alternative. A new formulation of the test statistic using projections on subspaces allows us to simplify the standard proof techniques and to weaken common assumptions on the covariance structure. Furthermore, we propose to adjust the principal components by an implicit estimation of a (possible) change direction. This approach adds flexibility to projection based methods, weakens typical technical conditions and provides better consistency properties under the alternative. In the second part we contribute to estimation methods for common changes in the means of panels of Hilbert space valued time series. We analyze weighted CUSUM estimates within a recently proposed »high-dimensional low sample size (HDLSS)« framework, where the sample size is fixed but the number of panels increases. We derive sharp conditions on »pointwise asymptotic accuracy« or »uniform asymptotic accuracy« of those estimates in terms of the weighting function. In particular, we prove that a covariance-based correction of Darling-Erdős-type CUSUM estimates is required to guarantee uniform asymptotic accuracy under moderate dependence conditions within panels and that these conditions are fulfilled, e.g., by any MA(1) time series. As a counterexample we show that for AR(1) time series close to the non-stationary case, the dependence is too strong and uniform asymptotic accuracy cannot be ensured. Finally, we conduct simulations to demonstrate that our results are practically applicable and that our methodological suggestions are advantageous.
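A minimal sketch of the classical CUSUM statistic for a single change in the mean of a univariate sequence, i.e. the simplest instance of the testing problem described above; the normalisation and the toy data are illustrative assumptions, not the thesis's high-dimensional procedure:

```python
import numpy as np

def cusum_statistic(x):
    """Maximum of the standardised CUSUM process for a change in the mean of a
    univariate sample x_1, ..., x_n; illustrative univariate version only."""
    x = np.asarray(x, dtype=float)
    n = x.size
    k = np.arange(1, n)                              # candidate change points 1 .. n-1
    partial = np.cumsum(x)[:-1]                      # S_k = x_1 + ... + x_k
    sigma = x.std(ddof=1)                            # crude scale estimate
    # CUSUM process: |S_k - (k/n) S_n| / (sigma * sqrt(n))
    cusum = np.abs(partial - k / n * x.sum()) / (sigma * np.sqrt(n))
    return cusum.max(), int(k[np.argmax(cusum)])     # test statistic, estimated change point

# Usage on a toy sequence with a mean shift halfway through:
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(1.0, 1.0, 200)])
stat, tau = cusum_statistic(x)
print(stat, tau)   # large statistic, change point estimate near 200
```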
Abstract:
3D film’s explicit new depth of space arguably provides both an enhanced realistic quality to the image and a wealth of more acute visual and haptic sensations (a ‘montage of attractions’) to the increasingly involved spectator. But David Cronenberg’s related ironic remark that ‘cinema as such is from the outset a «special effect»’ should warn us against the geometrical naiveté of such assumptions, within a Cartesian ocularcentric tradition long since overcome by Merleau-Ponty’s embodiment of perception and Deleuze’s notion of the self-consistency of the artistic sensation and space. Indeed, ‘2D’ traditional cinema already provides the accomplished «fourth wall effect», enclosing the beholder behind his back within a space that no longer belongs to the screen (nor to ‘reality’) as such, and therefore is no longer ‘illusorily’ two-dimensional. This kind of totally absorbing, ‘dream-like’ space, metaphorical for both painting and cinema, is illustrated by the episode ‘Crows’ in Kurosawa’s Dreams. Such a space requires the actual effacement of the empirical status of spectator, screen and film as separate dimensions, and it is precisely the characteristic 3D unfolding of merely frontal space layers (and film events) out of the screen towards us (and sometimes above the heads of the spectators in front of us) that reinstalls at the core of the film-viewing phenomenon a regressive struggle with reality and with different degrees of realism, originally overcome by film since the Lumières’ seminal demonstration with Arrival of a Train at La Ciotat. Through an analysis of crucial aspects of Avatar and the recent Cave of Forgotten Dreams, both dealing with historical and ontological deepening processes of ‘going inside’, we shall try to show how the formally and technically advanced component of those 3D-depth films impairs, on the contrary, their apparent conceptual purpose on the level of content, and we will argue, drawing on Merleau-Ponty and Deleuze, that this technological mistake is due to a lack of recognition of the nature of perception and sensation in relation to space and human experience.
Abstract:
The present thesis focuses on the on-fault slip distribution of large earthquakes in the framework of tsunami hazard assessment and tsunami warning improvement. It is widely known that ruptures on seismic faults are strongly heterogeneous. In the case of tsunamigenic earthquakes, the slip heterogeneity strongly influences the spatial distribution of the largest tsunami effects along the nearest coastlines. Unfortunately, after an earthquake occurs, the so-called finite-fault models (FFMs) describing the coseismic on-fault slip pattern become available over time scales that are incompatible with early tsunami warning purposes, especially in the near field. Our work aims to characterize the slip heterogeneity in a fast but still suitable way. Using finite-fault models to build a starting dataset of seismic events, the characteristics of the fault planes are studied with respect to magnitude. The patterns of the slip distribution on the rupture plane, analysed with a cluster identification algorithm, reveal a preferential single-asperity representation that can be approximated by a two-dimensional Gaussian slip distribution (2D GD). The goodness of fit of the 2D GD model is compared to that of other distributions used in the literature, and its ability to represent the slip heterogeneity in the form of the main asperity is demonstrated. The magnitude dependence of the 2D GD parameters is investigated and turns out to be of primary importance from an early warning perspective. The Gaussian model is applied to the 16 September 2015 Illapel, Chile, earthquake and used to compute early tsunami predictions that compare satisfactorily with the available observations. The fast computation of the 2D GD and its suitability for representing the slip complexity of the seismic source make it a useful tool for tsunami early warning assessments, especially in the near field.
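A minimal sketch of a two-dimensional Gaussian slip distribution on a rectangular fault plane, of the general form referred to above; the parameterisation (peak slip, asperity centre, along-strike and along-dip widths) and the example values are illustrative assumptions, not the thesis's exact model:

```python
import numpy as np

def gaussian_slip(x, y, peak_slip, x0, y0, sigma_x, sigma_y):
    """2D Gaussian slip [m] at along-strike coordinate x and along-dip coordinate y [km];
    (x0, y0) is the asperity centre and (sigma_x, sigma_y) its widths. Illustrative only."""
    return peak_slip * np.exp(-0.5 * (((x - x0) / sigma_x) ** 2
                                      + ((y - y0) / sigma_y) ** 2))

# Usage: evaluate the slip pattern on a 100 km x 50 km fault-plane grid.
x, y = np.meshgrid(np.linspace(0, 100, 201), np.linspace(0, 50, 101))
slip = gaussian_slip(x, y, peak_slip=8.0, x0=40.0, y0=25.0, sigma_x=15.0, sigma_y=10.0)
print(slip.max(), slip.shape)
```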
Abstract:
Due to increased interest in miniaturization, great attention has been given in the last decade to micro heat-exchanging systems. A literature survey suggests that there is still a limited understanding of gas flows in micro heat-exchanging systems. The aim of the current thesis is to further the understanding of fluid flow and heat transfer phenomena inside such geometries when a compressible working fluid is utilized. A combined experimental and numerical approach has been utilized in order to overcome the lack of employable sensors for micro-dimensional channels. After conducting a detailed comparison between the various data reduction methodologies employed in the literature, the methodology best suited to gas microflow experimentalists is proposed. A transitional turbulence model is extensively validated against the experimental results for microtubes and microchannels under adiabatic wall conditions. Heat transfer analysis of single microtubes showed that when a compressible working fluid is used, Nusselt number results are in partial disagreement with conventional theory in the highly turbulent flow regime for microtubes having a hydraulic diameter of less than 250 microns. Experimental and numerical analysis of a prototype double-layer microchannel heat exchanger showed that compressibility is detrimental to the thermal performance. It has been found that compressibility effects for micro heat exchangers are significant when the average Mach number at the outlet of the microchannel is greater than 0.1, compared with the adiabatic limit of 0.3. Lastly, to avoid the staggering amount of computational power needed to simulate micro heat-exchanging systems with hundreds of microchannels, a reduced-order model based on a porous-medium approach has been developed that accounts for the compressibility of the gas inside the microchannels. Validation of the proposed model against experimental results for average thermal effectiveness and pressure loss showed an excellent match between the two.