835 results for Multi-layered analysis
Abstract:
The efficiency of track foundation material gradually decreases due to insufficient lateral confinement, ballast fouling, and loss of shear strength of the subsurface soil under cyclic loading. This paper presents a characterization of the rail track subsurface to identify ballast fouling and the shear wave velocity of subsurface layers using seismic surveys. The multi-channel analysis of surface waves (MASW) seismic method was carried out on a model track and a field track to determine the shear wave velocity of clean and fouled ballast and of the track subsurface. The shear wave velocity (SWV) of fouled ballast increases with increasing fouling percentage, reaches a maximum value, and then decreases. This behavior resembles the typical compaction curve of soil and is used to define the optimum and critical fouling percentages (OFP and CFP). A critical fouling percentage of 15% is observed for coal-fouled ballast and 25% for clayey-sand-fouled ballast; coal-fouled ballast thus reaches the OFP and CFP before clayey-sand-fouled ballast. Fouling reduces the voids in ballast and thereby decreases drainage. A combined plot of permeability and SWV against fouling percentage shows that beyond the critical fouling point the drainage condition of fouled ballast falls below the acceptable limit. Shear wave velocities were also measured at selected locations on the Wollongong field track using a similar seismic survey; in-situ samples were collected and their degrees of fouling measured. Field SWV values are higher than the model-track SWV values for the same degree of fouling, which might be due to sleeper confinement. This article also reviews the ballast gradations widely followed in different countries and compares the Indian ballast gradation with international gradation standards. Indian ballast contains coarser particle sizes than that of other countries; the upper limit of the Indian gradation curve matches the lower limit of the American and Australian ballast gradation curves. The gradation followed by Indian Railways is poorly graded and therefore more favorable for drainage. Indian ballast engineering needs extensive research to improve present track conditions.
Abstract:
Graphene's nano-dimensional nature and excellent electron transfer properties underlie its electrocatalytic behavior towards certain substances. In this light, we have used graphene in the electrochemical detection of bisphenol A. Graphene sheets were produced via a soft chemistry route involving graphite oxidation and chemical reduction. X-ray diffraction (XRD), Fourier transform infrared (FT-IR) and Raman spectroscopy were used to characterize the as-synthesized graphene. XRD patterns showed the graphene to be amorphous compared with pristine graphite. FT-IR showed that the graphene retains OH and COOH groups due to incomplete reduction, and the low intensity of the Raman 2D peak revealed that multi-layered graphene was produced. A glassy carbon electrode was modified with graphene by a simple drop-and-dry method. Cyclic voltammetry with potassium ferricyanide as a redox probe was used to study the electrochemical properties of the graphene-modified glassy carbon electrode, which exhibited more facile electron kinetics and an enhanced current of about 75% compared to the unmodified glassy carbon electrode. The modified electrode was then used for the detection of bisphenol A. Under optimum conditions, the oxidation peak current of bisphenol A varied linearly with concentration over a wide range of 5 × 10^-8 mol L^-1 to 1 × 10^-6 mol L^-1, and the detection limit of the method was as low as 4.689 × 10^-8 M. The method was also employed to determine bisphenol A in a real sample.
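The linear range and detection limit quoted above follow the standard calibration-curve treatment. A minimal sketch of that computation, using synthetic illustrative data (the concentrations span the reported range, but the currents, slope and noise level are assumptions, as is the common LOD = 3σ/slope convention):

```python
import numpy as np

# Synthetic calibration data (illustrative only, not the paper's measurements):
# concentrations (mol/L) across the reported linear range, and peak currents (uA)
# generated from an assumed linear response plus noise.
conc = np.array([5e-8, 1e-7, 2.5e-7, 5e-7, 7.5e-7, 1e-6])
current = 12.0 + 3.0e7 * conc + np.random.default_rng(1).normal(0, 0.05, conc.size)

# Least-squares fit of the calibration line: i = slope * c + intercept.
slope, intercept = np.polyfit(conc, current, 1)

# Detection limit via the 3-sigma convention, LOD = 3 * s / slope, here using the
# residual standard deviation of the fit as a stand-in for the blank's sigma.
residuals = current - (slope * conc + intercept)
s_blank = residuals.std(ddof=2)
lod = 3 * s_blank / slope
```

In practice the sigma would come from replicate blank measurements rather than fit residuals, but the structure of the calculation is the same.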
Abstract:
The tonic is a fundamental concept in Indian art music. It is the base pitch that an artist chooses in order to construct melodies during a rāga rendition, and all accompanying instruments are tuned using the tonic pitch. Consequently, tonic identification is a fundamental task for most computational analyses of Indian art music, such as intonation analysis, melodic motif analysis and rāga recognition. In this paper we review existing approaches for tonic identification in Indian art music and evaluate them on six diverse datasets for a thorough comparison and analysis. We study the performance of each method in different contexts, such as the presence or absence of additional metadata, the quality and duration of the audio data, the music tradition (Hindustani/Carnatic) and the gender of the singer (male/female). We show that the approaches that combine multi-pitch analysis with machine learning provide the best performance in most cases (90% identification accuracy on average) and are robust across the aforementioned contexts compared with the approaches based on expert knowledge. In addition, we show that the performance of the latter can be improved when additional metadata is available to further constrain the problem. Finally, we present a detailed error analysis of each method, providing further insight into the advantages and limitations of each.
Abstract:
This article describes a new performance-based approach for evaluating the return period of seismic soil liquefaction based on standard penetration test (SPT) and cone penetration test (CPT) data. Conventional liquefaction evaluation methods consider a single acceleration level and magnitude, and thus fail to take into account the uncertainty in earthquake loading. Probabilistic seismic hazard analysis clearly shows that a particular acceleration value is contributed by different magnitudes with varying probability. In the new method presented in this article, the entire range of ground shaking and the entire range of earthquake magnitude are considered, and the liquefaction return period is evaluated from the SPT and CPT data. The article explains the performance-based methodology for liquefaction analysis, starting from probabilistic seismic hazard analysis (PSHA) for the evaluation of seismic hazard and proceeding to the performance-based evaluation of the liquefaction return period. A case study has been carried out for Bangalore, India, based on SPT data and converted CPT values, and the results obtained from the two methods are compared. In an area of 220 km² in Bangalore city, the site class was assessed based on a large number of borehole data and 58 multi-channel analysis of surface waves (MASW) surveys. Using the site class and the peak acceleration at rock depth from PSHA, the peak ground acceleration at the ground surface was estimated using a probabilistic approach. The liquefaction analysis was based on 450 borehole records obtained in the study area. The CPT results match well with those obtained from the corresponding analysis of the SPT data.
Abstract:
We performed Gaussian network model (GNM) based normal mode analysis of the three-dimensional structures of multiple active and inactive forms of protein kinases. Across 14 different kinases, more residues (1,095) show higher structural fluctuations in inactive states than in active states (525), suggesting that, in general, the mobility of inactive states is higher than that of active states. This statistically significant difference is consistent with the higher crystallographic B-factors and conformational energies of inactive states, suggesting lower stability of the inactive forms. Only a small number of inactive conformations with the DFG motif in the "in" state were found to have fluctuation magnitudes comparable to the active conformation. Our study therefore reports, for the first time, intrinsically higher structural fluctuations for almost all inactive conformations compared with the active forms. Regions with higher fluctuations in the inactive states are often localized to the αC-helix, αG-helix and activation loop, which are involved in regulation and/or in structural transitions between active and inactive states. Further analysis of 476 kinase structures involved in interactions with another domain or protein showed that many of the regions with higher inactive-state fluctuation correspond to contact interfaces. We also performed extensive GNM analysis of (i) the insulin receptor kinase bound to another protein and (ii) holo and apo forms of active and inactive conformations, followed by multi-factor analysis of variance. We conclude that binding of small molecules or other domains/proteins reduces the extent of fluctuation, irrespective of the active or inactive form. Finally, we show that the computed fluctuations serve as a useful input for predicting the functional state of a kinase.
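The residue fluctuations discussed above come from GNM normal mode analysis. A minimal sketch of that computation, assuming synthetic Cα coordinates and a typical 7 Å contact cutoff (both are illustrative assumptions, not the paper's structures): mean-square fluctuations are proportional to the diagonal of the pseudo-inverse of the Kirchhoff (contact) matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
coords = rng.uniform(0, 30, size=(50, 3))  # synthetic C-alpha coordinates (angstroms)
cutoff = 7.0                               # assumed interaction cutoff (angstroms)

# Kirchhoff (connectivity) matrix: -1 for residue pairs within the cutoff,
# diagonal equal to each residue's contact count (a graph Laplacian).
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
gamma = -(d < cutoff).astype(float)
np.fill_diagonal(gamma, 0.0)
np.fill_diagonal(gamma, -gamma.sum(axis=1))

# Normal modes from the eigendecomposition; zero (rigid-body) modes are dropped.
w, v = np.linalg.eigh(gamma)
nonzero = w > 1e-8

# Mean-square fluctuations ~ diagonal of the pseudo-inverse of the Kirchhoff matrix.
msf = (v[:, nonzero] ** 2 / w[nonzero]).sum(axis=1)
```

On real structures, residues with large `msf` correspond to the flexible regions (loops, helices) whose active/inactive differences the study quantifies.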
Abstract:
Multidrug resistance is a major therapeutic challenge in conventional chemotherapy. Nanocarriers are beneficial for the transport of chemotherapeutics because of their ability to bypass P-gp efflux in cancers. Most P-gp inhibitors in phase II clinical trials are failing, and hence there is a need to develop a suitable carrier that addresses P-gp efflux in cancer therapy. Herein, we prepared novel protamine and carboxymethyl cellulose polyelectrolyte multi-layered nanocapsules modified with Fe3O4 nanoparticles for the delivery of doxorubicin against highly drug-resistant HeLa cells. The experimental results revealed that improved cellular uptake, an enhanced drug intensity profile and a greater percentage of apoptotic cells were attained when doxorubicin-loaded magnetic nanocapsules were used in the presence of an external magnetic field. We therefore conclude that this magnetic-field-assisted nanocapsule system can be used to deliver chemotherapeutics with potential therapeutic efficacy at minimal dose in multidrug-resistant cancers. From the Clinical Editor: Many cancer drugs fail when cancer cells become drug resistant. Indeed, multidrug resistance (MDR) is a major therapeutic challenge. One way that tumor cells attain MDR is by overexpression of molecular pumps comprising P-glycoprotein (P-gp) and multidrug resistance proteins (MRP), which can expel chemotherapeutic drugs from the cells. In this study, the authors prepared novel protamine and carboxymethyl cellulose polyelectrolyte multi-layered nanocapsules modified with Fe3O4 nanoparticles for the delivery of doxorubicin. The results show better drug delivery and efficacy even against MDR tumor cells. (C) 2015 Elsevier Inc. All rights reserved.
Abstract:
A one-mode analysis method for the pull-in instability of micro-structures under electrostatic loading is presented. A Taylor series is used to expand the electrostatic loading term, which makes an analytical solution available. The one-mode analysis combines the Galerkin method with Cardano's solution of the cubic equation, and offers a direct way to compute the pull-in voltage and displacement. In the low axial loading range, it differs little from the established multi-mode analysis in predicting the pull-in voltages for the three structures studied here (cantilever beams, clamped-clamped beams, and plates with four simply supported edges). For numerical multi-mode analysis, we also show that exploiting structural symmetry to select the symmetric modes can greatly reduce both the computational effort and the numerical fluctuation.
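The way a Cardano-type cubic solution produces a closed-form pull-in condition can be illustrated with the simpler lumped one-DOF parallel-plate model (a textbook analogue, not the paper's one-mode Galerkin formulation; all parameter values below are assumptions):

```python
import numpy as np

# Illustrative lumped 1-DOF parallel-plate pull-in model (textbook analogue).
k = 1.0            # effective spring stiffness (N/m), assumed
g = 2e-6           # initial electrode gap (m), assumed
eps0 = 8.854e-12   # vacuum permittivity (F/m)
A = 1e-8           # electrode area (m^2), assumed

# Static force balance k*x*(g - x)**2 = eps0*A*V**2/2 is a cubic in x.
# The stable equilibrium disappears at x = g/3, giving the closed-form pull-in voltage:
V_pi = np.sqrt(8.0 * k * g**3 / (27.0 * eps0 * A))

# Just below pull-in, the cubic still has a stable root below g/3.
V = 0.99 * V_pi
C = eps0 * A * V**2 / 2.0
roots = np.roots([k, -2.0 * k * g, k * g**2, -C])
stable = min(r.real for r in roots if abs(r.imag) < 1e-9 and 0 < r.real < g)
```

In the paper's one-mode Galerkin treatment, a cubic of the same character arises in the modal amplitude after the Taylor expansion of the electrostatic term, and Cardano's formula plays the role of the closed form above.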
Abstract:
When an atomic force microscope (AFM) in tapping mode is in intermittent contact with a soft substrate, the contact time can be a significant portion of a cycle, invalidating the impact oscillator model, in which the contact time is assumed to be infinitesimally small. Furthermore, we demonstrate that intermittent AFM contact with a soft substrate can excite higher modes in the AFM dynamic response. Traditional ways of modeling the AFM (a one-degree-of-freedom (DOF) system or single-mode analysis) are shown to produce serious errors when applied to this kind of problem. A more reasonable displacement criterion for contact is proposed, in which the contact time is a function of the mechanical properties of the AFM and substrate, the driving frequency and amplitude, the initial conditions, etc. A multi-modal analysis is presented and mode coupling is demonstrated. (c) 2006 Published by Elsevier Ltd.
Abstract:
After verifying the feasibility of numerically simulating stress-wave propagation by comparison with results in the related literature, the propagation of a stress wave through a three-layer granite medium was compared with that through a three-layer medium whose interlayer is aluminum foam. The amplitude attenuation in the latter is far greater than in the former, and its strain energy increases to 1.6 times that of the former, demonstrating that a soft interlayer contributes significantly to energy dissipation at the velocity scales studied. When the incident wavelength is half the total thickness of the three-layer medium and the aluminum foam occupies about 0.2 of the total thickness, the strain energy of the three-layer medium is about 60% of the total system energy; the incident wavelength is then 2.5 times the aluminum-foam thickness, and the composite medium achieves its best attenuation performance.
“Deborah Numbers”, Coupling Multiple Space and Time Scales and Governing Damage Evolution to Failure
Abstract:
Damage accumulation to eventual failure involves two different spatial levels, characterized by the macroscopic length scale L and the mesoscopic microdamage size scale c*, together with the nucleation and growth rates of microdamage, n_N* and V*. It is found that the trans-scale length ratio c*/L does not directly affect the process. Instead, two independent dimensionless numbers play the key role in damage accumulation to failure: the trans-scale number De* = ac*/(LV*), which includes both macroscopic and mesoscopic parameters, and D* = n_N* c*^5/V*, which includes mesoscopic parameters only. This implies that three time scales are involved in the process: the macroscopic imposed time scale t_im = L/a and two mesoscopic time scales for nucleation and growth of damage, t_N = 1/(n_N* c*^4) and t_V = c*/V*. Clearly, the dimensionless number De* = t_V/t_im is the ratio of the microdamage growth time scale to the macroscopically imposed time scale. So, by analogy with the definition of the Deborah number in rheology as the ratio of relaxation time to external time, De* may be called the imposed Deborah number; it represents the competition and coupling between microdamage growth and the macroscopically imposed wave loading. In stress-wave-induced tensile failure (spallation) De* < 1, which means that microdamage has enough time to grow during the macroscopic wave loading; microdamage growth thus appears to be the predominant mechanism governing failure. Moreover, the dimensionless number D* = t_V/t_N characterizes the ratio of the two intrinsic mesoscopic time scales, growth over nucleation; D* may similarly be called the intrinsic Deborah number, since both time scales relate to intrinsic relaxation rather than to the imposed loading. Furthermore, the intrinsic Deborah number D* implies a certain characteristic damage. In particular, it is derived that D* is a proper indicator of the macroscopic critical damage at damage localization, with D* ~ 10^-3 to 10^-2 in spallation. More importantly, we found that this small intrinsic Deborah number D* indicates the energy partition of microdamage dissipation relative to bulk plastic work. This explains why spallation cannot be formulated by a macroscopic energy criterion and must be treated by multi-scale analysis.
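Collecting the abstract's time scales and dimensionless numbers in display form (a restatement of the stated relations, not a new derivation):

```latex
% Imposed (macroscopic) and mesoscopic time scales
t_{\mathrm{im}} = \frac{L}{a}, \qquad
t_N = \frac{1}{n_N^{*}\, c^{*4}}, \qquad
t_V = \frac{c^{*}}{V^{*}}

% Imposed and intrinsic Deborah numbers
De^{*} = \frac{t_V}{t_{\mathrm{im}}} = \frac{a\, c^{*}}{L\, V^{*}}, \qquad
D^{*} = \frac{t_V}{t_N} = \frac{n_N^{*}\, c^{*5}}{V^{*}}
```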
Abstract:
The concept of a biosensor based on imaging ellipsometry was proposed about ten years ago. It has since become an automatic analysis technique for protein detection, with the merits of label-free operation, multi-protein analysis, and real-time analysis of protein interaction processes. Its principle and related technical units, such as the micro-array, micro-fluidic and biomolecule interaction cell, the sampling unit, and calibration for quantitative detection, as well as its applications in the biomedical field, are presented here.
Abstract:
Mytilus californianus (Mollusca: Bivalvia), the California marine mussel, occurs in intertidal populations so dense that they are referred to as "mussel beds." The mussel beds range in physical complexity from structurally simple, essentially mono-layered assemblages to structurally complex, multi-layered assemblages, and the internal environment within the bed varies accordingly. The mussel bed provides, either directly or indirectly, habitat, food and shelter for a large community of associated invertebrates. This study examines the relationship between the physical complexity of the mussel bed habitat and the composition of the associated community.
Abstract:
Optical Coherence Tomography (OCT) is a popular, rapidly growing imaging technique with an increasing number of biomedical applications due to its noninvasive nature. However, there are three major challenges in understanding and improving an OCT system. (1) Obtaining an OCT image is not easy: it either takes a real medical experiment or requires days of computer simulation. Without much data, it is difficult to study the physical processes underlying OCT imaging of different objects, simply because there are not many imaged objects. (2) Interpretation of an OCT image is also hard, and this challenge is more profound than it appears. For instance, it would require a trained expert to tell from an OCT image of human skin whether there is a lesion or not. This is expensive in its own right, but even the expert cannot be sure about the exact size of the lesion or the width of the various skin layers. The take-away message is that analyzing an OCT image even at a high level usually requires a trained expert, and pixel-level interpretation is simply unrealistic. The reason is simple: we have OCT images but not their underlying ground-truth structure, so there is nothing to learn from. (3) The imaging depth of OCT is very limited (millimeter or sub-millimeter in human tissue). While OCT uses infrared light for illumination in order to stay noninvasive, the downside is that photons at such long wavelengths can only penetrate a limited depth into the tissue before being back-scattered. To image a particular region of a tissue, photons first need to reach that region. As a result, OCT signals from deeper regions of the tissue are both weak (since few photons reach there) and distorted (due to multiple scattering of the contributing photons). This fact alone makes OCT images very hard to interpret.
This thesis addresses the above challenges by developing an advanced Monte Carlo simulation platform that is 10,000 times faster than the state-of-the-art simulator in the literature, bringing the simulation time down from 360 hours to a single minute. This powerful simulation tool not only enables us to efficiently generate as many OCT images of objects with arbitrary structure and shape as we want on a common desktop computer, but also provides the underlying ground truth of the simulated images, because we specify it at the beginning of the simulation. This is one of the key contributions of this thesis. What allows us to build such a powerful simulation tool includes a thorough understanding of the signal formation process, a careful implementation of the importance sampling/photon splitting procedure, efficient use of a voxel-based mesh system in determining photon-mesh interception, and parallel computation of the different A-scans that constitute a full OCT image, among other programming and mathematical techniques, which are explained in detail later in the thesis.
Next we aim at the inverse problem: given an OCT image, predict/reconstruct its ground-truth structure at the pixel level. By solving this problem we can interpret an OCT image completely and precisely without the help of a trained expert. It turns out that we can do even better: for simple structures we are able to reconstruct the ground truth of an OCT image more than 98% correctly, and for more complicated structures (e.g., a multi-layered brain structure) about 93%. We achieved this through extensive use of machine learning. The success of the Monte Carlo simulation already puts us in a strong position by providing a great deal of data (effectively unlimited) in the form of (image, truth) pairs. Through a transformation of the high-dimensional response variable, we convert the learning task into a multi-output multi-class classification problem and a multi-output regression problem. We then build a hierarchical architecture of machine learning models (a committee of experts) and train different parts of the architecture with specifically designed data sets. In prediction, an unseen OCT image first goes through a classification model that determines its structure (e.g., the number and types of layers present in the image); the image is then handed to a regression model trained specifically for that particular structure, which predicts the length of the different layers and thereby reconstructs the ground truth of the image. We also demonstrate that ideas from deep learning can further improve the performance.
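The two-stage committee-of-experts idea can be sketched with stand-in models: a nearest-centroid classifier routes each sample to a per-structure least-squares regressor. Everything below (features, labels, model choices) is illustrative; the thesis's actual models and data are not specified here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for simulated OCT image features (not real OCT data).
n, d = 200, 16
X = rng.normal(size=(n, d))
structure = rng.integers(0, 2, size=n)          # structure-type label (e.g. layer count)
thickness = rng.uniform(0.1, 1.0, size=(n, 2))  # per-layer regression targets

# Stage 1: nearest-centroid classifier over structure types.
centroids = {s: X[structure == s].mean(axis=0) for s in (0, 1)}

# Stage 2: one least-squares linear regression "expert" per structure type.
def fit_expert(Xs, Ys):
    A = np.hstack([Xs, np.ones((len(Xs), 1))])  # add intercept column
    W, *_ = np.linalg.lstsq(A, Ys, rcond=None)
    return W

experts = {s: fit_expert(X[structure == s], thickness[structure == s]) for s in (0, 1)}

def reconstruct(x):
    """Classify the structure type, then route to that structure's regressor."""
    s = min(centroids, key=lambda k: np.linalg.norm(x - centroids[k]))
    layers = np.append(x, 1.0) @ experts[s]
    return s, layers

s, layers = reconstruct(X[0])
```

The routing step is the essential design choice: each regression expert only ever sees images of one structure type, so its output dimensionality and training set match that structure exactly.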
It is worth pointing out that solving the inverse problem automatically improves the imaging depth, since previously the lower half of an OCT image (i.e., greater depth) could hardly be seen but now becomes fully resolved. Interestingly, although the OCT signals constituting the lower half of the image are weak, messy, and uninterpretable to the human eye, they still carry enough information that, when fed into a well-trained machine learning model, yields precisely the true structure of the object being imaged. This is just another case where Artificial Intelligence (AI) outperforms humans. To the best of the author's knowledge, this thesis is not only a success but also the first attempt to reconstruct an OCT image at the pixel level. Even to attempt such a task would require fully annotated OCT images, and a lot of them (hundreds or even thousands); this is clearly impossible without a powerful simulation tool like the one developed in this thesis.
Abstract:
[ES] This work examines in depth the factors that influence the international competitiveness of international new ventures and, consequently, their international performance. Combining the disciplines of entrepreneurship and international marketing, it highlights the importance of relational knowledge through the influence of network market orientation on the international results achieved by these firms, via the mediating effect of competitive advantages. The results of hypothesis testing, using structural equation models and multi-sample analysis, confirm that network market orientation is decisive for new ventures in achieving superior international performance. This influence occurs indirectly, through the mediating effect of the differentiation and cost competitive advantages these firms develop. This study extends past research on international entrepreneurship by incorporating new contributions from the marketing discipline regarding the antecedents of competitiveness and the performance of international new ventures in foreign markets. In addition, the results encourage entrepreneurs in the international context to consider the explicit value of factors other than experiential knowledge, which the firm acquires gradually as its experience in the foreign market grows, and to recognize the potential value of the relational knowledge associated with network market orientation as an antecedent for achieving competitive advantages in the international market.
Abstract:
The growing global flow of foreign investment places the regulation of foreign investment at the heart of the concerns of International Law. Within a formal structure spanning multiple levels, International Investment Law undergoes constant readaptation and reconstruction. Several theoretical alternatives have been proposed to answer the many questions about the future of International Investment Law. Over the decades, Brazil chose to remain isolated from the international regime of foreign investment regulation, so that the matter remained governed entirely by a normative mosaic dispersed among constitutional and infra-constitutional norms. Brazil's growing role as a capital-exporting country, especially as a result of the expansion of its oil and gas industry, has led to a recent revision of its foreign-policy guidelines on foreign investment. The decision to negotiate international investment agreements may have several consequences for the domestic legal order, among which stands out the interference of the fair and equitable treatment standard with the State's exercise of regulatory power. The recurrent invocation of the fair and equitable treatment standard contrasts with the uncertainty about its content. Even though this standard of treatment may be theoretically compatible with Brazilian law, exposure to the creative interpretations of arbitral tribunals may pose a risk to Brazil, which must carefully assess the appropriateness of including a fair and equitable treatment clause in the agreements currently under negotiation.