845 results for Box-Cox transformation and quintile-based capability indices
Abstract:
The main purpose of this research is to identify the hidden knowledge and learning mechanisms in the organization in order to disclose tacit knowledge and transform it into explicit knowledge. Most firms tend to duplicate their efforts by acquiring extra knowledge and new learning skills while forgetting to exploit the existing ones, thus wasting resources that could be applied to increase added value within the firm's overall competitive advantage. This unique value, in the shape of the creation, acquisition, transformation and application of learning and knowledge, is not disseminated throughout the individual, the group and, ultimately, the company itself. This work is based on three variables that explain the behaviour of learning as the process of construction and acquisition of knowledge, namely internal social capital, technology and external social capital, which include the main attributes of learning and knowledge that help us to capture the essence of this symbiosis. Absorptive capacity provides the right tool to explore this uncertainty: within the firm it is possible to achieve the right match between the learning skills and the knowledge needed to support the overall strategy of the firm. This study is based on a sample of the Portuguese textile industry and on a multisectorial analysis that makes a cross-functional analysis possible, checking the validity of the results in order to better understand and capture the dynamics of organizational behaviour.
Abstract:
In recent years, ionic liquids (ILs) have been applied in the life sciences. ILs are being produced with active pharmaceutical ingredients (APIs), as they can reduce polymorphism and drug solubility problems [1]. ILs are also being applied as drug delivery devices in innovative therapies. What is appealing in ILs is their building-block platform: the counter-ion can be carefully chosen in order to avoid undesirable side effects, or to enable innovative therapies in which two active ions are paired. This work presents ILs based on ampicillin (an antibacterial agent) and ILs based on amphotericin B, together with studies indicating that ILs based on ampicillin could reverse resistance in some bacteria. The ILs produced in this work were synthesized by the neutralization method described in Ferraz et al. [2]. The ampicillin anion was combined with the following organic cations: 1-ethyl-3-methylimidazolium, [EMIM]; 1-hydroxyethyl-3-methylimidazolium, [C2OHMIM]; choline, [cholin]; tetraethylammonium, [TEA]; cetylpyridinium, [C16pyr]; and trihexyltetradecylphosphonium, [P6,6,6,14]. Amphotericin B was combined with [C16pyr], [cholin] and 1-methoxyethyl-3-methylimidazolium, [C3OMIM]. The IL-APIs based on ampicillin [2] were tested against the sensitive Gram-negative bacteria Escherichia coli ATCC 25922 and Klebsiella pneumoniae (clinical isolate), as well as against the Gram-positive Staphylococcus aureus ATCC 25923, Staphylococcus epidermidis and Enterococcus faecalis. The resistance developed by bacteria to antibiotics is a serious public health threat that demands new and urgent measures, so we also studied the antibacterial activity of these compounds against a panel of resistant bacteria (clinically isolated strains): E. coli CTX M9, E. coli TEM CTX M9, E. coli TEM1, E. coli CTX M2 and E. coli AmpC Mox2. In this work we demonstrate that it is possible to produce ILs from antibacterial and antifungal compounds, and we show that the new ILs can reverse bacterial resistance. With a careful choice of the organic cation, it is possible to obtain important biological and physico-chemical properties. This work also shows that the ion pair is fundamental to the mechanism of action of ampicillin.
Abstract:
Phenolic acids are ubiquitous antioxidants accounting for approximately one third of the phenolic compounds in our diet. Their importance is supported by epidemiological studies that suggest an inverse relationship between the dietary intake of phenolic antioxidants and the occurrence of diseases such as cancer and neurodegenerative disorders. However, until now, most natural antioxidants have had limited therapeutic success, a fact that could be related to their limited distribution throughout the body and to the inherent difficulties in reaching the target sites. The development of phenolic antioxidants based on a hybrid concept and structurally based on natural hydroxybenzoic (gallic acid) and hydroxycinnamic (caffeic acid) scaffolds seems to be a suitable solution to surpass the mentioned drawbacks. Galloyl-cinnamic hybrids were synthesized, and their antioxidant activity as well as their partition coefficients and redox potentials were evaluated. The structure-property-activity relationship (SPAR) study revealed a correlation between the redox potentials and the antioxidant activity. The galloyl-cinnamic acid hybrid stands out as the best antioxidant, outperforming the effect of a blend of gallic acid plus caffeic acid and endorsing the hypothesis that the whole is greater than the sum of the parts. In addition, some hybrid compounds possess an appropriate lipophilicity allowing their application as chain-breaking antioxidants in biomembranes or other types of lipidic systems. Their predicted ADME properties are also in accordance with the general requirements for drug-like compounds. Accordingly, these phenolic hybrids can be seen as potential antioxidants for tackling the oxidative status linked to neurodegenerative, inflammatory or cancer processes.
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, in fulfilment of the requirements for the degree of Master in Environmental Engineering, Ecological Engineering profile.
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scale at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13], while the nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]; the nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, by the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). The maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases, however, the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation method that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of the observed data into statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of selecting the pixels that play the role of mixed sources is not straightforward.
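As a concrete illustration of the constrained least-squares route mentioned above, the following minimal Python sketch estimates the abundance fractions of one pixel under the linear mixing model. The endmember matrix, noise level, and the SciPy-based solver are illustrative assumptions, not the chapter's own implementation; the final renormalization is a crude stand-in for the full sum-to-one constraint.

```python
import numpy as np
from scipy.optimize import lsq_linear

# Linear mixing model: y = M @ a + n, with abundances a >= 0.
# Synthetic example: 50 bands, 3 endmembers (all values hypothetical).
rng = np.random.default_rng(0)
M = rng.uniform(0.1, 0.9, size=(50, 3))        # endmember signatures (assumed known)
a_true = np.array([0.6, 0.3, 0.1])             # true abundances, summing to one
y = M @ a_true + rng.normal(0, 0.01, size=50)  # noisy observed pixel

# Nonnegativity-constrained least squares: min ||M a - y||^2 subject to a >= 0.
res = lsq_linear(M, y, bounds=(0, np.inf))
a_hat = res.x / res.x.sum()   # renormalize to approximate full additivity
print(a_hat)
```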
In the second approach, ICA is based on the assumption of mutually independent sources, which does not hold for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among the abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades the ICA performance. Independent factor analysis (IFA) [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps: first, source densities and noise covariance are estimated from the observed data by maximum likelihood; second, the sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique for unmixing independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Under the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are computationally complex: usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at lower computational complexity, algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel per endmember. This is a strong requirement that may not hold in some data sets; in any case, these algorithms find the set of purest pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step that lowers the computational complexity and improves the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. A newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations; to overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model that takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL) based algorithm [55].
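The dimensionality reduction step described above can be sketched as follows. This is a minimal illustration using scikit-learn's PCA on synthetic data (the data sizes, noise level, and library choice are assumptions); it exploits the fact that, with p endmembers, the noise-free spectra lie in a (p-1)-dimensional affine set, so p-1 principal components capture essentially all the signal.

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic data cube flattened to (pixels, bands); all values hypothetical.
rng = np.random.default_rng(1)
n_pixels, n_bands, n_endmembers = 1000, 100, 4
M = rng.uniform(0, 1, size=(n_bands, n_endmembers))      # endmember signatures
A = rng.dirichlet(np.ones(n_endmembers), size=n_pixels)  # abundances on the simplex
Y = A @ M.T + rng.normal(0, 0.005, size=(n_pixels, n_bands))

# With p endmembers, p-1 principal components span the signal subspace.
pca = PCA(n_components=n_endmembers - 1)
Y_reduced = pca.fit_transform(Y)
print(Y_reduced.shape, pca.explained_variance_ratio_.sum())
```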
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end the chapter by sketching a new methodology to blindly unmix hyperspectral data, in which the abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing the independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
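The source-dependence issue that motivates the Dirichlet model can be verified numerically: sampling abundance fractions from a Dirichlet distribution enforces positivity and full additivity, and the resulting fractions are negatively correlated, so the independence assumed by ICA and IFA cannot hold exactly. A minimal sketch (the concentration parameters are arbitrary):

```python
import numpy as np

# Abundance fractions drawn from a Dirichlet distribution are nonnegative
# and sum to one (full additivity), which makes them mutually dependent:
# the correlation between any two fractions is negative.
rng = np.random.default_rng(2)
A = rng.dirichlet(alpha=[2.0, 2.0, 2.0], size=100_000)  # 3 endmembers

print(A.sum(axis=1)[:5])             # each row sums to 1
print(np.corrcoef(A, rowvar=False))  # off-diagonal entries are negative
```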
Abstract:
A study was conducted on 16 patients with pemphigus foliaceus, ten of them with the localized form (group G1) and six with the disseminated form (group G2). These patients were submitted to full blood counts, quantitation of mononuclear cell subpopulations by monoclonal antibodies, study of blastic lymphocyte transformation, and quantitation of circulating antibodies by the indirect immunofluorescence test, in order to correlate their clinical signs and symptoms and laboratory data with their immunological profile, and to determine the relationship between circulating autoantibody titers and lesion intensity and course of lesions under treatment. Leucocytosis was observed especially in group G2. All patients showed decreased relative CD3+ and CD4+ values and a tendency to decreased relative values of the CD8+ subpopulation. Blastic lymphocyte transformation indices in the presence of phytohemagglutinin were higher in patients (group G1+G2) than in controls. The indirect immunofluorescence test was positive in 100% of G2 patients and in 80% of G1 patients. The median value for the titers was higher in group G2 than in group G1. Analysis of the results as a whole permits us to conclude that cell immunity was preserved and that there was a relationship between antibody titers detected by the direct immunofluorescence test and extent of skin lesions.
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
Understanding how the brain works has been one of the greatest goals of mankind. This desire fuels the scientific community to pursue novel techniques able to acquire the complex information produced by the brain at any given moment. Electrocorticography (ECoG) is one of those techniques: by placing conductive electrodes over the dura, or directly over the cortex, and measuring the electric potential variation, one can acquire information regarding the activation of those areas. In this work, transparent ECoGs (TrECoGs) are fabricated through thin-film deposition of the Transparent Conductive Oxides (TCOs) Indium-Zinc-Oxide (IZO) and Gallium-Zinc-Oxide (GZO). Five distinct devices were fabricated via shadow masking and photolithography. The data acquired and presented in this work validate the fabricated TrECoGs as efficient devices for recording brain activity. The best results were obtained for the GZO-based TrECoG, which presented an average impedance of 36 kΩ at 1 kHz for 500 μm diameter electrodes, a transmittance close to 90% across the visible spectrum, and a clear capability to detect brain signal variations. The IZO-based devices also presented high transmittance levels (90%), but with higher impedances, which ranged from 40 kΩ to 100 kΩ.
Abstract:
In: A. Cunha, E. Kindler (eds.): Proceedings of the Fourth International Workshop on Bidirectional Transformations (Bx 2015), L’Aquila, Italy, July 24, 2015, published at http://ceur-ws.org
Abstract:
Background: 30-40% of cardiac resynchronization therapy cases do not achieve favorable outcomes. Objective: This study aimed to develop predictive models for the combined endpoint of cardiac death and transplantation (Tx) at different stages of cardiac resynchronization therapy (CRT). Methods: Prospective observational study of 116 patients aged 64.8 ± 11.1 years, 68.1% of whom had functional class (FC) III and 31.9% ambulatory class IV. Clinical, electrocardiographic and echocardiographic variables were assessed using Cox regression and Kaplan-Meier curves. Results: The cardiac mortality/Tx rate was 16.3% during the follow-up period of 34.0 ± 17.9 months. Prior to implantation, right ventricular dysfunction (RVD), ejection fraction < 25% and use of high doses of diuretics (HDD) increased the risk of cardiac death and Tx by 3.9-, 4.8- and 5.9-fold, respectively. In the first year after CRT, RVD, HDD and hospitalization due to congestive heart failure increased the risk of death, with hazard ratios of 3.5, 5.3 and 12.5, respectively. In the second year after CRT, RVD and FC III/IV were significant risk factors for mortality in the multivariate Cox model. The accuracy rates of the models were 84.6% at preimplantation, 93% in the first year after CRT, and 90.5% in the second year after CRT. The models were validated by bootstrapping. Conclusion: We developed predictive models of cardiac death and Tx at different stages of CRT based on the analysis of simple and easily obtainable clinical and echocardiographic variables. The models showed good accuracy and adjustment, were validated internally, and are useful in the selection, monitoring and counseling of patients indicated for CRT.
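For readers who want to reproduce this kind of analysis, the sketch below fits a Cox proportional hazards model with the lifelines Python package. The column names and toy values are hypothetical stand-ins for the study's variables, which are not available here; with real data one would also check proportional hazards assumptions and validate by bootstrapping, as the authors did.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Toy data frame mirroring the reported predictors: right ventricular
# dysfunction (rvd), ejection fraction < 25% (ef_lt25), and high-dose
# diuretics (hdd). All values below are hypothetical, for illustration only.
df = pd.DataFrame({
    "months":  [12, 34, 8, 40, 22, 15, 30, 5],  # follow-up time
    "event":   [1, 0, 1, 0, 1, 0, 0, 1],        # cardiac death/Tx indicator
    "rvd":     [1, 0, 1, 0, 0, 1, 0, 1],
    "ef_lt25": [1, 0, 0, 0, 1, 1, 0, 1],
    "hdd":     [0, 1, 1, 0, 1, 0, 0, 1],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="event")
cph.print_summary()  # hazard ratios analogous to those reported in the study
```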
Abstract:
Although melanin is the most common pigment in animal integuments, the adaptive function of variation in melanin-based coloration remains poorly understood. The individual fitness returns associated with melanin pigments can be variable across species as these pigments can have physical and biological protective properties and genes involved in melanogenesis may vary in the intensity of pleiotropic effects. Moreover, dark and pale coloration can also enhance camouflage in alternative habitats and melanin-based coloration can be involved in social interactions. We investigated whether darker or paler individuals achieve a higher fitness in birds, a taxon wherein associations between melanin-based coloration and fitness parameters have been studied in a large number of species. A meta-analysis showed that the degree of melanin-based coloration was not significantly associated with laying date, clutch size, brood size, and survival across 26 species. Similar results were found when restricting the analyses to non-sexually dimorphic birds, colour polymorphic and monomorphic species, in passerines and non-passerines and in species for which inter-individual variation in melanism is due to colour intensity. However, eumelanic coloration was positively associated with clutch and brood size in sexually dimorphic species and those that vary in the size of black patches, respectively. Given that greater extent of melanin-based coloration was positively associated with reproductive parameters and survival in some species but negatively in other species, we conclude that in birds the sign and magnitude of selection exerted on melanin-based coloration is species- or trait-specific.
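The abstract does not specify the pooling method, but a random-effects synthesis of per-species effect sizes of the kind underlying such a meta-analysis can be sketched as follows; the DerSimonian-Laird estimator is a common choice, and the correlations and sample sizes below are hypothetical, not the paper's data.

```python
import numpy as np

# DerSimonian-Laird random-effects pooling of per-species effect sizes
# (Fisher z-transformed correlations); all values below are hypothetical.
r = np.array([0.12, -0.05, 0.30, 0.02, -0.18])  # per-species correlations
n = np.array([40, 25, 60, 35, 50])              # per-species sample sizes

z = np.arctanh(r)   # Fisher z transform
v = 1.0 / (n - 3)   # within-study variance of z
w = 1.0 / v

# Between-study heterogeneity (tau^2) by the method of moments.
z_fixed = np.sum(w * z) / np.sum(w)
Q = np.sum(w * (z - z_fixed) ** 2)
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (len(z) - 1)) / c)

w_star = 1.0 / (v + tau2)  # random-effects weights
z_pooled = np.sum(w_star * z) / np.sum(w_star)
se = np.sqrt(1.0 / np.sum(w_star))
# Pooled correlation with a 95% confidence interval, back-transformed.
print(np.tanh(z_pooled), np.tanh(z_pooled - 1.96 * se), np.tanh(z_pooled + 1.96 * se))
```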
Abstract:
The drug discovery process has been profoundly changed in recent years by the adoption of computational methods that help design new drug candidates more rapidly and at lower cost. In silico drug design consists of a collection of tools that support rational decisions at the different steps of the drug discovery process, such as the identification of a biomolecular target of therapeutic interest, the selection or design of new lead compounds, and their modification to obtain better affinities as well as better pharmacokinetic and pharmacodynamic properties. Among the different tools available, this review places particular emphasis on molecular docking, virtual high-throughput screening and fragment-based ligand design.
Abstract:
OBJECTIVES: We examined the correlation between the quantitative margin analysis of two laboratory test methods (Berlin, Zurich) and the clinical outcome of Class V restorations. METHODS: Prospective clinical studies with an observation period of at least 18 months, for which laboratory data were also available, were searched in the literature. The clinical outcome variables were retention loss, marginal discoloration, detectable margins and secondary caries. Forty-four clinical studies matched the inclusion criteria, covering 34 adhesive systems for which laboratory data were also present. For both laboratory test methods and the clinical studies, an index was formulated to better compare the in vitro and in vivo results. Linear mixed models that included a random study effect were fitted. As most clinical data were available for 12 and 24 months, the main analysis was restricted to these recall intervals. RESULTS: The comparative analysis revealed a weak correlation between the clinical index and both in vitro indices. The correlation was statistically significant for the Berlin method but not for the Zurich method, and it was present only when comparing studies that used the same composite in the in vitro and in vivo studies. When specific cut-off values were defined, the prognosis of good clinical performance of an adhesive system based on in vitro results was correct for 78% (Berlin) or 100% (Zurich) of the systems; for poor performance it was 67% and 60%, respectively. No correlation was found between the two in vitro methods. SIGNIFICANCE: The surrogate parameter "marginal adaptation" of restorations placed in extracted teeth has mediocre value for predicting the clinical performance of an adhesive system in cervical cavities. The composite is an important factor for a successful prediction. The in vitro/in vivo comparison is sometimes hampered by the great variability of clinical results for the same adhesive system.
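A linear mixed model with a random study effect, as described in the Methods, can be sketched as follows. This uses statsmodels in Python with hypothetical index values and study labels, since the study's data are not reproduced here; the random intercept per study absorbs between-study differences while the fixed slope captures the in vitro/in vivo correlation of interest.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one clinical index and one in vitro index
# per adhesive system, grouped by study (all values invented for illustration).
df = pd.DataFrame({
    "clinical_index": [0.82, 0.75, 0.91, 0.60, 0.70, 0.88, 0.65, 0.79],
    "invitro_index":  [0.78, 0.70, 0.85, 0.55, 0.72, 0.90, 0.58, 0.74],
    "study":          ["s1", "s1", "s2", "s2", "s3", "s3", "s4", "s4"],
})

# Random intercept for each study; fixed slope for the in vitro index.
model = smf.mixedlm("clinical_index ~ invitro_index", df, groups=df["study"])
result = model.fit()
print(result.summary())
```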
Abstract:
This paper presents an analysis of motor vehicle insurance claims relating to vehicle damage and to associated medical expenses. We use univariate severity distributions estimated with parametric and non-parametric methods, implemented in the statistical package R. The parametric analysis is limited to the estimation of normal and lognormal distributions for each of the two claim types. The nonparametric analysis involves kernel density estimation, and we illustrate the benefits of applying transformations to the data prior to employing kernel-based methods: a log-transformation and an optimal transformation, chosen from a class of transformations, that produces symmetry in the data. The central aim of this paper is to provide educators with material that can be used in the classroom to teach statistical estimation methods, goodness-of-fit analysis and, importantly, statistical computing in the context of insurance and risk management. To this end, the Appendix includes all the R code used in the analysis, so that readers, both students and educators, can fully explore the techniques described.
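The paper's own code is in R; as a language-neutral illustration of the same idea, the Python sketch below fits a lognormal distribution by maximum likelihood and compares it with a kernel density estimate computed on log-transformed claims and mapped back by the change-of-variables formula. The simulated claim amounts are hypothetical, not the paper's data.

```python
import numpy as np
from scipy import stats

# Hypothetical right-skewed claim severities (e.g., vehicle damage amounts).
rng = np.random.default_rng(3)
claims = rng.lognormal(mean=8.0, sigma=1.2, size=500)

# Parametric fit: lognormal via maximum likelihood on the raw claims.
shape, loc, scale = stats.lognorm.fit(claims, floc=0)

# Nonparametric fit: Gaussian KDE on log-transformed claims, mapped back by
# the change of variables f_X(x) = f_Y(log x) / x, where Y = log X.
kde = stats.gaussian_kde(np.log(claims))
x = np.linspace(claims.min(), claims.max(), 200)
kde_density = kde(np.log(x)) / x
lognorm_density = stats.lognorm.pdf(x, shape, loc, scale)
print(kde_density[:3], lognorm_density[:3])  # compare the two estimates
```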
Abstract:
Intercellular communication is achieved at specialized regions of the plasma membrane by gap junctions. The proteins constituting gap junctions are called connexins and are encoded by a family of genes highly conserved during evolution. In the adult mouse, four connexins (Cxs) are known to be expressed in the vasculature: Cx37, Cx40, Cx43 and Cx45. Several recent studies have provided evidence that vascular connexin expression and blood pressure regulation are closely linked, suggesting a role for connexins in the control of blood pressure. However, the precise function that each vascular connexin plays under physiological and pathophysiological conditions has still not been elucidated. In this context, this work was dedicated to evaluating the contribution of each of the four vascular connexins to the control of vascular function and to blood pressure regulation. In the present work, we first demonstrated that the vascular connexins are differently regulated by hypertension in the mouse aorta. We also observed that endothelial connexins play a regulatory role in eNOS expression levels and function in the aorta, and therefore in the control of vascular tone. We then demonstrated that Cx40 plays a pivotal role in the kidney by regulating the renal levels of COX-2 and nNOS, two key enzymes of the macula densa known to participate in the control of the renin-secreting cells. We also found that Cx43 forms the functional gap junctions involved in intercellular Ca2+ wave propagation between vascular smooth muscle cells. Finally, we have started to generate transgenic mice expressing Cx40 specifically in the endothelium, to investigate the involvement of Cx40 in vasomotor tone, or in the renin-secreting cells, to evaluate the role of Cx40 in the control of renin secretion. In conclusion, this work has allowed us to identify new roles for connexins in the vasculature. Our results suggest that vascular connexins could be interesting targets for new therapies treating hypertension and vascular diseases.