950 results for "authenticity label"
Abstract:
Part II - Christoph Neuenschwander: Language ideologies in the legitimisation of Tok Pisin as a lingua franca. Pidgins and Creoles all over the world seem to share common aspects in the historical circumstances of their genesis and evolution. They all emerged in the context of colonialism, in which not only colonisers and colonised, but also the various groups of the colonised population, spoke different languages. Pidgins and Creoles, quite simply, resulted from the need to communicate. Yet the degree to which they became accepted as a lingua franca, or indeed as a linguistic variety in their own right, differs strikingly from variety to variety. The current research project focuses on two Pacific Creoles: Tok Pisin, spoken in Papua New Guinea, and Hawai'i Creole English (HCE). Whereas Tok Pisin is a highly stabilised and legitimised variety, used as a lingua franca in one of the most linguistically diverse countries on Earth, HCE seems to be regarded as nothing more than broken English by a vast majority of the Hawai'ian population. The aim of this project is to examine metalinguistic comments about both varieties and to analyse the public discourses in which the status of Tok Pisin and HCE were and still are negotiated. More precisely, language ideologies shall be identified and compared in the two contexts. Ultimately, this might help us understand the mechanisms that underlie the processes of legitimisation and stigmatisation. As Laura Tresch will run a parallel research project on language ideologies concerning new dialects (New Zealand English and Estuary English), a comparison between the findings of both projects may produce even more insights into those mechanisms. The next months of the project will be dedicated to investigating the metalinguistic discourse in Papua New Guinea. In order to collect a wide range of manifestations of language ideologies, i.e. instances of (lay and academic) commentary on Tok Pisin, it makes sense to look at a relatively large period of time and to single out events that are likely to have stimulated such manifestations. In the history of Papua New Guinea, and in the history of Tok Pisin in particular, several important social and political events concerning the use and the status of the language can be detected. One example might be public debates on education policy. The presentation at the CSLS Winter School 2014 will provide a brief introduction to the history of Tok Pisin and raise the methodological question of how to spot potential sites of language-ideological production.
Abstract:
OBJECTIVE In Europe, growth hormone (GH) treatment for children born small for gestational age (SGA) can only be initiated after 4 years of age. However, younger age at treatment initiation is a predictor of a favourable response. The aim was to assess the effect of GH treatment on early growth and cognitive functioning in very young (<30 months), short-stature children born SGA. DESIGN A 2-year, randomized, controlled, multicentre study (NCT00627523; EGN study), in which patients received either GH treatment or no treatment for 24 months. PATIENTS Children aged 19-29 months diagnosed as SGA at birth, and for whom sufficient early growth data were available, were eligible. Patients were randomized (1:1) to GH treatment (Genotropin®, Pfizer Inc.) at a dose of 0·035 mg/kg/day by subcutaneous injection, or no treatment. MEASUREMENTS The primary objective was to assess the change from baseline in height standard deviation score (SDS) after 24 months of GH treatment. RESULTS Change from baseline in height SDS was significantly greater in the GH treatment group than in the control group at both month 12 (1·03 vs 0·14) and month 24 (1·63 vs 0·43; both P < 0·001). Growth velocity SDS was significantly higher in the GH treatment group than in the control group at 12 months (P < 0·001), but not at 24 months. There was no significant difference in mental or psychomotor development indices between the two groups. CONCLUSIONS GH treatment for 24 months in very young short-stature children born SGA resulted in a significant increase in height SDS compared with no treatment.
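The height SDS endpoint used above can be made concrete with a minimal sketch; the reference mean and SD values below are hypothetical illustrations, not values from the study:

```python
def height_sds(height_cm, ref_mean_cm, ref_sd_cm):
    """Height standard deviation score: distance from the age- and
    sex-matched reference mean, in units of the reference SD."""
    return (height_cm - ref_mean_cm) / ref_sd_cm

# Hypothetical reference values (not from the study): a child measuring
# 80 cm against a reference mean of 86 cm with SD 3 cm is 2 SDs short.
sds_baseline = height_sds(80.0, 86.0, 3.0)    # -2.0, i.e. short stature
# The trial's primary endpoint is the change in SDS from baseline.
sds_month24 = height_sds(89.0, 92.0, 3.2)
delta = sds_month24 - sds_baseline
```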
Abstract:
Workshop Overview The use of special effects (moulage) is a way to augment the authenticity of a scenario in simulation. This workshop will introduce different techniques of moulage (oil-based cream colors, watercolors, transfer tattoos and 3D prosthetics). The participants will have the opportunity to explore these techniques by applying various moulages. They will compare the techniques and discuss their advantages and disadvantages. Moreover, strategies for standardization and quality assurance will be discussed. Workshop Rationale Moulage supports sensory perception in a scenario (1). It can provide evaluation clues (2) and help learners (and SPs) engage in the simulation. However, it is of crucial importance that the simulated physical pathologies are represented accurately and reliably. Accuracy is achieved by using the appropriate technique, which requires knowledge and practice. By providing information about different moulage techniques, we hope to increase participants' knowledge of moulage during the workshop, and by applying moulages in various techniques we will practice together. As standardization is critical for simulation scenarios in assessment (3, 4), strategies for standardization of moulage will be introduced and discussed.
Workshop Objectives During the workshop participants will:
- gain knowledge about different techniques of moulage
- practice moulages in various techniques
- discuss the advantages and disadvantages of moulage techniques
- describe strategies for standardization and quality assurance of moulage
Planned Format
5 min Introduction
15 min Overview – Background & Theory (presentation)
15 min Application of moulage for ankle sprain in 4 different techniques (oil-based cream color, watercolor, temporary tattoo, 3D prosthetic) in small groups
5 min Comparing the results by interactive viewing of prepared moulages
15 min Application of moulages for burn in different techniques in small groups
5 min Comparing the results by interactive viewing of prepared moulages
5 min Sharing experiences with different techniques in small groups
20 min Discussion of the techniques including standardization and quality assurance strategies (plenary discussion)
5 min Summary / Take-home points
Abstract:
We have recently demonstrated a biosensor based on a lattice of SU8 pillars on a 1 μm SiO2/Si wafer, measuring vertical reflectivity as a function of wavelength. Biodetection has been proven with the combination of Bovine Serum Albumin (BSA) protein and its antibody (antiBSA). A BSA layer is attached to the pillars; the biorecognition of antiBSA produces a shift in the reflectivity curve related to the concentration of antiBSA. A detection limit on the order of 2 ng/ml is achieved for a rhombic lattice of pillars with a lattice parameter (a) of 800 nm, a height (h) of 420 nm and a diameter (d) of 200 nm. These results correlate with calculations using the 3D finite-difference time-domain (FDTD) method. A simplified 2D model is proposed, consisting of a multilayer stack in which the pillars are replaced by a 420 nm layer with an effective refractive index obtained using a Beam Propagation Method (BPM) algorithm. Results provided by this model correlate well with experimental data, reducing computation time from one day to 15 minutes and giving a fast but accurate tool to optimize the design, maximize sensitivity and analyze the influence of different variables (diameter, height and lattice parameter). Sensitivity is obtained for a variety of configurations, reaching a limit of detection under 1 ng/ml. The optimum design is chosen not only for its sensitivity but also for its feasibility, from both a fabrication (limited by aspect ratio and pillar proximity) and a fluidic point of view. (© 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
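The simplified 2D model described above can be sketched as a normal-incidence transfer-matrix calculation in which the pillar lattice is collapsed into a single 420 nm layer of effective index on the SiO2/Si stack. The index values below are assumptions for illustration only; the paper derives its effective index with a BPM algorithm, which is not reproduced here.

```python
import numpy as np

def layer_matrix(n, d, lam):
    """Characteristic matrix of one homogeneous layer at normal incidence
    (n: refractive index; d and lam in the same length units)."""
    delta = 2 * np.pi * n * d / lam
    return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                     [1j * n * np.sin(delta), np.cos(delta)]])

def reflectivity(layers, n_in, n_sub, lam):
    """Reflectivity of a stack of (index, thickness) layers between an
    incidence medium n_in and a substrate n_sub."""
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        M = M @ layer_matrix(n, d, lam)
    num = n_in * (M[0, 0] + M[0, 1] * n_sub) - (M[1, 0] + M[1, 1] * n_sub)
    den = n_in * (M[0, 0] + M[0, 1] * n_sub) + (M[1, 0] + M[1, 1] * n_sub)
    return abs(num / den) ** 2

# Pillars collapsed into one 420 nm effective layer (n_eff assumed),
# on 1 um of SiO2 over a Si substrate; wavelengths in nm.
stack = [(1.45, 420.0),    # effective pillar layer (assumed n_eff)
         (1.46, 1000.0)]   # SiO2 spacer
wavelengths = np.linspace(500, 900, 401)
R = [reflectivity(stack, n_in=1.0, n_sub=3.7, lam=w) for w in wavelengths]
# Binding raises n_eff slightly, shifting the features of R; tracking that
# shift against antiBSA concentration is the sensing principle.
```

Sweeping n_eff in small steps reproduces the kind of fast design exploration the abstract attributes to the simplified model.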
Abstract:
In previous works we demonstrated the benefits of micro-nano patterned materials used as biophotonic sensing cells (BICELLs): micro-nano photonic structures with bioreceptors immobilized on their surface, capable of recognizing molecular binding by optical transduction. Gestrinone/anti-gestrinone and BSA/anti-BSA pairs were tested under different optical configurations to experimentally validate the biosensing capability of these bio-sensitive photonic architectures. Moreover, three-dimensional Finite Difference Time Domain (FDTD) models were employed to simulate the optical response of these structures. For this article, we have developed an effective analytical simulation methodology capable of simulating complex biophotonic sensing architectures. This simulation method has been tested and compared against previous experimental results and FDTD models. Moreover, it can be used to efficiently design and optimize any structure as a BICELL. In particular, six different BICELL types have been optimized for this article. To carry out this optimization we considered three figures of merit: optical sensitivity, Q-factor and signal amplitude. The final objective of this paper is not only to validate a suitable and efficient optical simulation methodology but also to demonstrate the capability of this method for analyzing the performance of a given number of BICELLs for label-free biosensing.
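Two of the figures of merit above can be made concrete with a small sketch; these are the standard textbook definitions for resonant refractometric sensors, and the numbers are illustrative, not taken from the paper:

```python
def bulk_sensitivity(dlambda_nm, dn):
    """Spectral sensitivity S = Δλ/Δn, in nm per refractive index unit:
    how far a spectral feature moves per unit change of index."""
    return dlambda_nm / dn

def q_factor(lambda0_nm, fwhm_nm):
    """Quality factor of a resonance: centre wavelength over linewidth.
    Narrow resonances (high Q) make small shifts easier to resolve."""
    return lambda0_nm / fwhm_nm

# Illustrative: a 2 nm dip shift per 0.01 RIU, and a 5 nm-wide dip at 650 nm.
s = bulk_sensitivity(2.0, 0.01)   # ~200 nm/RIU
q = q_factor(650.0, 5.0)          # 130.0
```

The third figure of merit, signal amplitude, is simply the depth of the spectral feature being tracked.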
Abstract:
The label-free immunoassay sector is a ferment of activity, experiencing rapid growth as new technologies come forward and achieve acceptance. The landscape is changing in a "bottom-up" fashion, as individual companies promote individual technologies and find a market for them. Therefore, each of the companies operating in the label-free immunoassay sector offers a technology that is in some way unique and proprietary. However, few label-free technologies are currently on the market for Point-of-Care (PoC) and High-Throughput Screening (HTS) applications, where mature labelled technologies have taken the market.
Abstract:
The field of optical label-free biosensors has become a topic of interest during the past years, with devices based on the detection of angular or wavelength shifts of optical modes [1]. Common parameters to characterize their performance are the Limit of Detection (LOD), defined as the minimum change of refractive index upon the sensing surface that the device is able to detect, and the BioLOD, which represents the minimum amount of target analyte accurately resolved by the system, with units of concentration (common units are ppm, ng/ml or nM). The LOD gives a first value to compare different biosensors, and is obtained both theoretically (using photonic calculation tools) and experimentally, by covering the sensing area with fluids of different refractive indexes.
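The LOD defined above is commonly estimated from the readout noise and the device sensitivity with a k-sigma rule. A minimal sketch, assuming the widespread 3σ convention (the reference [1] may use a different criterion) and illustrative numbers:

```python
def lod_riu(noise_sigma_nm, sensitivity_nm_per_riu, k=3.0):
    """LOD in refractive index units: the smallest index change whose
    spectral shift exceeds k times the wavelength-readout noise."""
    return k * noise_sigma_nm / sensitivity_nm_per_riu

# Illustrative numbers: 0.02 nm spectral noise and 200 nm/RIU sensitivity.
lod = lod_riu(0.02, 200.0)   # 3 * 0.02 / 200 = 3e-4 RIU
```

The BioLOD is then obtained empirically by relating such index changes to analyte concentrations via a calibration curve.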
Abstract:
Biological detection sectors continually demand more efficient and accurate analysis and diagnostic techniques to identify diseases and develop new drugs. There is currently a pressing need for diagnostic tools that combine sensitivity, speed, simplicity and affordability for applications in sectors such as healthcare, food, the environment and security. In the clinical field, profound technological advances are needed to deliver fast, accurate, reliable and affordable analyses, leading to clinical and economic improvement through efficient diagnosis. In particular, there is growing interest in decentralizing clinical diagnosis through detection platforms close to the end user, known as POC (Point-of-Care) devices. The use of POC devices (referring to diagnosis close to the end user, outside the clinical analysis laboratory) for in vitro detection (IVD) will be extremely useful in health centres, clinics, hospital units, workplaces and even at home. Furthermore, the development of genomics, proteomics and the other technologies known as "omics" (e.g. genomics, transcriptomics, proteomics, metabolomics, lipidomics) is increasing the demand for far more advanced technologies, with a clear orientation towards personalized medicine and the need to adapt treatments in the case of complex diseases. Biophotonic Sensing Cells (BICELLs) have recently been defined as a novel methodology for the detection of biological agents, offering a set of attractive characteristics: multiplexing capability, high sensitivity, the possibility of measuring in a droplet, and compatibility with other technologies.
In this work, different types of BICELLs are studied and optimized, and a set of figures of merit is assessed from the point of view of the optical reader to be employed.
Abstract:
The use of Biophotonic Sensing Cells (BICELLs) based on micro-nano patterned photonic architectures has recently been proven as an efficient methodology for label-free biosensing using optical interrogation [1]. Accordingly, we have studied the optical response of a specific BICELL typology consisting of SU-8 structures. This material is biocompatible, and different types of biomolecules can be immobilized on its sensing surface. In particular, we have measured the optical response for a biomarker used in the clinical diagnosis of dry eye. Although different proteins related to the ocular surface (dry eye) could be studied, such as PRDX5, ANXA1, ANXA11, CST4, PLAA and S100A6, in this work PLAA (phospholipase A2) is studied by means of label-free biosensing based on BICELLs, analyzing performance and specificity according to mean concentration values in ROC curves.
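The ROC analysis mentioned above can be sketched with a rank-based AUC (the Mann-Whitney formulation). The readout values below are hypothetical stand-ins for biosensor measurements, not data from the study:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a positive sample outscores a negative one
    (ties count one half)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical optical readouts for samples with and without the biomarker.
positives = [0.82, 0.74, 0.91, 0.66]
negatives = [0.31, 0.45, 0.52, 0.70]
print(auc(positives, negatives))   # 0.9375
```

An AUC near 1 indicates the sensor separates positive and negative samples well; thresholds along the ROC curve then trade sensitivity against specificity.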
Abstract:
Multi-label classification (MLC) is the supervised learning problem where an instance may be associated with multiple labels. Modeling dependencies between labels allows MLC methods to improve their performance at the expense of an increased computational cost. In this paper we focus on the classifier chains (CC) approach for modeling dependencies. On the one hand, the original CC algorithm makes a greedy approximation, and is fast but tends to propagate errors down the chain. On the other hand, a recent Bayes-optimal method improves the performance, but is computationally intractable in practice. Here we present a novel double-Monte Carlo scheme (M2CC), both for finding a good chain sequence and performing efficient inference. The M2CC algorithm remains tractable for high-dimensional data sets and obtains the best overall accuracy, as shown on several real data sets with input dimension as high as 1449 and up to 103 labels.
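The Monte Carlo inference half of the scheme above can be sketched as sampling full label vectors down a chain and keeping the mode. The per-label conditional probabilities below are hand-written stand-ins for trained base classifiers, and the chain-sequence search of M2CC is omitted:

```python
import random
from collections import Counter

# Each chain link models P(y_j = 1 | x, y_1..y_{j-1}). These hand-written
# probabilities stand in for trained per-label classifiers (illustration).
def p_label(j, x, prev):
    base = 0.8 if x > 0 else 0.2           # effect of the input feature
    if prev and prev[-1] == 1:             # dependence on the previous label
        base = min(base + 0.15, 0.95)
    return base

def mc_chain_predict(x, n_labels=3, n_samples=500, seed=0):
    """Sample label vectors down the chain; return the most frequent one."""
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(n_samples):
        y = []
        for j in range(n_labels):
            y.append(1 if rng.random() < p_label(j, x, y) else 0)
        counts[tuple(y)] += 1
    return max(counts, key=counts.get)

print(mc_chain_predict(1.0))   # sampling concentrates on the mode (1, 1, 1)
```

Unlike greedy chain inference, which commits to one value per link and can propagate an early mistake, the sampled mode approximates the jointly most probable label vector at a cost that stays linear in the number of samples.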
Abstract:
The aim of this paper is to develop a probabilistic modeling framework for the segmentation of structures of interest from a collection of atlases. Given a subset of atlases registered to the target image for a particular Region of Interest (ROI), a statistical model of appearance and shape is computed for fusing the labels. Segmentations are obtained by minimizing an energy function associated with the proposed model, using a graph-cut technique. We test different label fusion methods on publicly available MR images of human brains.
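As a concrete baseline for the label-fusion step, a per-voxel majority vote across registered atlases can be sketched as follows; the paper's statistical appearance/shape model and graph-cut optimization are not reproduced here:

```python
import numpy as np

def majority_vote(label_maps):
    """Per-voxel most frequent label across registered atlas label maps
    (all arrays share the same shape; labels are small non-negative ints)."""
    stack = np.stack(label_maps)                 # (n_atlases, *image_shape)
    n_labels = int(stack.max()) + 1
    counts = np.stack([(stack == l).sum(axis=0) for l in range(n_labels)])
    return counts.argmax(axis=0)

# Three toy 2x2 atlas segmentations of the same target image.
atlases = [np.array([[0, 1], [1, 2]]),
           np.array([[0, 1], [2, 2]]),
           np.array([[0, 0], [1, 2]])]
fused = majority_vote(atlases)   # [[0, 1], [1, 2]]
```

Model-based fusion methods improve on this baseline by weighting each atlas's vote by how well its appearance matches the target around each voxel.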
Abstract:
Ecology has not only exposed environmental problems; it has confirmed the need for a new harmony among human beings themselves, and between them, nature and all the beings that inhabit it. We need a new contract governing our relations with Nature (Serres) and a new ethics for our lives (Guattari). Environmental ethics has given us a universal and supra-generational vision of the management of nature and, as a consequence, a new way to construct our 'second' nature, which is architecture. What is it, in essence, that this new ethics demands of architecture? This is a crucial moment to reconsider the aims of architecture, because the 'eco' is producing great changes. Does this post-ecological era imply a particular ethics, that is, one concerning its ends and means? Why, for what, for whom, and how should we make the architecture of our time? It is time to confront the discourse of eco-architecture critically, and even to rethink the very limits of architecture. The current development of environmental knowledge is essentially technical and utilitarian, but is the challenge only technical? Is the sum of the environmental, social, economic and cultural enough to define it? Are there clues that can give us the ethical dimension of this technical-empirical approach? Do we know what we are doing when we apply this knowledge? And, above all, what is the meaning of what we are doing? The proposed thesis can be summarized as follows: in accordance with our current knowledge of Nature, the architecture of our time must reconsider its ends and means, since environmental ethics is defining new objectives.
To ground and deepen this claim, the thesis analyzes what the relations between Ethics, Nature and Architecture (Fig. 1) are like today, which will provide the keys to the ethical criteria (as to ends and means) that must define the architecture of the age of ecology. ABSTRACT Ecology shows us not only environmental problems; it shows that we need a new balance and harmony between individuals, beings, communities and Nature. We need a new contract with Nature according to Serres, and a new Ethics for our lives according to Guattari. Environmental ethics has given us a universal and supra-generational vision of the management of our Nature and, as a consequence, a new way to construct our 'second' nature, which is architecture. What is essential for this new architecture that the new ethics demands? This is a critical moment to reconsider the object of architecture, because the 'eco' is making significant changes in it. Are there any specifically ethical concerns (ends and means) in the post-ecological era? Why, for what, for whom, how should we make architecture in our times? This is the time to approach the eco-architectural discourse critically and to question the current boundaries of architecture itself: where is eco-architecture going? The current development of environmental knowledge is essentially technical and utilitarian, but is its technical aspect the only challenge? Is the sum of environmental, social and economic aspects enough to define it? Are there any clues which can give an ethical sense to this technical-empirical approach? Do we know what we are doing when we apply this knowledge? And overall, what is the meaning of what we are doing? Exploring this subject, this thesis makes a statement: in accordance with the current knowledge of Nature, the architecture of our time must reconsider its ends and means, since environmental ethics is defining new objectives.
To support this, the thesis analyzes what the relationships between Ethics, Nature and Architecture (Fig. 53) are like nowadays; this will provide the clues as to which ethical criteria (ends and means) the architecture of an ecological era must define.
Abstract:
Bayesian network classifiers are widely used in machine learning because they intuitively represent causal relations. Multi-label classification problems require each instance to be assigned a subset of a defined set of h labels. This problem is equivalent to finding a multi-valued decision function that predicts a vector of h binary classes. In this paper we obtain the decision boundaries of two widely used Bayesian network approaches for building multi-label classifiers: Multi-label Bayesian network classifiers built using the binary relevance method and Bayesian network chain classifiers. We extend our previous single-label results to multi-label chain classifiers, and we prove that, as expected, chain classifiers provide a more expressive model than the binary relevance method.
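The expressiveness gap described above can be illustrated with toy decision functions: under binary relevance each label is a function of x alone, while a chain classifier can shift a label's decision boundary depending on an earlier label. The thresholds below are arbitrary illustrations, not learned models:

```python
# Binary relevance: each label depends on the input x only.
def br_predict(x):
    y1 = int(x > 0.5)
    y2 = int(x > 0.5)                  # cannot condition on y1
    return (y1, y2)

# Chain classifier: the second label also sees the first.
def chain_predict(x):
    y1 = int(x > 0.5)
    y2 = int((x > 0.2) != (y1 == 1))   # decision boundary moves with y1
    return (y1, y2)

print(br_predict(0.7))      # (1, 1)
print(chain_predict(0.7))   # (1, 0): y2 flips because y1 fired
```

No choice of independent per-label thresholds can reproduce the chain's prediction pattern here, which is the intuition behind chains being the more expressive model.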
Abstract:
Interneuron classification is an important and long-debated topic in neuroscience. A recent study provided a data set of digitally reconstructed interneurons classified by 42 leading neuroscientists according to a pragmatic classification scheme composed of five categorical variables, namely, of the interneuron type and four features of axonal morphology. From this data set we now learned a model which can classify interneurons, on the basis of their axonal morphometric parameters, into these five descriptive variables simultaneously. Because of differences in opinion among the neuroscientists, especially regarding neuronal type, for many interneurons we lacked a unique, agreed-upon classification, which we could use to guide model learning. Instead, we guided model learning with a probability distribution over the neuronal type and the axonal features, obtained, for each interneuron, from the neuroscientists’ classification choices. We conveniently encoded such probability distributions with Bayesian networks, calling them label Bayesian networks (LBNs), and developed a method to predict them. This method predicts an LBN by forming a probabilistic consensus among the LBNs of the interneurons most similar to the one being classified. We used 18 axonal morphometric parameters as predictor variables, 13 of which we introduce in this paper as quantitative counterparts to the categorical axonal features. We were able to accurately predict interneuronal LBNs. Furthermore, when extracting crisp (i.e., non-probabilistic) predictions from the predicted LBNs, our method outperformed related work on interneuron classification. Our results indicate that our method is adequate for multi-dimensional classification of interneurons with probabilistic labels.
Moreover, the introduced morphometric parameters are good predictors of interneuron type and the four features of axonal morphology and thus may serve as objective counterparts to the subjective, categorical axonal features.
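A simplified sketch of the consensus idea above: predict a probability distribution over classes for a new cell by averaging the distributions of its k most similar training cases. Euclidean distance over morphometric vectors and the tiny made-up data are assumptions for illustration; the paper's actual LBN machinery is richer:

```python
import numpy as np

def consensus_distribution(x, X_train, P_train, k=3):
    """Average the class distributions of the k nearest training cases.
    x: morphometric vector; X_train: (n, d) vectors; P_train: (n, c)
    per-case probability distributions whose rows sum to 1."""
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(dists)[:k]
    return P_train[nearest].mean(axis=0)

# Four training cells with expert-derived class distributions (made up).
X_train = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.1, 0.9]])
P_train = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]])
p = consensus_distribution(np.array([0.05, 0.0]), X_train, P_train, k=2)
# p averages the two nearest cases: approximately [0.85, 0.15]
```

A crisp prediction, as in the paper's evaluation, is then obtained by taking the argmax of the predicted distribution.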