949 results for Localization open issues
Abstract:
GABA-A receptors are chloride ion channels composed of five subunits that mediate fast synaptic and tonic inhibition in the mammalian brain. Nineteen different subunit isoforms have been identified, and the major receptor type in the adult mammalian brain consists of α1, β2, and γ2 subunits. GABA-A receptors are the target of numerous sedating and anxiolytic drugs such as benzodiazepines. The currently known endogenous ligands are GABA, neurosteroids and the endocannabinoid 2-arachidonoylglycerol (2-AG). The pharmacological properties of this chloride ion channel depend strictly on receptor subunit composition and arrangement. GABA-A receptors bind and are inhibited by epileptogenic agents such as picrotoxin and by cyclodiene insecticides such as dieldrin. We screened aromatic monovalent anions with five-fold symmetry for inhibition of GABA-A receptors. One of these anions, PCCP-, inhibited GABA-elicited currents with a potency comparable to that of picrotoxin. This inhibition showed all the characteristics of an open channel block. The GABA-A receptor ion channel is lined by residues from the M2 membrane-spanning segment. To identify pore residues involved in the interaction with the blocking molecule PCCP-, a mutation scan was performed, followed by analysis of the expressed mutant proteins using electrophysiological techniques. In a second project we characterised a light-switchable modulator of GABA-A receptors based on propofol; it was my responsibility to investigate the switching kinetics in patch clamp experiments. Since its discovery in 1980, propofol has become the most widely used intravenous general anaesthetic. It is commonly accepted that the anaesthesia induced by this unusually lipophilic drug results mostly from potentiation of GABA-induced currents. While GABA-A receptors respond to a variety of ligands, they are normally not sensitive to light. Such light sensitivity can be achieved indirectly by using modulators that can be optically switched between an active and an inactive form. We tested an azobenzene derivative of propofol in which an aryldiazene unit is directly coupled to the pharmacophore. This molecule was termed azopropofol (AP2). The effect of AP2 on Cl- currents was investigated with electrophysiological techniques using α1β2γ2 GABA-A receptors expressed in Xenopus oocytes and HEK cells. In a third project we investigated the functional role of GABA-A receptors in the liver and their possible involvement in cell proliferation. GABA-A receptors are also found in a wide range of peripheral tissues, including parts of the peripheral nervous system and non-neural tissues such as smooth muscle, the female reproductive system, the liver and several cancer tissues. However, their precise function in non-neuronal or cancerous cells is still unknown. To this end, we investigated the expression, localization and function of hepatocyte GABA-A receptors in model cell lines and in healthy and cancerous hepatocytes.
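A note on the potency comparison above: block potency is conventionally quantified by fitting concentration-inhibition data to a Hill equation; a generic form (with placeholder parameters, not values reported in this work) is

\[ \frac{I_{[B]}}{I_{\mathrm{ctrl}}} = \frac{1}{1 + \left([B]/\mathrm{IC}_{50}\right)^{n_H}} \]

where [B] is the blocker concentration, I_ctrl the control GABA-evoked current, IC50 the half-maximal inhibitory concentration, and n_H the Hill coefficient. Comparable potency of PCCP- and picrotoxin then means comparable IC50 values under matched recording conditions.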
Abstract:
Meditation is a self-induced and willfully initiated practice that alters the state of consciousness. The meditation practice of Zazen, like many other meditation practices, aims at disregarding intrusive thoughts while controlling body posture. It is an open monitoring meditation characterized by detached moment-to-moment awareness and reduced conceptual thinking and self-reference. Which brain areas differ in electric activity during Zazen compared to task-free resting? Since scalp electroencephalography (EEG) waveforms are reference-dependent, conclusions about the localization of active brain areas are ambiguous. Computing intracerebral source models from the scalp EEG data solves this problem. In the present study, we applied source modeling using low resolution brain electromagnetic tomography (LORETA) to 58-channel scalp EEG data recorded from 15 experienced Zen meditators during Zazen and no-task resting. Zazen compared to no-task resting showed increased alpha-1 and alpha-2 frequency activity in an exclusively right-lateralized cluster extending from prefrontal areas including the insula to parts of the somatosensory and motor cortices and temporal areas. Zazen also showed decreased alpha and beta-2 activity in the left angular gyrus and decreased beta-1 and beta-2 activity in a large bilateral posterior cluster comprising the visual cortex, the posterior cingulate cortex and the parietal cortex. The results include parts of the default mode network and suggest enhanced automatic memory and emotion processing, reduced conceptual thinking and self-reference on a less judgmental, i.e., more detached moment-to-moment basis during Zazen compared to no-task resting.
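For context, LORETA-style source modeling solves a regularized minimum-norm inverse problem; a common textbook formulation (a sketch of the general family, not necessarily the exact variant used in this study) is

\[ \hat{J} = \arg\min_{J} \; \lVert \Phi - K J \rVert^{2} + \alpha \lVert B W J \rVert^{2} \]

where Φ is the vector of scalp potentials, K the lead-field matrix, J the intracerebral current density, B a discrete spatial Laplacian, W a weighting matrix, and α a regularization parameter. The Laplacian penalty is what yields LORETA's characteristically smooth, low-resolution solutions, and it is this intracerebral model that removes the reference-dependence of the raw scalp waveforms.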
Abstract:
PURPOSE Hodgkin lymphoma (HL) is a highly curable disease, so reducing late complications and second malignancies has become increasingly important. Radiotherapy target paradigms are currently changing and radiotherapy techniques are evolving rapidly. DESIGN This overview reports to what extent target volume reduction in involved-node radiotherapy (INRT) and advanced techniques such as intensity-modulated radiotherapy (IMRT) and proton therapy, compared with involved-field radiotherapy (IFRT) and 3D radiotherapy (3D-RT), can reduce high doses to organs at risk (OAR), and examines the issues that remain open. RESULTS Although no comparison of all available techniques on identical patient datasets exists, clear patterns emerge. Advanced dose-calculation algorithms (e.g., convolution-superposition/Monte Carlo) should be used in mediastinal HL. INRT consistently reduces treated volumes compared with IFRT, with the exact amount depending on the INRT definition. With INRT, fewer patients are likely to benefit significantly from highly conformal techniques such as IMRT over 3D-RT with respect to high-dose exposure of OAR. The impact of the larger volumes treated with low doses by advanced techniques is unclear. The type of IMRT used (static/rotational) is of minor importance. All advanced photon techniques offer similar potential benefits and disadvantages; the degree of modulation should therefore be chosen based on individual treatment goals. Treatment in deep-inspiration breath hold is being evaluated. Protons theoretically provide both excellent high-dose conformality and reduced integral dose. CONCLUSION Further reduction of treated volumes most effectively reduces OAR dose, most likely without disadvantages provided the excellent control rates achieved at present are maintained. For both IFRT and INRT, the benefits of advanced radiotherapy techniques depend on the individual patient/target geometry. Their use should therefore be decided case by case with comparative treatment planning.
Abstract:
Cloudification of the Centralized Radio Access Network (C-RAN), in which signal processing runs on general-purpose processors inside virtual machines, has lately received significant attention. Because of the short deadlines in the LTE Frequency Division Duplex access method, processing-time fluctuations introduced by virtualization have a profound impact on C-RAN performance. This paper evaluates bottlenecks in the cloud performance of OpenAirInterface (OAI, an open-source software implementation of LTE), provides feasibility studies on C-RAN execution, and introduces a cloud architecture that significantly reduces the encountered execution problems. In typical cloud environments, the OAI processing deadlines cannot be guaranteed. Our proposed cloud architecture shows good characteristics for OAI cloud execution: in our setup, more than 99.5% of processed LTE subframes meet reasonable processing deadlines, close to the performance of a dedicated machine.
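To make the deadline statistic concrete, the following minimal sketch estimates the fraction of subframes meeting a processing budget; the 3 ms budget and the log-normal processing-time model are illustrative assumptions, not measurements from the paper.

    import math
    import random

    # Hypothetical per-subframe processing times in ms; virtualization jitter is
    # modeled as a log-normal spread around a 2 ms median (assumed, not measured).
    random.seed(0)
    times_ms = [random.lognormvariate(math.log(2.0), 0.15) for _ in range(100_000)]

    BUDGET_MS = 3.0  # assumed eNB-side budget within the LTE FDD HARQ timing loop
    hit_rate = sum(t <= BUDGET_MS for t in times_ms) / len(times_ms)
    print(f"subframes meeting the {BUDGET_MS} ms deadline: {hit_rate:.2%}")

Under these assumptions the hit rate lands in the high-99% range, which is the kind of figure the paper reports for its proposed architecture.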
Abstract:
Activation of Rho family small G proteins is thought to be a critical event in breast cancer development and metastatic progression. Rho protein activation is stimulated by a family of enzymes known as guanine nucleotide exchange factors (Rho GEFs). The neuroepithelioma transforming gene 1 (Net1) is a Rho GEF specific for the RhoA subfamily that is overexpressed in primary breast tumors and breast cancer cell lines. Net1 isoform expression is also required for migration and invasion of breast cancer cells in vitro. These data indicate that Net1 may be a critical regulator of metastatic progression in breast cancer. Net1 activity is negatively regulated by sequestration in the nucleus, and relocalization of Net1 outside the nucleus is required to stimulate RhoA activation, actin cytoskeletal reorganization, and oncogenic transformation. However, regulatory mechanisms controlling the extranuclear localization of Net1 have not been identified. In this study, we have addressed the regulation of Net1A isoform localization by Rac1. Specifically, co-expression of constitutively active Rac1 with Net1A stimulates the relocalization of Net1A from the nucleus to the plasma membrane in breast cancer cells and results in Net1A activation. Importantly, Net1A localization is also driven by endogenous Rac1 activity: Net1A relocalizes outside the nucleus in cells spreading on collagen, and when endogenous Rac1 expression is silenced by siRNA, Net1A remains nuclear in spreading cells. These data indicate that Rac1 controls the localization of the Net1A isoform and suggest a physiological role for Net1A in breast cancer cell adhesion and motility.
Abstract:
The proliferation of wireless sensor networks and the variety of envisioned applications associated with them have motivated the development of distributed algorithms for collaborative processing over networked systems. One application that has attracted the attention of researchers is target localization, in which the nodes of the network try to estimate the position of an unknown target that lies within the coverage area. Estimating the target's position from the received signal strength indicator (RSSI) is particularly challenging because of the nonlinear relationship between the measured signal and the true position of the target. Many of the existing approaches suffer either from high computational complexity (e.g., particle filters) or from lack of accuracy. Further, many of the proposed solutions are centralized, which makes their application to a sensor network questionable. Depending on the application at hand, and from a practical perspective, it can be convenient to find a balance between localization accuracy and complexity. In this direction, we approach the maximum likelihood location estimation problem by solving a suboptimal (and more tractable) problem. One of the main advantages of the proposed scheme is that it allows a decentralized implementation using distributed processing tools (e.g., consensus and convex optimization) and is therefore well suited to real sensor networks. If further accuracy is needed, an additional refinement step can be performed around the solution found. Under the assumption of independent noise among the nodes, such a local search can be done in a fully distributed way using a consensus-based distributed version of the Gauss-Newton method. Regardless of the underlying application or function of the sensor network, it is always necessary to have a mechanism for data reporting. While some approaches use a special kind of node (a sink node) for data harvesting and forwarding to the outside world, there are scenarios where such an approach is impractical or even impossible to deploy. Further, such sink nodes become a bottleneck in terms of traffic flow and power consumption. To overcome these issues, instead of using sink nodes for data reporting one can use collaborative beamforming techniques to forward the generated data directly to a base station or gateway to the outside world. In a distributed environment like a sensor network, nodes cooperate to form a virtual antenna array that can exploit the benefits of multi-antenna communications. In collaborative beamforming, nodes synchronize their phases so that their transmissions add constructively at the receiver. One of the inconveniences of collaborative beamforming techniques is that there is no control over the radiation pattern, since it is treated as a random quantity. This may cause interference to other coexisting systems and fast battery depletion at the nodes. Since energy efficiency is a major design issue, we consider the development of a distributed collaborative beamforming scheme that maximizes the network lifetime while meeting a quality of service (QoS) requirement at the receiver side. Using local information about battery status and channel conditions, we find distributed algorithms that converge to the optimal centralized beamformer.
While in the first part we consider only battery depletion due to communications beamforming, we then extend the model to account for more realistic scenarios by introducing an additional random energy consumption. It is shown how the new problem generalizes the original one and under which conditions it is easily solvable. By formulating the problem from an energy-efficiency perspective, the network's lifetime is significantly improved.
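To make the localization part concrete: under the common log-distance path-loss model, the RSSI measured at anchor node i for a target at position x is P_i = P_0 - 10·γ·log10(||x - s_i||/d_0) + n_i, and maximum-likelihood estimation under i.i.d. Gaussian noise reduces to nonlinear least squares. The sketch below runs a plain centralized Gauss-Newton refinement from a coarse initial guess; the thesis's contribution is a consensus-based distributed variant, which this sketch does not reproduce, and all parameter values are illustrative assumptions.

    import numpy as np

    # Illustrative path-loss parameters (assumptions, not values from the thesis).
    P0, GAMMA, D0, SIGMA = -40.0, 3.0, 1.0, 2.0
    anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
    x_true = np.array([6.0, 4.0])
    c = 10.0 * GAMMA / np.log(10.0)  # so the model reads P0 - c*ln(d/D0)

    rng = np.random.default_rng(1)
    d_true = np.linalg.norm(anchors - x_true, axis=1)
    z = P0 - c * np.log(d_true / D0) + rng.normal(0.0, SIGMA, len(anchors))

    # Centralized Gauss-Newton on the nonlinear least-squares ML objective.
    x = np.array([5.0, 5.0])  # coarse initial estimate (e.g., from the suboptimal step)
    for _ in range(20):
        d = np.linalg.norm(anchors - x, axis=1)    # distances to the anchors
        r = z - (P0 - c * np.log(d / D0))          # measurement residuals
        J = (c / d**2)[:, None] * (anchors - x)    # Jacobian of the model in x
        x = x + np.linalg.solve(J.T @ J, J.T @ r)  # Gauss-Newton update
    print("estimate:", x, "true:", x_true)

In the distributed setting described above, the sums J^T J and J^T r decompose into per-node terms, which is what makes a consensus-based implementation of exactly this update possible.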
Abstract:
This project aims to present Freedom of Information and its best-known variant, Open Source, covering in detail the many topics the idea encompasses. It is aimed at all users who want to learn first-hand how the idea of technological freedom began and how it is applied today: not only those who want to adopt it, but also those who already use it and need resources for new ideas. In this way, we also approach the idea of freedom in technology as it is currently being debated. The content is structured along the following branches: History, from its origins to the present; Economics, the advantages and disadvantages of this freedom; Law, legal problems at different levels; News, the latest developments in its various applications; Society, its acceptance or rejection by users and its influence on ethics, education and innovation; Applications, covering most of the best-known applications in each branch of Open Source.
Abstract:
Within the European Union, member states are setting up official data catalogues as entry points to access PSI (Public Sector Information). In this context, it is important to describe the metadata of these data portals, i.e., of the data catalogs, and to allow for interoperability among them. To tackle these issues, the Government Linked Data Working Group developed DCAT (Data Catalog Vocabulary), an RDF vocabulary for describing the metadata of data catalogs. This topic report analyzes the current use of the DCAT vocabulary in several European data catalogs and proposes recommendations to deal with inconsistent use of the metadata across countries. Enriching such metadata vocabularies with multilingual descriptions, as well as accounting for cultural divergences, is seen as a necessary step to guarantee interoperability and ensure wider adoption.
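As a minimal illustration of the kind of multilingual catalog metadata the report analyzes, the following Python sketch uses rdflib to emit a DCAT description with language-tagged titles; the URIs and labels are invented for the example.

    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import DCTERMS, RDF

    DCAT = Namespace("http://www.w3.org/ns/dcat#")

    g = Graph()
    g.bind("dcat", DCAT)
    g.bind("dct", DCTERMS)

    catalog = URIRef("http://example.org/catalog")         # hypothetical catalog URI
    dataset = URIRef("http://example.org/dataset/budget")  # hypothetical dataset URI

    g.add((catalog, RDF.type, DCAT.Catalog))
    # Language-tagged titles are the kind of multilingual enrichment discussed above.
    g.add((catalog, DCTERMS.title, Literal("National Open Data Catalog", lang="en")))
    g.add((catalog, DCTERMS.title, Literal("Catalogo Nazionale dei Dati Aperti", lang="it")))
    g.add((catalog, DCAT.dataset, dataset))

    g.add((dataset, RDF.type, DCAT.Dataset))
    g.add((dataset, DCTERMS.title, Literal("Annual budget", lang="en")))

    print(g.serialize(format="turtle"))

Consistent use of properties such as dct:title and dcat:dataset across national catalogs is precisely what enables the cross-country interoperability the report calls for.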
Abstract:
The Internet of Things makes use of a huge variety of technologies, at very different levels, that support one another to accomplish goals previously regarded as unthinkable in terms of ubiquity or scalability. If the Internet of Things is expected to interconnect everyday devices and appliances and enable communications between them, a broad range of new services, applications and products can be foreseen. For example, sensors are widely used in monitoring to measure environmental parameters (temperature, light, chemical agents, etc.), but obtaining readings at the exact physical point of interest, or of the exact parameter wanted, can be a clumsy, time-consuming task that is not easily adapted to new requirements. To tackle this challenge, we present here in detail a system that can monitor any conceivable environment and that, in addition, monitors the status of its own components and heals some of the most common issues of a wireless sensor network, covering all the layers that give it shape in terms of devices, communications and services.
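A minimal sketch of the self-monitoring idea, assuming a simple heartbeat scheme (the timeout value, node identifiers and recovery action are invented for illustration; the actual system spans devices, communications and services):

    import time

    HEARTBEAT_TIMEOUT_S = 30.0        # assumed liveness window, not from the paper
    last_seen: dict[str, float] = {}  # node id -> time of last heartbeat

    def on_heartbeat(node_id: str) -> None:
        # Called whenever a sensor node reports in over the network.
        last_seen[node_id] = time.monotonic()

    def silent_nodes() -> list[str]:
        # Nodes that missed their heartbeat window are candidates for healing.
        now = time.monotonic()
        return [n for n, t in last_seen.items() if now - t > HEARTBEAT_TIMEOUT_S]

    def heal(node_id: str) -> None:
        # Placeholder recovery: e.g., re-route traffic around the node or
        # schedule a remote reset; the concrete action is system-specific.
        print(f"node {node_id} unresponsive; triggering recovery")

    for node in silent_nodes():
        heal(node)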
Abstract:
The W3C Best Practices for Multilingual Linked Open Data community group was born one year ago during the last MLW workshop in Rome. It continues to lead the effort of a large community towards a shared view of the issues caused by multilingualism on the Web of Data and of their possible solutions. Despite our initial optimism, we found identifying best practices for ML-LOD a difficult task, requiring a deep understanding of the Web of Data in its multilingual dimension and in its practical problems. In this talk we review the progress of the group so far, mainly in the identification and analysis of topics, use cases and design patterns, as well as the future challenges.
Abstract:
By using reverse transcription-coupled PCR on rat anterior pituitary RNA, we isolated a 285-bp cDNA coding for a novel subtilisin/kexin-like protein convertase (PC), called rat (r) PC7. By screening rat spleen and PC12 cell lambda gt11 cDNA libraries, we obtained a composite 3.5-kb full-length cDNA sequence of rPC7. The open reading frame codes for a prepro-PC with a 36-amino acid signal peptide, a 104-amino acid prosegment ending with a cleavable RAKR sequence, and a 747-amino acid type I membrane-bound glycoprotein, representing the mature form of this serine proteinase. Phylogenetic analysis suggests that PC7 represents the most divergent enzyme of the mammalian convertase family and that it is the closest member to the yeast convertases krp and kexin. Northern blot analyses demonstrated a widespread expression with the richest source of rPC7 mRNA being the colon and lymphoid-associated tissues. In situ hybridization revealed a distinctive tissue distribution that sometimes overlaps with that of furin, suggesting that PC7 has widespread proteolytic functions. The gene for PC7 (Pcsk7) was mapped to mouse chromosome 9 by linkage analysis of an interspecific backcross DNA panel.
Open business intelligence: on the importance of data quality awareness in user-friendly data mining
Abstract:
Citizens demand more and more data for making decisions in their daily lives. Mechanisms that allow citizens to understand and analyze linked open data (LOD) in a user-friendly manner are therefore highly required. To this aim, the concept of Open Business Intelligence (OpenBI) is introduced in this position paper. OpenBI enables non-expert users to (i) analyze and visualize LOD, generating actionable information by means of reporting, OLAP analysis, dashboards or data mining; and (ii) share the newly acquired information as LOD to be reused by anyone. One of the most challenging issues of OpenBI is data mining, since non-experts (such as citizens) need guidance during preprocessing and application of mining algorithms, owing to the complexity of the mining process and the low quality of the data sources. This is even worse when dealing with LOD, not only because of the different kinds of links among data, but also because of its high dimensionality. As a consequence, in this position paper we advocate that data mining for OpenBI requires data quality-aware mechanisms for guiding non-expert users in obtaining and sharing the most reliable knowledge from the available LOD.
Abstract:
A wealth of open educational resources (OER) focused on green topics is currently available from a variety of sources, including learning portals, digital repositories and web sites. In most cases, however, these resources are not easily accessible and retrievable, and additional issues further complicate their use. This paper presents an overview of a number of portals hosting OER, as well as a number of "green" thematic portals that provide access to green OER. It also discusses the case of a new collection that aims to support and populate existing green collections and learning portals, providing information on aspects such as quality assurance, collection and curation policies, and the workflow and tools for both the content and the metadata records that apply to the collection. Two case studies of the integration of this new collection into existing learning portals are also presented.