936 results for Digital communication systems
Abstract:
Multiple physiological systems regulate the electric communication signal of the weakly electric gymnotiform fish Brachyhypopomus pinnicaudatus. Fish were injected with neuroendocrine probes, which identified pharmacologically relevant serotonin (5-HT) receptors similar to the mammalian 5-HT1AR and 5-HT2AR. Peptide hormones of the hypothalamic-pituitary-adrenal/interrenal axis also augment the electric waveform. These results indicate that the central serotonergic system interacts with the hypothalamic-pituitary-interrenal system to regulate communication signals in this species. The same neuroendocrine probes were tested in females before and after introducing androgens, to examine the relationship between sex steroid hormones, the serotonergic system, melanocortin peptides, and electric organ discharge (EOD) modulations. Androgens increased female B. pinnicaudatus responsiveness to other pharmacological challenges, particularly to the melanocortin peptide adrenocorticotropic hormone (ACTH). A forced social challenge paradigm was administered to determine whether androgens are responsible for controlling the signal modulations these fish exhibit when they encounter conspecifics. Males and females responded similarly to this social challenge construct; however, introducing androgens caused implanted females to produce more exaggerated responses. These results confirm that androgens enhance an individual's capacity to produce an exaggerated response to challenge, although another, as yet unidentified, factor appears to regulate sex-specific behaviors in this species. These results suggest that the rapid electric waveform modulations B. pinnicaudatus produces in response to conspecifics are situation-specific and controlled by activation of different serotonin receptor types and the subsequent effect on the release of pituitary hormones.
Abstract:
Today, databases have become an integral part of information systems. In the past two decades, we have seen different database systems being developed independently and used in different application domains. Today's interconnected networks and advanced applications, such as data warehousing, data mining & knowledge discovery and intelligent data access to information on the Web, have created a need for integrated access to such heterogeneous, autonomous, distributed database systems. Heterogeneous/multidatabase research has focused on this issue, resulting in many different approaches. However, no single, generally accepted methodology has emerged in academia or industry that provides ubiquitous intelligent data access to heterogeneous, autonomous, distributed information sources. This thesis describes a heterogeneous database system being developed at the High-Performance Database Research Center (HPDRC). A major impediment to ubiquitous deployment of multidatabase technology is the difficulty in resolving semantic heterogeneity, that is, identifying related information sources for integration and querying purposes. Our approach considers the semantics of the meta-data constructs in resolving this issue. The major contributions of the thesis work include: (i.) providing a scalable, easy-to-implement architecture for developing a heterogeneous multidatabase system, utilizing the Semantic Binary Object-oriented Data Model (Sem-ODM) and the Semantic SQL query language to capture the semantics of the data sources being integrated and to provide an easy-to-use query facility; (ii.) a methodology for semantic heterogeneity resolution that investigates the extents of the meta-data constructs of component schemas; this methodology is shown to be correct, complete and unambiguous; (iii.) a semi-automated technique for identifying semantic relations, which is the basis of semantic knowledge for integration and querying, using shared ontologies for context mediation; (iv.) resolutions for schematic conflicts and a language for defining global views from a set of component Sem-ODM schemas; (v.) design of a knowledge base for storing and manipulating meta-data and knowledge acquired during the integration process; this knowledge base acts as the interface between the integration and query processing modules; (vi.) techniques for Semantic SQL query processing and optimization based on semantic knowledge in a heterogeneous database environment; and (vii.) a framework for intelligent computing and communication on the Internet applying the concepts of our work.
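The context-mediation idea in contribution (iii.) can be illustrated with a toy sketch. All names below are hypothetical and do not reflect HPDRC's actual system: two component schemas map their attributes to concepts in a shared ontology, and attributes that map to the same concept are proposed as candidate semantic relations for integration.

```python
# Toy sketch of ontology-based semantic matching (all names hypothetical):
# each component schema maps local attribute names to shared ontology
# concepts; attributes sharing a concept are candidate semantic relations.
schema_a = {"emp_name": "Person.name", "emp_sal": "Person.salary"}
schema_b = {"worker": "Person.name", "dept": "Org.unit"}

def propose_relations(a, b):
    """Pair up attributes of two schemas that share an ontology concept."""
    by_concept = {}
    for attr, concept in a.items():
        by_concept.setdefault(concept, []).append(("A", attr))
    for attr, concept in b.items():
        by_concept.setdefault(concept, []).append(("B", attr))
    # Keep only concepts referenced by more than one attribute.
    return [attrs for attrs in by_concept.values() if len(attrs) > 1]

relations = propose_relations(schema_a, schema_b)  # one match: emp_name <-> worker
```

A real mediator would, of course, also score inexact matches and resolve structural conflicts; this sketch only shows the shared-ontology lookup that anchors the process.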
Abstract:
Brain-computer interfaces (BCI) have the potential to restore communication or control abilities in individuals with severe neuromuscular limitations, such as those with amyotrophic lateral sclerosis (ALS). The role of a BCI is to extract and decode relevant information that conveys a user's intent directly from brain electrophysiological signals and translate this information into executable commands to control external devices. However, the BCI decision-making process is error-prone due to noisy electrophysiological data, representing the classic problem of efficiently transmitting and receiving information via a noisy communication channel.
This research focuses on P300-based BCIs, which rely predominantly on event-related potentials (ERP) that are elicited as a function of a user's uncertainty regarding stimulus events, in either an acoustic or a visual oddball recognition task. The P300-based BCI system enables users to communicate messages from a set of choices by selecting a target character or icon that conveys a desired intent or action. P300-based BCIs have been widely researched as a communication alternative, especially for individuals with ALS, who represent a target BCI user population. For the P300-based BCI, repeated data measurements are required to enhance the low signal-to-noise ratio of the elicited ERPs embedded in electroencephalography (EEG) data, in order to improve the accuracy of the target character estimation process. As a result, BCIs are relatively slow when compared to other commercial assistive communication devices, and this limits BCI adoption by the target user population. The goal of this research is to develop algorithms that take into account the physical limitations of the target BCI population to improve the efficiency of ERP-based spellers for real-world communication.
In this work, it is hypothesised that building adaptive capabilities into the BCI framework can potentially give the BCI system the flexibility to improve performance by adjusting system parameters in response to changing user inputs. The research in this work addresses three potential areas for improvement within the P300 speller framework: information optimisation, target character estimation and error correction. The visual interface and its operation control the method by which the ERPs are elicited through the presentation of stimulus events. The parameters of the stimulus presentation paradigm can be modified to modulate and enhance the elicited ERPs. A new stimulus presentation paradigm is developed in order to maximise the information content that is presented to the user by tuning stimulus paradigm parameters to positively affect performance. Internally, the BCI system determines the amount of data to collect and the method by which these data are processed to estimate the user's target character. Algorithms that exploit language information are developed to enhance the target character estimation process and to correct erroneous BCI selections. In addition, a new model-based method to predict BCI performance is developed, an approach which is independent of stimulus presentation paradigm and accounts for dynamic data collection. The studies presented in this work provide evidence that the proposed methods for incorporating adaptive strategies in the three areas have the potential to significantly improve BCI communication rates, and the proposed method for predicting BCI performance provides a reliable means to pre-assess BCI performance without extensive online testing.
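The data-collection trade-off described above can be illustrated numerically. The sketch below is not the dissertation's algorithm; it is a minimal simulation (all names and values are illustrative) of why repeated measurements are needed: averaging N noisy trials of a fixed ERP template shrinks the noise by roughly a factor of sqrt(N), at the cost of N times the collection time.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: an idealized P300-like deflection buried in noise
# whose single-trial standard deviation dwarfs the signal amplitude.
template = np.sin(np.linspace(0.0, np.pi, 200))  # idealized ERP waveform
noise_sd = 3.0                                   # single-trial EEG noise level

def average_trials(n_trials: int) -> np.ndarray:
    """Simulate n_trials noisy epochs of the template and return their mean."""
    trials = template + rng.normal(0.0, noise_sd, size=(n_trials, template.size))
    return trials.mean(axis=0)

def rms(x: np.ndarray) -> float:
    """Root-mean-square of a residual waveform."""
    return float(np.sqrt(np.mean(x ** 2)))

err_1 = rms(average_trials(1) - template)    # roughly noise_sd
err_64 = rms(average_trials(64) - template)  # roughly noise_sd / 8
```

The residual error after 64 trials is about one eighth of the single-trial error, which is exactly the sqrt(N) averaging gain that forces the speed/accuracy trade-off dynamic data collection tries to manage.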
Abstract:
This paper focuses on the ties between social and digital inequalities among Argentinean youth. It uses a qualitative approach to explore different aspects of the everyday lives of adolescents, such as sociability, leisure time and family use of Information and Communication Technologies (ICTs), in order to assess the impact of the Connecting Equality Program (Programa Conectar Igualdad, PCI) on reducing digital inequalities and fostering social inclusion. What were the existing conditions of access for students and their families when the PCI was first implemented? What influence does the implementation of the PCI have on the individual, family and scholastic appropriation of ICTs? How does the use of computers and the Internet vary among youth? Has this large-scale incorporation of netbooks changed schools, and especially homes and free time, in any way? Does the appropriation of ICTs through student participation in the PCI contribute to material and symbolic social inclusion? In order to answer these questions, we compare the processes of ICT appropriation among lower and middle class adolescents, focusing on the distinctive uses and meanings assigned to computers and the Internet by boys and girls in their daily lives. For this purpose we analyze data collected through semi-structured interviews in two schools in Greater La Plata, Argentina during 2012. The main findings show that in terms of access, skills and types of use, the implementation of the PCI has had a positive impact among lower class youth, guaranteeing access to their first computers and promoting the sharing of knowledge and digital skills with family members. Moreover, evidence of more diverse and intense use of ICTs among lower class students reveals the development of digital skills related to educational activities.
Finally, in terms of sociability, having a personal netbook enables access to information and cultural goods which are very significant in generating ties and strengthening identities and social integration.
Abstract:
This keynote presentation will report some of our research work and experience on the development and applications of relevant methods, models, systems and simulation techniques in support of different types and various levels of decision making for business, management and engineering. In particular, the following topics will be covered:
- Modelling, multi-agent-based simulation and analysis of the allocation management of carbon dioxide emission permits in China (Nanfeng Liu & Shuliang Li)
- Agent-based simulation of the dynamic evolution of enterprise carbon assets (Yin Zeng & Shuliang Li)
- A framework & system for extracting and representing project knowledge contexts using topic models and dynamic knowledge maps: a big data perspective (Jin Xu, Zheng Li, Shuliang Li & Yanyan Zhang)
- Open innovation: intelligent model, social media & complex adaptive system simulation (Shuliang Li & Jim Zheng Li)
- A framework, model and software prototype for modelling and simulation of deshopping behaviour and how companies respond (Shawkat Rahman & Shuliang Li)
- Integrating multiple agents, simulation, knowledge bases and fuzzy logic for international marketing decision making (Shuliang Li & Jim Zheng Li)
- A Web-based hybrid intelligent system for combined conventional, digital, mobile, social media and mobile marketing strategy formulation (Shuliang Li & Jim Zheng Li)
- A hybrid intelligent model for Web & social media dynamics, and evolutionary and adaptive branding (Shuliang Li)
- A hybrid paradigm for modelling, simulation and analysis of brand virality in social media (Shuliang Li & Jim Zheng Li)
- Network configuration management: attack paradigms and architectures for computer network survivability (Tero Karvinen & Shuliang Li)
Abstract:
During the development of a new treatment space for the UK emergency ambulance, participatory observations with front-line clinicians revealed the need for an integrated patient monitoring, communication and navigation system. The research identified the different information touch-points and requirements through modes-of-use analysis, a day-in-the-life study and simulation workshops with clinicians. Emergency scenarios and role-play with paramedics identified five distinct ambulance modes of use. Information flow diagrams were created and checked by paramedics, and digital User Interface (UI) wireframes were developed and evaluated by clinicians during clinical evaluations. Feedback from clinicians further defined the UI design specification, leading to a final design proposal. This research was a further development of the 2007 EPSRC-funded “Smart Pods” project. The resulting interactive prototype was co-designed in collaboration with ambulance crews and provides a vision of what could be achieved by integrating well-proven IT technologies and protocols into a package relevant to the emergency medicine field. The system has been reviewed by over 40 ambulance crews and is part of a newly co-designed ambulance treatment space.
Abstract:
In today’s big data world, data is being produced in massive volumes, at great velocity and from a variety of different sources such as mobile devices, sensors, a plethora of small devices hooked to the Internet (the Internet of Things), social networks, communication networks and many others. Interactive querying and large-scale analytics are being increasingly used to derive value out of this big data. A large portion of this data is being stored and processed in the Cloud due to the several advantages the Cloud provides, such as scalability, elasticity, availability, low cost of ownership and the overall economies of scale. There is thus a growing need for large-scale cloud-based data management systems that can support real-time ingest, storage and processing of large volumes of heterogeneous data. However, in the pay-as-you-go Cloud environment, the cost of analytics can grow linearly with the time and resources required. Reducing the cost of data analytics in the Cloud thus remains a primary challenge. In my dissertation research, I have focused on building efficient and cost-effective cloud-based data management systems for different application domains that are predominant in cloud computing environments. In the first part of my dissertation, I address the problem of reducing the cost of transactional workloads on relational databases to support database-as-a-service in the Cloud. The primary challenges in supporting such workloads include choosing how to partition the data across a large number of machines, minimizing the number of distributed transactions, providing high data availability, and tolerating failures gracefully.
I have designed, built and evaluated SWORD, an end-to-end scalable online transaction processing system that utilizes workload-aware data placement and replication to minimize the number of distributed transactions, and that incorporates a suite of novel techniques to significantly reduce the overheads incurred both during the initial placement of data and during query execution at runtime. In the second part of my dissertation, I focus on sampling-based progressive analytics as a means to reduce the cost of data analytics in the relational domain. Sampling has been traditionally used by data scientists to get progressive answers to complex analytical tasks over large volumes of data. Typically, this involves manually extracting samples of increasing data size (progressive samples) for exploratory querying. This provides the data scientists with user control, repeatable semantics, and result provenance. However, such solutions result in tedious workflows that preclude the reuse of work across samples. On the other hand, existing approximate query processing systems report early results, but do not offer the above benefits for complex ad-hoc queries. I propose a new progressive data-parallel computation framework, NOW!, that provides support for progressive analytics over big data. In particular, NOW! enables progressive relational (SQL) query support in the Cloud using unique progress semantics that allow efficient and deterministic query processing over samples, providing meaningful early results and provenance to data scientists. NOW! enables the provision of early results using significantly fewer resources, thereby enabling a substantial reduction in the cost incurred during such analytics. Finally, I propose NSCALE, a system for efficient and cost-effective complex analytics on large-scale graph-structured data in the Cloud.
The system is based on the key observation that a wide range of complex analysis tasks over graph data require processing and reasoning about a large number of multi-hop neighborhoods or subgraphs in the graph; examples include ego network analysis, motif counting in biological networks, finding social circles in social networks, personalized recommendations, link prediction, etc. These tasks are not well served by existing vertex-centric graph processing frameworks, whose computation and execution models limit the user program to directly accessing the state of a single vertex, resulting in high execution overheads. Further, the lack of support for extracting the relevant portions of the graph that are of interest to an analysis task and loading them onto distributed memory leads to poor scalability. NSCALE allows users to write programs at the level of neighborhoods or subgraphs rather than at the level of vertices, and to declaratively specify the subgraphs of interest. It enables the efficient distributed execution of these neighborhood-centric complex analysis tasks over large-scale graphs, while minimizing resource consumption and communication cost, thereby substantially reducing the overall cost of graph data analytics in the Cloud. The results of our extensive experimental evaluation of these prototypes with several real-world data sets and applications validate the effectiveness of our techniques, which provide orders-of-magnitude reductions in the overheads of distributed data querying and analysis in the Cloud.
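The neighborhood-centric idea can be made concrete with a toy sketch. This is not NSCALE's API (the function and graph below are hypothetical); it simply shows the kind of k-hop neighborhood ("ego network") extraction such tasks reason about, here via breadth-first search.

```python
from collections import deque

def k_hop_neighborhood(adj, source, k):
    """Return the set of vertices within k hops of source in graph adj
    (adjacency lists as a dict: vertex -> list of neighbors)."""
    seen = {source}
    frontier = deque([(source, 0)])
    while frontier:
        v, depth = frontier.popleft()
        if depth == k:
            continue  # do not expand beyond k hops
        for w in adj.get(v, ()):
            if w not in seen:
                seen.add(w)
                frontier.append((w, depth + 1))
    return seen

# Toy social graph for illustration.
graph = {
    "a": ["b", "c"],
    "b": ["a", "d"],
    "c": ["a"],
    "d": ["b", "e"],
    "e": ["d"],
}
ego_1 = k_hop_neighborhood(graph, "a", 1)  # a plus its direct neighbors
ego_2 = k_hop_neighborhood(graph, "a", 2)  # additionally pulls in d
```

A vertex-centric framework would force this traversal to be expressed as many rounds of per-vertex message passing; letting the user program operate on the extracted subgraph directly is the design point the abstract describes.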
Abstract:
Theories of sparse signal representation, wherein a signal is decomposed as the sum of a small number of constituent elements, play increasing roles in both mathematical signal processing and neuroscience. This happens despite the differences between signal models in the two domains. After reviewing preliminary material on sparse signal models, I use work on compressed sensing for the electron tomography of biological structures as a target for exploring the efficacy of sparse signal reconstruction in a challenging application domain. My research in this area addresses a topic of keen interest to the biological microscopy community, and has resulted in the development of tomographic reconstruction software which is competitive with the state of the art in its field. Moving from the linear signal domain into the nonlinear dynamics of neural encoding, I explain the sparse coding hypothesis in neuroscience and its relationship with olfaction in locusts. I implement a numerical ODE model of the activity of neural populations responsible for sparse odor coding in locusts as part of a project involving offset spiking in the Kenyon cells. I also explain the validation procedures we have devised to help assess the model's similarity to the biology. The thesis concludes with the development of a new, simplified model of locust olfactory network activity, which seeks with some success to explain statistical properties of the sparse coding processes carried out in the network.
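The sparse-representation setting reviewed above can be illustrated with a small numerical sketch. This is not the thesis's tomographic reconstruction software; it is a generic compressed-sensing toy (all dimensions and names are illustrative): a k-sparse vector is recovered from fewer random linear measurements than unknowns using orthogonal matching pursuit, a standard greedy solver.

```python
import numpy as np

rng = np.random.default_rng(1)

# Problem sizes (illustrative): n unknowns, m < n measurements, k nonzeros.
n, m, k = 50, 25, 3
A = rng.normal(size=(m, n)) / np.sqrt(m)  # random Gaussian sensing matrix
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.choice([-1.0, 1.0], size=k) * rng.uniform(1.0, 2.0, size=k)
y = A @ x_true                            # noiseless measurements

def omp(A, y, k):
    """Greedy orthogonal matching pursuit for a k-sparse solution of A x = y."""
    residual, chosen = y.copy(), []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        chosen.append(int(np.argmax(np.abs(A.T @ residual))))
        # Re-fit the measurements on the chosen columns (least squares).
        coef, *_ = np.linalg.lstsq(A[:, chosen], y, rcond=None)
        residual = y - A[:, chosen] @ coef
    x = np.zeros(A.shape[1])
    x[chosen] = coef
    return x

x_hat = omp(A, y, k)
recovery_error = float(np.linalg.norm(x_hat - x_true))
```

With noiseless measurements and a well-conditioned random matrix, the 3-sparse signal is recovered essentially exactly from half as many measurements as unknowns, which is the core promise compressed sensing brings to dose-limited electron tomography.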
Abstract:
Liquid crystals (LCs) have revolutionized display and communication technologies. Doping of LCs with inorganic nanoparticles such as carbon nanotubes, gold nanoparticles and ferroelectric nanoparticles has garnered the interest of the research community, as these dopants aid in improving electro-optic performance. In this thesis, we examine a hybrid nanocomposite comprising 5CB liquid crystal and block copolymer (BCP) functionalized barium titanate ferroelectric nanoparticles. This hybrid system exhibits a giant soft-memory effect. Here, the spontaneous polarization of the ferroelectric nanoparticles couples synergistically with the radially aligned BCP chains to create nanoscopic domains that can be rotated electromechanically and locked in space even after the removal of the applied electric field. The resulting non-volatile memory is several times larger than that of the non-functionalized sample and provides insight into the role of non-covalent polymer functionalization. We also present the latest results from the dielectric and spectroscopic study of field-assisted alignment of gold nanorods.
Abstract:
Optical waveguides have shown promising results for use within printed circuit boards. These optical waveguides have higher bandwidth than traditional copper transmission systems and are immune to electromagnetic interference. Design parameters for these optical waveguides are needed to ensure an optimal link budget. Modeling and simulation methods are used to determine the optimal design parameters needed in designing the waveguides. As a result, the optical structures necessary for incorporating optical waveguides into printed circuit boards are designed and optimized. Embedded siloxane polymer waveguides are investigated for their use in optical printed circuit boards. This material was chosen because it has low absorption and high temperature stability, and can be deposited using common processing techniques. Two sizes of waveguides are investigated: 50 µm multimode and 4–9 µm single-mode waveguides. A beam propagation method is developed for simulating the multimode and single-mode waveguide parameters. The attenuation of the simulated multimode waveguides matches the attenuation of fabricated waveguides with a root-mean-square error of 0.192 dB. Using the same process as for the multimode waveguides, the parameters needed to ensure a low link loss are found for single-mode waveguides, including maximum size, minimum cladding thickness, minimum waveguide separation, and minimum bend radius. To couple light out of plane to a transmitter or receiver, a structure such as a vertical interconnect assembly (VIA) is required. For multimode waveguides, the optimal placement of a total-internal-reflection mirror can be found without prior knowledge of the waveguide length. The optimal placement is found to be either 60 µm or 150 µm away from the end of the waveguide, depending on which metric a designer wants to optimize: the average output power, the output power variance, or the maximum possible power loss.
For single-mode waveguides, a volume grating coupler is designed to couple light from a silicon waveguide to a polymer single-mode waveguide. A focusing grating coupler is compared to a perpendicular grating coupler that is focused by a micro-molded lens. The focusing grating coupler had an optical loss of more than 14 dB, while the grating coupler with a lens had an optical loss of 6.26 dB.
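Because losses expressed in dB simply add along an optical link, the practical difference between the two couplers is easy to quantify. The sketch below uses the two loss figures reported in the abstract; the launch power and single-element budget structure are assumptions made for illustration.

```python
def output_dbm(input_dbm: float, losses_db: list[float]) -> float:
    """Power remaining after a chain of lossy elements; dB losses add."""
    return input_dbm - sum(losses_db)

# Loss figures from the abstract (magnitudes, in dB).
focusing_loss_db = 14.0  # focusing grating coupler: over 14 dB of loss
lensed_loss_db = 6.26    # grating coupler with micro-molded lens

p_in = 0.0  # assumed 0 dBm launch power for illustration
p_focusing = output_dbm(p_in, [focusing_loss_db])
p_lensed = output_dbm(p_in, [lensed_loss_db])

# The lensed coupler delivers ~7.74 dB more power, nearly a 6x linear gain.
advantage_db = p_lensed - p_focusing
advantage_linear = 10 ** (advantage_db / 10)
```

In a full link budget the coupler loss would be summed with propagation, bend and mirror losses against the receiver sensitivity, but the additive arithmetic is the same.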
Abstract:
My dissertation emphasizes a cognitive account of multimodality that explicitly integrates experiential knowledge work into the rhetorical pedagogy that informs so many composition and technical communication programs. In these disciplines, multimodality is widely conceived in terms of what Gunther Kress calls “social semiotic” modes of communication shaped primarily by culture. In the cognitive and neurolinguistic theories of Vittorio Gallese and George Lakoff, however, multimodality is described as a key characteristic of our bodies’ sensory-motor systems, which link perception to action and action to meaning, grounding all communicative acts in knowledge shaped through body-engaged experience. I argue that this “situated” account of cognition – which closely approximates Maurice Merleau-Ponty’s phenomenology of perception, a major framework for my study – has pedagogical precedence in the mimetic pedagogy that informed ancient Sophistic rhetorical training, and I reveal that training’s multimodal dimensions through a phenomenological exegesis of the concept of mimesis. Plato’s denigration of the mimetic tradition and his elevation of conceptual contemplation through reason, out of which developed the classic Cartesian separation of mind from body, resulted in a general degradation of experiential knowledge in Western education. But with the recent introduction into college classrooms of digital technologies and multimedia communication tools, renewed emphasis is being placed on the “hands-on” nature of inventive and productive praxis, necessitating a revision of methods of instruction and assessment that have traditionally privileged the acquisition of conceptual over experiential knowledge.
The model of multimodality I construct from Merleau-Ponty’s phenomenology, ancient Sophistic rhetorical pedagogy, and current neuroscientific accounts of situated cognition insists on recognizing the significant role knowledges we acquire experientially play in our reading and writing, speaking and listening, discerning and designing practices.