927 results for emergent protocols
Abstract:
Within academic institutions, writing centers are uniquely situated, socially rich sites for exploring learning and literacy. I examine the work of the Michigan Tech Writing Center's UN 1002 World Cultures study teams primarily because student participants and Writing Center coaches are actively engaged in structuring their own learning and meaning-making processes. My research reveals that learning is closely linked to identity formation and that leading the teams is an important component of the coaches' educational experiences. I argue that supporting this type of learning requires an expanded understanding of literacy and significant changes to how learning environments are conceptualized and developed. This ethnographic study draws on data collected from recordings and observations of one semester of team sessions, my own experiences as a team coach and UN 1002 teaching assistant, and interviews with Center coaches prior to their graduation. I argue that traditional forms of assessment and analysis emerging from individualized instruction models of learning cannot fully account for the dense configurations of social interactions identified in the Center's program. Instead, I view the Center as an open system and employ social theories of learning and literacy to uncover how the negotiation of meaning in one context influences and is influenced by structures and interactions within as well as beyond its boundaries. I focus on the program design, its enactment in practice, and how engagement in this type of writing center work influences coaches' learning trajectories. I conclude that, viewed as participation in a community of practice, the learning theory informing the program design supports identity formation, a key aspect of learning as argued by Etienne Wenger (1998). The findings of this study challenge misconceptions of peer learning in both writing centers and higher education that relegate peer tutoring to the role of support for individualized models of learning. Instead, this dissertation calls for consideration of new designs that incorporate peer learning as an integral component. Designing learning contexts that cultivate and support the formation of new identities is complex, involves a flexible and opportunistic design structure, and requires the availability of multiple forms of participation and connections across contexts.
Abstract:
This paper analyzes the implementation of new technologies in network industries through the development of a suitable regulatory scheme. The analysis focuses on Smart Grid (SG) technologies, which, among other benefits, could save operational costs and reduce the need for further conventional investments in the grid. In spite of the benefits that may result from their implementation, the adoption of SGs by network operators can be hampered by the uncertainties surrounding their actual performance. A decision model has been developed to assess firms' incentives to invest in "smart" technologies under different regulatory schemes. The model also enables testing the impact of uncertainties on the reduction of operational costs and of conventional investments. Under certain circumstances, it may be justified to support the development and early deployment of emerging innovations that have a high potential to improve the efficiency of the electricity system, but whose adoption faces many uncertainties.
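The abstract does not reproduce the decision model itself; the sketch below is only a minimal Monte Carlo illustration of the underlying idea, assuming a stylized comparison between a cost-plus scheme (where opex savings are largely passed through to consumers) and an incentive scheme (where the operator retains a larger share of the savings). All names and parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

def expected_npv(capex, opex_savings_draws, retained_share,
                 years=15, discount=0.06):
    """Expected NPV for the network operator: upfront capex against
    the regulated share of uncertain annual opex savings."""
    annuity = (1 - (1 + discount) ** -years) / discount
    return np.mean(retained_share * opex_savings_draws * annuity) - capex

capex = 100.0  # hypothetical smart-grid investment (monetary units)
# Uncertain annual opex savings: the wide spread reflects the
# performance uncertainty that may deter adoption
savings = rng.triangular(2.0, 10.0, 18.0, size=10_000)

# Cost-plus: savings passed through to consumers, firm retains little.
# Incentive-based: the firm keeps a larger share of the savings.
for scheme, share in [("cost-plus", 0.1), ("incentive", 0.6)]:
    print(scheme, round(expected_npv(capex, savings, share), 1))
```

Under these assumptions the sign of the expected NPV flips with the retained share, which is the mechanism by which the regulatory scheme shapes the incentive to invest.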
Abstract:
Thesis (Master, Environmental Studies) -- Queen's University, 2016.
Abstract:
Over the past few decades, pandemics and mass casualty events have had major impacts on health resource use, in the form of rising health costs and increased mortality.
Abstract:
The conservation and valorisation of cultural heritage is of fundamental importance for our society, since it bears witness to the legacies of human societies. In the case of metallic artefacts, because corrosion is a never-ending problem, the correct strategies for their cleaning and preservation must be chosen. Thus, the aim of this project was the development of protocols for cleaning archaeological copper artefacts by laser and plasma cleaning, since these techniques allow the treatment of artefacts in a controlled and selective manner. Additionally, electrochemical characterisation of the artificial patinas was performed in order to obtain information on the protective properties of the corrosion layers. Reference copper samples with different artificial corrosion layers were used to evaluate the tested parameters. Laser cleaning tests resulted in partial removal of the corrosion products, but the laser-material interactions caused melting of the corrosion layers that should be preserved. The main obstacle for this process is that the materials that must be preserved show lower ablation thresholds than the undesired layers, which makes the proper elimination of dangerous corrosion products very difficult without damaging the artefacts. Different protocols should be developed for different patinas, and real artefacts should be characterised prior to any treatment to determine the best course of action. Low-pressure hydrogen plasma cleaning treatments were performed on two kinds of patinas. In both cases the corrosion layers were partially removed. The total removal of the undesired corrosion products can probably be achieved by increasing the treatment time, the applied power, or the hydrogen pressure. Since the process is non-invasive and does not modify the bulk material, the cleaning parameters are easy to adjust. EIS measurements show that, for the artificial patinas, the impedance increases while the patina is growing on the surface and then drops, probably due to diffusion reactions and a slow dissolution of copper. These results suggest that the dissolution of copper is heavily influenced by diffusion phenomena and by the porosity of the corrosion product film. Both techniques show good results for cleaning, as long as the proper parameters are used; these depend on the nature of the artefact and on the corrosion layers found on its surface.
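The abstract does not state which equivalent-circuit model was used to interpret the EIS data; the sketch below is a minimal illustration of the kind of model commonly fitted in such studies, assuming a Randles-type circuit with a semi-infinite Warburg element to capture the diffusion behaviour the abstract mentions. All parameter values are hypothetical.

```python
import numpy as np

def randles_impedance(freq, r_s, r_ct, c_dl, sigma_w):
    """Impedance of a Randles cell: solution resistance in series with
    the parallel combination of a double-layer capacitance and a
    charge-transfer resistance plus a Warburg (diffusion) element."""
    omega = 2 * np.pi * freq
    z_warburg = sigma_w * (1 - 1j) / np.sqrt(omega)  # semi-infinite diffusion
    z_faradaic = r_ct + z_warburg
    z_cdl = 1 / (1j * omega * c_dl)
    return r_s + (z_faradaic * z_cdl) / (z_faradaic + z_cdl)

freqs = np.logspace(-2, 5, 50)  # 10 mHz to 100 kHz
# Hypothetical values: a growing, compact patina raises R_ct; a porous
# film lowers it and lets the Warburg term dominate at low frequency.
z = randles_impedance(freqs, r_s=50, r_ct=5e3, c_dl=20e-6, sigma_w=300)
print(abs(z[0]), abs(z[-1]))  # |Z| at the frequency extremes
```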
Abstract:
The objective of this study was to evaluate the effect of the exogenous FSH dose in gonadotrophic treatment on the ovulatory follicle dynamics of ewes.
Abstract:
Considering the influence of herbicides on the metabolism of carotenoids in corn, the objective of the present study was to evaluate the effect of herbicides and genotype on carotenoid concentration. The green corn hybrids BRS 1030 and P30F53 were subjected to post-emergent herbicide applications at 20 and 30 days after emergence. Carotenoids were extracted from corn grains and analyzed to quantify α- and β-carotene, lutein, zeaxanthin, β-cryptoxanthin, total carotenoids (TC), and total provitamin A carotenoid precursors (proVA). The application of foramsulfuron + iodosulfuron-methyl-sodium (40 + 2.6 g ha⁻¹), nicosulfuron (20 g ha⁻¹), mesotrione (120 g ha⁻¹), and tembotrione (80 g ha⁻¹ and 100 g ha⁻¹) promoted higher concentrations of carotenoids in fresh green corn. Lutein, zeaxanthin, β-cryptoxanthin, α-carotene, β-carotene, proVA carotenoid, and TC concentrations increased after foramsulfuron + iodosulfuron-methyl-sodium in late application (V5 to V6), nicosulfuron in both applications, mesotrione applied early post-emergence (V3 to V4), tembotrione (100 g ha⁻¹) in both applications, and tembotrione (80 g ha⁻¹) in late post-application, at least for one hybrid. The content of carotenoids in the green corn kernels differed between 'BRS 1030' and 'P30F53'. Our results suggest the possibility of significantly increasing carotenoids in green corn kernels through the management of corn production with post-emergent herbicides.
Abstract:
The objective of this thesis is the analysis and study of the various access techniques for vehicular communications, in particular the C-V2X and WAVE protocols. The simulator used to study the performance of the two protocols, LTEV2Vsim, was developed by CNR-IEIIT for the study of V2V (Vehicle-to-Vehicle) communications. The changes I made allowed me to study the I2V (Infrastructure-to-Vehicle) scenario in highway areas and, with the results obtained, to compare the two protocols under high and low vehicular density, relating the PRR (packet reception ratio) to the cell size (RAW, awareness range). The final comparison provides a full picture of the achievable performance of the two protocols and highlights the need for a protocol that can reach the minimum necessary requirements.
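The abstract does not spell out how the PRR is computed against the awareness range; the snippet below is a minimal sketch under common assumptions (all names and values are hypothetical): the PRR at range RAW is the fraction of vehicles within that distance of the transmitter that correctly receive the packet.

```python
import numpy as np

def packet_reception_ratio(tx_pos, rx_pos, received_ok, raw):
    """PRR within the awareness range: among receivers closer than
    `raw` metres to the transmitter, the fraction that decoded the packet."""
    dist = np.linalg.norm(rx_pos - tx_pos, axis=1)
    in_range = dist <= raw
    if not in_range.any():
        return np.nan
    return received_ok[in_range].mean()

rng = np.random.default_rng(0)
tx = np.array([0.0, 0.0])
rx = rng.uniform(-1000, 1000, size=(200, 2))  # vehicles around the RSU
# Hypothetical decoding outcomes: success probability decays with distance
p_ok = np.exp(-np.linalg.norm(rx - tx, axis=1) / 600)
ok = rng.random(200) < p_ok
for raw in (100, 300, 500):
    print(raw, round(packet_reception_ratio(tx, rx, ok, raw), 2))
```

As the sketch shows, PRR typically falls as the awareness range grows, which is why the two quantities are reported together when comparing protocols.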
Abstract:
We investigated how participants associated with each other and developed community in a Massive Open Online Course (MOOC) about Rhizomatic Learning (Rhizo14). We compared learner experiences in two social networking sites (SNSs), Facebook and Twitter. Our combination of thematic analysis of qualitative survey data with analysis of participant observation, activity data, archives and visualisation of SNS data enabled us to reach a deeper understanding of participant perspectives and explore SNS use. Community was present in the course title and was understood differently by participants. In the absence of explanation or discussion about community early in the MOOC, a controversy about course expectations emerged between participants, creating oppositional discourse. Fall-off in activity in MOOCs is common and was evident in Rhizo14. As the course progressed, fewer participants were active in Facebook and some participants reported feelings of exclusion. Despite this, activity in Facebook increased overall: the top 10 most active participants were responsible for 47% of total activity. In the Rhizo14 MOOC, both community and curriculum were expected to emerge within the course. We suggest that there are tensions and even contradictions between 'Community Is the Curriculum' and Deleuze and Guattari's principles of the rhizome, centred mainly on an absence of heterogeneity. These tensions may be exacerbated by SNSs that use algorithmic streams. We propose the use of networking approaches that enable negotiation and exchange, encouraging heterogeneity rather than an emergent definition of community.
Abstract:
In the central nervous system, iron bound in several proteins is involved in many important processes: oxygen transport, oxidative phosphorylation, mitochondrial respiration, myelin production, and the synthesis and metabolism of neurotransmitters. Abnormal iron homoeostasis can induce cellular damage through hydroxyl radical production, which can cause the oxidation and modification of lipids, proteins, carbohydrates, and DNA, leading to neurotoxicity. Moreover, increased levels of iron are harmful, and iron accumulation is a typical hallmark of brain ageing and of several neurodegenerative disorders, particularly Parkinson's disease (PD). Numerous studies on post mortem tissue report an increased amount of total iron in the substantia nigra (SN) of patients with PD, supported by a large body of in vivo findings from Magnetic Resonance Imaging (MRI) studies. Interest in approaches for in vivo brain iron assessment using multiparametric MRI has grown in recent years. Quantitative MRI may provide useful biomarkers for brain integrity assessment in iron-related neurodegeneration. In particular, a prominent change in iron-sensitive T2* MRI contrast within the subareas of the SN overlapping with nigrosome 1 was shown to be a hallmark of PD with high diagnostic accuracy. Moreover, the differential diagnosis between PD and atypical parkinsonian syndromes (APS) remains challenging, mainly in the early phases of the disease. Advanced brain MR imaging enables detection of the pathological changes of nigral and extranigral structures at the onset of clinical manifestations and during the course of the disease. Nigrosome 1 (N1) is a substructure of the healthy substantia nigra pars compacta enriched in dopaminergic neurons; their loss in PD and APS is related to iron accumulation. N1 changes are supportive MR biomarkers for the diagnosis of these neurodegenerative disorders, but N1 is hard to detect with conventional sequences, even using high-field (3T) scanners. Quantitative susceptibility mapping (QSM), an iron-sensitive technique, enables the direct detection of neurodegeneration.
Abstract:
Cleaning is one of the most important and delicate procedures in the restoration process. When developing new cleaning systems, it is fundamental to consider their selectivity towards the layer to be removed, their non-invasiveness towards the one to be preserved, and their sustainability and non-toxicity. Besides assessing efficacy, it is important to understand the cleaning mechanism through analytical protocols that strike a balance between cost, practicality, and reliable interpretation of results. In this thesis, the development of cleaning systems based on the coupling of electrospun fabrics (ES) and greener organic solvents is proposed. Electrospinning is a versatile technique that allows the production of micro/nanostructured non-woven mats, which have already been used as absorbents in various scientific fields, but not, to date, in the restoration field. The systems produced proved to be effective for the removal of dammar varnish from paintings, where the ES act not only as solvent-binding agents but also as adsorbents towards the partially solubilised varnish due to capillary rise, thus enabling a one-step procedure. They have also been successfully applied to the removal of spray varnish from marble substrates and wall paintings. Due to the complexity of the materials, the procedure had to be adapted case by case, and mechanical action was still necessary. According to the spinning solution, three types of ES mats were produced: polyamide 6,6, pullulan, and pullulan with melanin nanoparticles. The latter, under irradiation, allows for a localised temperature increase, accelerating and facilitating the removal of less soluble layers (e.g. reticulated alkyd-based paints). All the systems produced, and the mock-ups used, were extensively characterised using multi-analytical protocols. Finally, a monitoring protocol and image treatment based on photoluminescence macro-imaging is proposed. This set-up allowed the study of the removal mechanism of dammar varnish and the semi-quantification of its residues. These initial results form the basis for optimising the acquisition set-up and data processing.
Abstract:
In recent years, we have witnessed the growth of the Internet of Things (IoT) paradigm, with its increased pervasiveness in our everyday lives. The possible applications are diverse: from a smartwatch able to measure the heartbeat and communicate it to the cloud, to a device that triggers an event when we approach an exhibit in a museum. Present in many of these applications is the Proximity Detection task: for instance, the heartbeat could be measured only when the wearer is near a well-defined location for medical purposes, or the tourist attraction must be triggered only if someone is very close to it. Indeed, the ability of an IoT device to sense the presence of other devices nearby and calculate the distance to them can be considered the cornerstone of various applications, motivating research on this fundamental topic. The energy constraints of IoT devices are often at odds with the need for continuous operation to sense the environment and to achieve highly accurate distance measurements to neighboring devices, making the design of Proximity Detection protocols a challenging task.
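The abstract does not say which ranging technique the thesis adopts; a common low-energy approach is RSSI-based ranging with a log-distance path-loss model, sketched below. The calibration constants and the proximity threshold are hypothetical and environment-dependent.

```python
def estimate_distance(rssi_dbm, rssi_at_1m=-45.0, path_loss_exp=2.5):
    """Log-distance path-loss model: invert a received signal strength
    reading (dBm) into a distance estimate (metres). rssi_at_1m and
    path_loss_exp are calibration values, hypothetical here."""
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10 * path_loss_exp))

def is_in_proximity(rssi_dbm, threshold_m=2.0):
    """Proximity test: declare 'near' if the estimated distance is
    below the application-defined threshold."""
    return estimate_distance(rssi_dbm) <= threshold_m

for rssi in (-50, -60, -75):
    print(rssi, round(estimate_distance(rssi), 2), is_in_proximity(rssi))
```

RSSI ranging is cheap enough to run continuously on constrained devices, which is exactly the accuracy-versus-energy trade-off the abstract describes.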
Abstract:
In next-generation Internet of Things, the overhead introduced by grant-based multiple access protocols may engulf the access network as a consequence of the proliferation of connected devices. Grant-free access protocols are therefore gaining increasing interest as a means to support massive multiple access. In addition to scalability requirements, new demands have emerged for massive multiple access, including latency and reliability. The challenges envisaged for future wireless communication networks, particularly in the context of massive access, include: i) a very large population of low-power devices transmitting short packets; ii) an ever-increasing scalability requirement; iii) a mild fixed maximum latency requirement; iv) a non-trivial requirement on reliability. To this aim, we suggest the joint utilization of grant-free access protocols, massive MIMO at the base station side, framed schemes to let the contention start and end within a frame, and successive interference cancellation techniques at the base station side. In essence, this approach is encapsulated in the concept of coded random access with massive MIMO processing. These schemes can be explored from various angles, spanning the protocol stack from the physical (PHY) to the medium access control (MAC) layer. In this thesis, we delve into both of these layers, examining topics ranging from symbol-level signal processing to successive interference cancellation-based scheduling strategies. In parallel with proposing new schemes, our work includes a theoretical analysis aimed at providing valuable system design guidelines. As a main theoretical outcome, we propose a novel joint PHY and MAC layer design based on density evolution on sparse graphs.
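The thesis's own joint PHY/MAC design is not reproduced in the abstract; the sketch below illustrates the classical density-evolution analysis for IRSA-type coded random access on the collision channel (no MIMO or capture effects), with a hypothetical degree distribution. It shows how successive interference cancellation is modelled as iterative peeling on a sparse graph.

```python
import numpy as np

def irsa_plr(load, lam_coeffs, iters=200):
    """Asymptotic density evolution for IRSA-style coded random access:
    a user repeats its packet l times with probability Lambda_l, and the
    receiver resolves users by successive interference cancellation
    (peeling singleton slots). Returns the asymptotic packet loss rate
    at channel load `load` (packets per slot)."""
    degrees = np.arange(len(lam_coeffs))
    avg_deg = np.sum(degrees * lam_coeffs)              # Lambda'(1)
    lam_edge = degrees * lam_coeffs / avg_deg           # edge-perspective dist.
    p = 1.0                                             # P(edge unresolved)
    for _ in range(iters):
        # user side: a replica stays hidden if all other replicas are unresolved
        q = np.sum(lam_edge[1:] * p ** (degrees[1:] - 1))
        # slot side: Poisson slot degrees with mean load * Lambda'(1)
        p = 1.0 - np.exp(-load * avg_deg * q)
    # a user is lost if none of its replicas is ever resolved
    return np.sum(lam_coeffs * p ** degrees)

# Hypothetical degree distribution Lambda(x) = 0.5x^2 + 0.28x^3 + 0.22x^8
lam = np.zeros(9)
lam[2], lam[3], lam[8] = 0.5, 0.28, 0.22
for g in (0.5, 0.8, 0.9):
    print(g, f"{irsa_plr(g, lam):.3e}")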
Abstract:
Radiation dose in x-ray computed tomography (CT) has become a topic of great interest due to the increasing number of CT examinations performed worldwide. In fact, CT scans are responsible for significant doses delivered to patients, much larger than the doses from the most common radiographic procedures. This thesis work, carried out at the Laboratory of Medical Technology (LTM) of the Rizzoli Orthopaedic Institute (IOR, Bologna), focuses on two primary objectives: the dosimetric characterization of the tomograph at the IOR and the optimization of the clinical protocol for hip arthroplasty. In particular, after having verified the reliability of the dose estimates provided by the system, we compared the estimated doses delivered to 10 patients undergoing CT examination for the pre-operative planning of hip replacement with the Diagnostic Reference Level (DRL) for an osseous pelvis examination. Of the 10 patients considered, only 3 received doses lower than the DRL, which made the need to optimize the clinical protocol clear. This optimization was investigated using a human cadaver femur. Quantitative analyses and comparisons of 3D reconstructions were performed after manual segmentation of the femur from different CT acquisitions. Dosimetric simulations of the CT acquisitions on the femur were also performed and associated with the accuracy of the 3D reconstructions, in order to identify the optimal combination of CT acquisition parameters. The study showed that protocol optimization, both in terms of Hausdorff distance and in terms of effective dose (ED) to the patient, may be achieved simply by modifying the pitch value in the protocol, choosing between 0.98 and 1.37.
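The Hausdorff distance used to score the 3D reconstructions can be computed directly with SciPy; the sketch below is a minimal illustration, assuming the reconstructed surfaces are available as (N, 3) arrays of mesh vertices. The data here are synthetic stand-ins, and the millimetre unit is an assumption.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(surface_a, surface_b):
    """Symmetric Hausdorff distance between two surfaces given as
    (N, 3) vertex arrays: the worst-case deviation between the two
    3D reconstructions."""
    d_ab = directed_hausdorff(surface_a, surface_b)[0]
    d_ba = directed_hausdorff(surface_b, surface_a)[0]
    return max(d_ab, d_ba)

rng = np.random.default_rng(1)
reference = rng.uniform(0, 50, size=(1000, 3))  # stand-in vertex cloud
# A second reconstruction: the reference plus small segmentation noise,
# mimicking the effect of a lower-dose acquisition on segmentation
test = reference + rng.normal(0, 0.3, size=reference.shape)
print(f"Hausdorff distance: {hausdorff(reference, test):.2f} mm")
```

A small Hausdorff distance at a lower effective dose is what identifies a pitch setting as an acceptable optimization of the protocol.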
Abstract:
Protocols for the generation of dendritic cells (DCs) that use serum to supplement culture media can lead to reactions caused by animal proteins and to disease transmission. Several types of serum-free media (SFM), compliant with good manufacturing practice (GMP), have recently been used and seem to be a viable option. The aim of this study was to evaluate the differentiation, maturation, and function of DCs from patients with acute myeloid leukemia (AML), generated in SFM and in medium supplemented with autologous serum (AS). DCs were analyzed for phenotype, viability, and functionality. The results showed that viable DCs could be generated under all the conditions tested. In patients, the X-VIVO 15 medium was more efficient than the other media tested at generating DCs that produce IL-12p70 (p=0.05). Moreover, the presence of AS led to a significant increase in IL-10 production by DCs compared with the CellGro (p=0.05) and X-VIVO 15 (p=0.05) media, in both patients and donors. We conclude that SFM was efficient for the production of DCs for immunotherapy in AML patients; however, the use of AS appears to interfere with the functional capacity of the generated DCs.