960 results for Multiple Sources


Relevance: 60.00%

Abstract:

Integrating information from multiple sources is a crucial function of the brain. Examples of such integration include combining stimuli of different modalities, such as visual and auditory; combining multiple stimuli of the same modality, such as two concurrent sounds; and integrating stimuli arriving through the sensory organs (i.e., the ears) with stimuli delivered through brain-machine interfaces.

The overall aim of this body of work is to empirically examine stimulus integration in these three domains to inform our broader understanding of how and when the brain combines information from multiple sources.

First, I examine the visual guidance of sound localization learning, a problem with implications for the general question of how the brain determines what lesson to learn (and what lessons not to learn). For example, sound localization is a behavior that is partially learned with the aid of vision. This process requires correctly matching a visual location to that of a sound, an intrinsically circular problem when sound location is itself uncertain and the visual scene is rife with possible visual matches. Here, we develop a simple paradigm using visual guidance of sound localization to gain insight into how the brain confronts this type of circularity. We tested two competing hypotheses: (1) the brain guides sound-location learning based on the synchrony or simultaneity of auditory-visual stimuli, potentially involving a Hebbian associative mechanism; or (2) the brain uses a ‘guess and check’ heuristic in which visual feedback obtained after an eye movement to a sound alters future performance, perhaps by recruiting the brain’s reward-related circuitry. We assessed how exposure to visual stimuli spatially mismatched from sounds affected performance of an interleaved auditory-only saccade task. We found that when humans and monkeys were provided the visual stimulus asynchronously with the sound, but as feedback to an auditory-guided saccade, they shifted their subsequent auditory-only performance toward the direction of the visual cue by 1.3-1.7 degrees, or 22-28% of the original 6-degree visual-auditory mismatch. In contrast, when the visual stimulus was presented synchronously with the sound but extinguished too quickly to provide this feedback, there was little change in subsequent auditory-only performance. Our results suggest that the outcome of our own actions is vital to localizing sounds correctly. Contrary to previous expectations, visual calibration of auditory space does not appear to require visual-auditory associations based on synchrony or simultaneity.
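
As a quick arithmetic check of the effect size reported above (the only inputs are the 1.3-1.7 degree shifts and the 6-degree mismatch quoted in this abstract):

```python
# Shift in auditory-only saccades expressed as a fraction of the imposed
# 6-degree visual-auditory mismatch (figures quoted in the abstract).
mismatch_deg = 6.0
for shift_deg in (1.3, 1.7):
    print(f"{shift_deg:.1f} deg shift = {100 * shift_deg / mismatch_deg:.0f}% of the mismatch")
# Prints 22% and 28%, matching the 22-28% adaptation reported.
```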

My next line of research examines how electrical stimulation of the inferior colliculus influences the perception of sounds in a nonhuman primate. The central nucleus of the inferior colliculus is the major ascending relay of auditory information: nearly all auditory signals pass through it before reaching the forebrain. It is therefore an ideal structure for examining the format of the inputs to the forebrain and, by extension, the processing of auditory scenes that occurs in the brainstem, making it an attractive target for understanding stimulus integration in the ascending auditory pathway.

Moreover, understanding the relationship between the auditory selectivity of neurons and their contribution to perception is critical to the design of effective auditory brain prosthetics. These prosthetics seek to mimic natural activity patterns to achieve desired perceptual outcomes. We measured the contribution of inferior colliculus (IC) sites to perception using combined recording and electrical stimulation. Monkeys performed a frequency-based discrimination task, reporting whether a probe sound was higher or lower in frequency than a reference sound. Stimulation pulses were paired with the probe sound on 50% of trials (0.5-80 µA, 100-300 Hz, n=172 IC locations in 3 rhesus monkeys). Electrical stimulation tended to bias the animals’ judgments in a fashion that was coarsely but significantly correlated with the best frequency of the stimulation site in comparison to the reference frequency employed in the task. Although there was considerable variability in the effects of stimulation (including impairments in performance and shifts in performance away from the direction predicted based on the site’s response properties), the results indicate that stimulation of the IC can evoke percepts correlated with the frequency tuning properties of the IC. Consistent with the implications of recent human studies, the main avenue for improvement for the auditory midbrain implant suggested by our findings is to increase the number and spatial extent of electrodes, to increase the size of the region that can be electrically activated and provide a greater range of evoked percepts.
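
One conventional way to quantify this kind of stimulation-induced bias, offered here only as an illustrative sketch rather than the study's actual analysis, is to fit logistic psychometric curves to the proportion of 'higher' reports with and without stimulation and compare their midpoints; the trial data below are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(freq_octaves, midpoint, slope):
    """Logistic probability of reporting 'probe higher than reference'."""
    return 1.0 / (1.0 + np.exp(-(freq_octaves - midpoint) / slope))

# Probe frequency relative to the reference (octaves) and fraction of
# 'higher' reports; values are illustrative, not data from the study.
probe = np.array([-0.5, -0.25, -0.1, 0.0, 0.1, 0.25, 0.5])
p_no_stim = np.array([0.05, 0.15, 0.35, 0.50, 0.65, 0.85, 0.95])
p_stim = np.array([0.10, 0.30, 0.55, 0.70, 0.80, 0.95, 0.98])

bounds = ([-1.0, 0.01], [1.0, 1.0])
(mid0, _), _ = curve_fit(psychometric, probe, p_no_stim, p0=[0.0, 0.1], bounds=bounds)
(mid1, _), _ = curve_fit(psychometric, probe, p_stim, p0=[0.0, 0.1], bounds=bounds)

# A negative midpoint shift means stimulation biased judgments toward 'higher',
# as expected if the site's best frequency exceeds the reference frequency.
print(f"midpoint shift: {mid1 - mid0:+.3f} octaves")
```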

My next line of research employs a frequency-tagging approach to examine the extent to which multiple sound sources are combined (or segregated) in the nonhuman primate inferior colliculus. In the single-sound case, most inferior colliculus neurons respond and entrain to sounds across a very broad region of space, and many are entirely spatially insensitive, so it is unknown how these neurons respond when more than one sound is present. I use multiple amplitude-modulated (AM) stimuli with different modulation frequencies, which the inferior colliculus represents using a spike timing code. This allows me to use spike timing to determine which sound source is responsible for neural activity in an auditory scene containing multiple sounds. Using this approach, I find that the same neurons that are tuned to broad regions of space in the single-sound condition become dramatically more selective in the dual-sound condition, preferentially entraining their spikes to stimuli from a smaller region of space. I will examine the possibility of a conceptual linkage between this finding and receptive field shifts reported in the visual system.
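
A standard way to ask which tagged source is driving a neuron's spikes, sketched here in generic form rather than as the exact analysis pipeline of this chapter, is to compute the vector strength of the spike times at each sound's modulation frequency; the spike train and tag frequencies below are invented:

```python
import numpy as np

def vector_strength(spike_times_s, mod_freq_hz):
    """Phase locking of spikes to one amplitude-modulation frequency (0 to 1)."""
    phases = 2.0 * np.pi * mod_freq_hz * np.asarray(spike_times_s)
    return np.abs(np.mean(np.exp(1j * phases)))

# Illustrative spike train: one spike near each cycle of a 40 Hz envelope.
rng = np.random.default_rng(0)
cycles = np.arange(0.0, 1.0, 1.0 / 40.0)                    # one second of 40 Hz cycles
spikes = cycles + rng.normal(0.0, 0.002, cycles.size)       # 2 ms jitter around each cycle

for f in (40.0, 56.0):   # the two "tag" modulation frequencies of competing sounds
    print(f"{f:.0f} Hz tag: vector strength = {vector_strength(spikes, f):.2f}")
# Strong entrainment at 40 Hz and weak entrainment at 56 Hz would indicate that
# this neuron's activity is dominated by the 40 Hz-tagged source.
```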

In chapter 5, I will comment on these findings more generally, compare them to existing theoretical models, and discuss what these results tell us about processing in the central nervous system in a multi-stimulus situation. My results suggest that the brain is flexible in its processing and can adapt its integration schema to fit the available cues and the demands of the task.

Relevance: 60.00%

Abstract:

Subspaces and manifolds are two powerful models for high dimensional signals. Subspaces model linear correlation and are a good fit to signals generated by physical systems, such as frontal images of human faces and multiple sources impinging at an antenna array. Manifolds model sources that are not linearly correlated, but where signals are determined by a small number of parameters. Examples are images of human faces under different poses or expressions, and handwritten digits with varying styles. However, there will always be some degree of model mismatch between the subspace or manifold model and the true statistics of the source. This dissertation exploits subspace and manifold models as prior information in various signal processing and machine learning tasks.

A near-low-rank Gaussian mixture model measures proximity to a union of linear or affine subspaces. This simple model can effectively capture the signal distribution when each class is near a subspace. This dissertation studies how the pairwise geometry between these subspaces affects classification performance. When model mismatch is vanishingly small, the probability of misclassification is determined by the product of the sines of the principal angles between subspaces. When the model mismatch is more significant, the probability of misclassification is determined by the sum of the squares of the sines of the principal angles. Reliability of classification is derived in terms of the distribution of signal energy across principal vectors. Larger principal angles lead to smaller classification error, motivating a linear transform that optimizes principal angles. This linear transformation, termed TRAIT, also preserves some specific features in each class, being complementary to a recently developed Low Rank Transform (LRT). Moreover, when the model mismatch is more significant, TRAIT shows superior performance compared to LRT.
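
The geometric quantities above can be computed directly from orthonormal bases of two subspaces; a minimal sketch (with random illustrative bases, not the dissertation's datasets) of the principal angles and the two error proxies just described:

```python
import numpy as np

def principal_angles(A, B):
    """Principal angles (radians) between the column spans of A and B."""
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    cosines = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return np.arccos(np.clip(cosines, -1.0, 1.0))

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 3))   # spanning set for class-1 subspace (illustrative)
B = rng.standard_normal((50, 3))   # spanning set for class-2 subspace (illustrative)

theta = principal_angles(A, B)
# Vanishing model mismatch: error governed by the product of sines of the angles.
product_of_sines = np.prod(np.sin(theta))
# Larger model mismatch: error governed by the sum of squared sines of the angles.
sum_of_squared_sines = np.sum(np.sin(theta) ** 2)
print(theta, product_of_sines, sum_of_squared_sines)
```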

The manifold model enforces a constraint on the freedom of data variation. Learning features that are robust to data variation is very important, especially when the size of the training set is small. A learning machine with a large number of parameters, e.g., a deep neural network, can describe a very complicated data distribution well. However, it is also more likely to be sensitive to small perturbations of the data and to suffer from degraded performance when generalizing to unseen (test) data.

From the perspective of the complexity of function classes, such a learning machine has a huge capacity (complexity), which tends to overfit. The manifold model provides us with a way of regularizing the learning machine so as to reduce the generalization error and therefore mitigate overfitting. Two different overfitting-prevention approaches are proposed, one from the perspective of data variation, the other from capacity/complexity control. In the first approach, the learning machine is encouraged to make decisions that vary smoothly for data points in local neighborhoods on the manifold. In the second approach, a graph adjacency matrix is derived for the manifold, and the learned features are encouraged to be aligned with the principal components of this adjacency matrix. Experimental results on benchmark datasets are demonstrated, showing an obvious advantage of the proposed approaches when the training set is small.
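
The first approach above can be written as a graph-Laplacian smoothness penalty on the learned features; the sketch below is a generic illustration built on a hypothetical k-nearest-neighbor graph, not the exact regularizer proposed in the dissertation:

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def laplacian_smoothness(X, F, k=5):
    """Penalty sum_ij W_ij ||f_i - f_j||^2 that encourages the features F to
    vary smoothly over a k-NN graph built on the data X (a manifold proxy)."""
    W = kneighbors_graph(X, n_neighbors=k, mode="connectivity").toarray()
    W = np.maximum(W, W.T)                     # symmetrize the adjacency matrix
    L = np.diag(W.sum(axis=1)) - W             # unnormalized graph Laplacian
    return np.trace(F.T @ L @ F)

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))   # input data points (illustrative)
F = rng.standard_normal((100, 4))    # learned feature vectors per point (illustrative)
print(laplacian_smoothness(X, F))    # would be added to the training loss as a regularizer
```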

Stochastic optimization makes it possible to track a slowly varying subspace underlying streaming data. By approximating local neighborhoods using affine subspaces, a slowly varying manifold can be efficiently tracked as well, even with corrupted and noisy data. The more local neighborhoods are used, the better the approximation, but the higher the computational complexity. A multiscale approximation scheme is proposed, in which the local approximating subspaces are organized in a tree structure. Splitting and merging of the tree nodes then allows efficient control of the number of neighborhoods. The deviation of each datum from the learned model is estimated, yielding a series of statistics for anomaly detection. This framework extends the classical changepoint detection technique, which only works for one-dimensional signals. Simulations and experiments highlight the robustness and efficacy of the proposed approach in detecting an abrupt change in an otherwise slowly varying low-dimensional manifold.
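
A stripped-down version of this idea, with an off-the-shelf incremental PCA standing in for the multiscale tree of affine subspaces, tracks a slowly varying subspace on streaming batches and flags batches whose reconstruction residual jumps:

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA

rng = np.random.default_rng(0)
d, k, batch = 50, 3, 20
basis = np.linalg.qr(rng.standard_normal((d, k)))[0]   # current low-dimensional subspace

ipca = IncrementalPCA(n_components=k)
residual_history = []
for t in range(200):
    # Slow drift of the subspace, plus an abrupt change at t = 150.
    drift = np.linalg.qr(basis + 0.01 * rng.standard_normal((d, k)))[0]
    basis = np.linalg.qr(rng.standard_normal((d, k)))[0] if t == 150 else drift
    X = rng.standard_normal((batch, k)) @ basis.T + 0.01 * rng.standard_normal((batch, d))
    if t > 0:
        # Deviation of the new batch from the currently tracked subspace.
        residual = np.linalg.norm(X - ipca.inverse_transform(ipca.transform(X)))
        if len(residual_history) > 20 and residual > np.mean(residual_history) + 6 * np.std(residual_history):
            print(f"possible changepoint at batch {t}, residual {residual:.2f}")
        residual_history.append(residual)
    ipca.partial_fit(X)
```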

Relevance: 60.00%

Abstract:

The amount and quality of available biomass is a key factor for the sustainable livestock industry and for agricultural management decision making. Globally, 31.5% of land cover is grassland, while 80% of Ireland’s agricultural land is grassland. In Ireland, grasslands are intensively managed and provide the cheapest feed source for animals. This dissertation presents a detailed state-of-the-art review of satellite remote sensing of grasslands, and the potential application of optical (Moderate-resolution Imaging Spectroradiometer (MODIS)) and radar (TerraSAR-X) time series imagery to estimate grassland biomass at two study sites (Moorepark and Grange) in the Republic of Ireland using both statistical and state-of-the-art machine learning algorithms. High quality weather data available from the on-site weather station were also used to calculate Growing Degree Days (GDD) for Grange to determine the impact of ancillary data on biomass estimation. In situ and satellite data covering 12 years for the Moorepark and 6 years for the Grange study sites were used to predict grassland biomass using multiple linear regression and Adaptive Neuro-Fuzzy Inference System (ANFIS) models. The results demonstrate that a dense (8-day composite) MODIS image time series, along with high quality in situ data, can be used to retrieve grassland biomass with high performance (R2 = 0.86, p < 0.05, RMSE = 11.07 for Moorepark). The model for Grange was modified to evaluate the synergistic use of vegetation indices derived from remote sensing time series and accumulated GDD information. As GDD is strongly linked to plant development, or phenological stage, an improvement in biomass estimation would be expected. It was observed that using the ANFIS model the biomass estimation accuracy increased from R2 = 0.76 (p < 0.05) to R2 = 0.81 (p < 0.05) and the root mean square error was reduced by 2.72%. The work on the application of optical remote sensing was further developed using a TerraSAR-X Staring Spotlight mode time series over the Moorepark study site to explore the extent to which very high resolution Synthetic Aperture Radar (SAR) data of interferometrically coherent paddocks can be exploited to retrieve grassland biophysical parameters. After filtering out the non-coherent plots, it is demonstrated that interferometric coherence can be used to retrieve grassland biophysical parameters (i.e., height, biomass), and that it is possible to detect changes due to grass growth and to grazing and mowing events when the temporal baseline is short (11 days). However, it is not possible to automatically and uniquely identify the cause of these changes based only on the SAR backscatter and coherence, due to the ambiguity caused by tall grass laid down by the wind. Overall, the work presented in this dissertation has demonstrated the potential of dense remote sensing and weather data time series to predict grassland biomass using machine-learning algorithms, where high quality ground data were used for training. At present, a major limitation for national scale biomass retrieval is the lack of spatial and temporal ground samples, which can be partially resolved by minor modifications to the existing PastureBaseIreland database, namely adding the location and extent of each grassland paddock to the database.
As far as remote sensing data requirements are concerned, MODIS is useful for large scale evaluation, but due to its coarse resolution it is not possible to detect variations within and between fields at the farm scale. However, this issue will be resolved in terms of spatial resolution by the Sentinel-2 mission, and when both satellites (Sentinel-2A and Sentinel-2B) are operational the revisit time will reduce to 5 days, which, together with Landsat-8, should provide sufficient cloud-free data for operational biomass estimation at a national scale. The Synthetic Aperture Radar Interferometry (InSAR) approach is feasible if enough coherent interferometric pairs are available; however, this is difficult to achieve due to temporal decorrelation of the signal. For repeat-pass InSAR over a vegetated area, even an 11-day temporal baseline is too long. In order to achieve better coherence, a very high resolution is required at the cost of spatial coverage, which limits its scope for use in an operational context at a national scale. Future InSAR missions with pair acquisition in tandem mode will minimize the temporal decorrelation over vegetated areas for more focused studies. The proposed approach complements the current paradigm of Big Data in Earth Observation and illustrates the feasibility of integrating data from multiple sources. In the future, this framework can be used to build an operational decision support system for the retrieval of grassland biophysical parameters based on data from planned long-term optical missions (e.g., Landsat, Sentinel) that will ensure the continuity of data acquisition. Similarly, the Spanish X-band PAZ and TerraSAR-X2 missions will ensure the continuity of TerraSAR-X and COSMO-SkyMed.
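
A minimal sketch of the simpler (multiple linear regression) variant of the models described above, using hypothetical 8-day NDVI composites and accumulated GDD as predictors; the values are synthetic, not the Moorepark or Grange data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(0)
n = 120                                    # hypothetical 8-day composite dates
ndvi = rng.uniform(0.4, 0.9, n)            # MODIS-derived vegetation index
gdd = rng.uniform(100.0, 900.0, n)         # accumulated growing degree days
biomass = 2500 * ndvi + 0.8 * gdd + rng.normal(0, 150, n)   # synthetic ground biomass

X = np.column_stack([ndvi, gdd])
model = LinearRegression().fit(X, biomass)
pred = model.predict(X)
print(f"R^2 = {r2_score(biomass, pred):.2f}, "
      f"RMSE = {mean_squared_error(biomass, pred) ** 0.5:.1f}")
```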

Relevance: 60.00%

Abstract:

The magnetic microparticle and nanoparticle inventories of marine sediments from equatorial Atlantic sites were investigated by scanning and transmission electron microscopy to classify all present detrital and authigenic magnetic mineral species and to investigate their regional distribution, origin, transport, and preservation. This information is used to establish source-to-sink relations and to constrain environmental magnetic proxy interpretations for this area. Magnetic extracts were prepared from sediments of three supralysoclinal open ocean gravity cores located at the Ceará Rise (GeoB 1523-1; 3°49.9'N/41°37.3'W), the Mid-Atlantic Ridge (GeoB 4313-2; 4°02.8'N/33°26.3'W), and the Sierra Leone Rise (GeoB 2910-1; 4°50.7'N/21°03.2'W). Sediments from two depths corresponding to marine isotope stages 4 and 5.5 were processed. This selection represents glacial and interglacial conditions of sedimentation for the western, central, and eastern equatorial Atlantic and avoids interference from subsurface and anoxic processes. Crystallographic, elemental, morphological, and granulometric data of more than 2000 magnetic particles were collected by scanning and transmission electron microscopy. On the basis of these properties, nine particle classes could be defined: detrital magnetite, titanomagnetite (fragmental and euhedral), titanomagnetite-hemoilmenite intergrowths, silicates with magnetic inclusions, microcrystalline hematite, magnetite spherules, bacterial magnetite, goethite needles, and nanoparticle clusters. Each class can be associated with fluvial, eolian, subaerial or submarine volcanic, biogenic, or chemogenic sources. Large-scale sedimentation patterns are delineated as well: detrital magnetite is typical of Amazon discharge, fragmental titanomagnetite is a submarine weathering product of mid-ocean ridge basalts, and titanomagnetite-hemoilmenite intergrowths are common magnetic particles in West African dust. This clear regionalization underlines that magnetic petrology is an excellent indicator of source-to-sink relations. Hematite encrustations, magnetic spherules, and nanoparticle clusters were found at all investigated sites, while bacterial magnetite and authigenic hematite were only detected at the more oxic western site. At the eastern site, surface pits and crevices were seen on the crystal faces, indicating subtle early diagenetic reductive dissolution. It was observed that paleoclimatic signatures of magnetogranulometric parameters, such as the ratio of anhysteretic and isothermal remanent magnetizations, can be formed either by mixing of multiple sources with separate, relatively narrow grain size ranges (western site) or by variable sorting of a single source with a broad grain size distribution (eastern site). Hematite, goethite, and possibly ferrihydrite nanoparticles occur in all sediment cores studied and have either high-coercive or superparamagnetic properties depending on their partly ultrafine grain sizes. These two magnetic fractions are generally discussed as separate fractions, but we suggest that they could actually be genetically linked.

Relevance: 60.00%

Abstract:

We consider a multipair relay channel, where multiple sources communicate with multiple destinations with the help of a full-duplex (FD) relay station (RS). All sources and destinations have a single antenna, while the RS is equipped with massive arrays. We assume that the RS estimates the channels using training sequences transmitted from the sources and destinations. It then uses maximum-ratio combining/maximum-ratio transmission (MRC/MRT) to process the signals. To significantly reduce the loop interference (LI) effect, we propose two massive MIMO processing techniques: (i) using a massive receive antenna array; or (ii) using a massive transmit antenna array together with very low transmit power at the RS. We derive an exact achievable rate in closed form and evaluate the system spectral efficiency. We show that, by doubling the number of antennas at the RS, the transmit power of each source and of the RS can be reduced by 1.5 dB if the pilot power is equal to the signal power, and by 3 dB if the pilot power is kept fixed, while maintaining a given quality of service. Furthermore, we compare FD and half-duplex (HD) modes and show that FD significantly improves performance when the LI level is low.
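
The dB figures above follow from the standard massive-MIMO power-scaling laws, restated here only as a quick check rather than as the paper's derivation: with the pilot power tied to the signal power the transmit powers can be cut as $1/\sqrt{N}$, and with fixed pilot power as $1/N$, so doubling $N$ gives

```latex
% Power reduction per source / RS from doubling the number of RS antennas N,
% under the two pilot-power assumptions stated in the abstract.
\begin{align*}
  P \propto \tfrac{1}{\sqrt{N}} &\;\Rightarrow\; 10\log_{10}\sqrt{2} \approx 1.5\ \mathrm{dB}
     && \text{(pilot power equal to signal power)},\\
  P \propto \tfrac{1}{N} &\;\Rightarrow\; 10\log_{10}2 \approx 3.0\ \mathrm{dB}
     && \text{(pilot power kept fixed)}.
\end{align*}
```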

Relevance: 60.00%

Abstract:

In urban areas, interchange spacing and the adequacy of design for weaving, merge, and diverge areas can significantly influence available capacity. Traffic microsimulation tools allow detailed analyses of these critical areas in complex locations that often yield results that differ from the generalized approach of the Highway Capacity Manual. In order to obtain valid results, various inputs should be calibrated to local conditions. This project investigated basic calibration factors for the simulation of traffic conditions within an urban freeway merge/diverge environment. By collecting and analyzing urban freeway traffic data from multiple sources, specific Iowa-based calibration factors for use in VISSIM were developed. In particular, a repeatable methodology for collecting standstill distance and headway/time gap data on urban freeways was applied to locations throughout the state of Iowa. This collection process relies on the manual processing of video for standstill distances and individual vehicle data from radar detectors to measure the headways/time gaps. By comparing the data collected from different locations, it was found that standstill distances vary by location and lead-follow vehicle types. Headways and time gaps were found to be consistent within the same driver population and across different driver populations when the conditions were similar. Both standstill distance and headway/time gap were found to follow fairly dispersed and skewed distributions. Therefore, it is recommended that microsimulation models be modified to include the option for standstill distance and headway/time gap to follow distributions as well as be set separately for different vehicle classes. In addition, for the driving behavior parameters that cannot be easily collected, a sensitivity analysis was conducted to examine the impact of these parameters on the capacity of the facility. The sensitivity analysis results can be used as a reference to manually adjust parameters to match the simulation results to the observed traffic conditions. A well-calibrated microsimulation model can enable a higher level of fidelity in modeling traffic behavior and serve to improve decision making in balancing need with investment.
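
Because the recommendation is to feed the simulation distributions rather than single values, one simple option is to fit a skewed distribution to the field measurements; the sketch below uses a lognormal fit to invented time-gap data, which is an illustrative choice rather than the distribution adopted in the project:

```python
import numpy as np
from scipy import stats

# Hypothetical time gaps (seconds) measured by radar at one urban freeway site.
rng = np.random.default_rng(0)
time_gaps = rng.lognormal(mean=0.3, sigma=0.5, size=500)

# Fit a lognormal with the location fixed at zero (gaps cannot be negative).
shape, loc, scale = stats.lognorm.fit(time_gaps, floc=0)
print(f"median gap = {scale:.2f} s, shape = {shape:.2f}")

# Percentiles of the fitted distribution, e.g., for populating a per-vehicle-class
# driving-behavior distribution in the microsimulation model.
print(stats.lognorm.ppf([0.1, 0.5, 0.9], shape, loc, scale))
```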

Relevance: 60.00%

Abstract:

One of the biggest challenges that contaminant hydrogeology is facing is how to adequately address the uncertainty associated with model predictions. Uncertainty arises from multiple sources, such as interpretive error, calibration accuracy, parameter sensitivity, and variability. This critical issue needs to be properly addressed in order to support environmental decision-making processes. In this study, we perform Global Sensitivity Analysis (GSA) on a contaminant transport model for the assessment of hydrocarbon concentration in groundwater. We quantify the environmental impact and, given the incomplete knowledge of hydrogeological parameters, we evaluate which are the most influential, requiring greater accuracy in the calibration process. Parameters are treated as random variables and a variance-based GSA is performed in an optimized numerical Monte Carlo framework. The Sobol indices are adopted as sensitivity measures, and they are computed by employing meta-models to characterize the migration process while reducing the computational cost of the analysis. The proposed methodology allows us to extend the number of Monte Carlo iterations, identify the influence of the uncertain parameters, and achieve considerable savings in computational time while maintaining acceptable accuracy.
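
To make the variance-based GSA concrete, the sketch below estimates first-order Sobol indices with a plain pick-freeze Monte Carlo scheme on a toy response function; the surrogate and parameter ranges are invented and are not the meta-models or parameters of the actual study:

```python
import numpy as np

def response(conductivity, porosity, source_conc):
    """Toy surrogate for a downstream hydrocarbon concentration (illustrative)."""
    return source_conc * np.exp(-0.5 / conductivity) * (1.0 - porosity)

rng = np.random.default_rng(0)
n = 100_000
low, high = [0.1, 0.2, 10.0], [2.0, 0.4, 50.0]     # made-up uniform parameter ranges
A = rng.uniform(low, high, size=(n, 3))
B = rng.uniform(low, high, size=(n, 3))
yA, yB = response(*A.T), response(*B.T)

for i, name in enumerate(["conductivity", "porosity", "source_conc"]):
    ABi = B.copy()
    ABi[:, i] = A[:, i]                 # "pick-freeze": B with column i taken from A
    S1 = np.mean(yA * (response(*ABi.T) - yB)) / yA.var()   # first-order Sobol index
    print(f"S1[{name}] = {S1:.2f}")
```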

Relevance: 60.00%

Abstract:

Poster presentation at the University of Maryland Libraries Research & Innovative Practice Forum on June 8, 2016. The poster proposes that the UMD Libraries should evaluate adoption of Bento Box Discovery for improved user search experience.

Relevance: 60.00%

Abstract:

The aim of this bachelor's thesis was to present the use of orbital TIG welding for the butt welding of pipes. The thesis presents the orbital TIG equipment available in Finland, the main principles of the TIG welding process, and the use of the process for orbital welding. Issues related to the productivity and quality of orbital TIG welding are also reviewed. From the equipment available in Finland, the units most suitable in terms of their functional dimensions were selected for welding the finned-tube heat exchanger designed for this thesis. The work was mainly carried out as a literature study, supported by interviews with manufacturers and resellers of orbital welding equipment. The written sources consist of domestic and international welding literature, and the text draws together multiple sources. Equipment-specific information was obtained from manufacturers' product bulletins and through e-mail interviews with the resellers and manufacturers. In Finland, orbital TIG equipment is imported and resold by Masino Welding Oy and Suomen Teknohaus Oy. Equipment is manufactured in Finland by Kemppi Oy, whose orbital equipment has several resellers. Masino Welding Oy sells equipment from the German company Orbitalum GmbH, and Suomen Teknohaus Oy sells equipment from the French company Polysoude S.A.S. Among these manufacturers, one enclosed-head orbital welding head from Orbitalum, one from Polysoude, and two from Kemppi are suitable for welding the designed finned-tube heat exchanger. In orbital TIG welding equipment, the power sources have developed the most over the years, and the differences between manufacturers' systems mainly relate to the power sources. The most suitable welding head type for welding the finned-tube heat exchanger is the enclosed head, as the ends of the heat exchanger are very cramped and enclosed heads are the most compact of the orbital welding heads. In Finland, orbital TIG equipment is not widely available, and marketing the equipment is to some extent unprofitable. Nevertheless, orbital welding is a viable alternative to manual TIG welding when many similar welds are produced.

Relevance: 60.00%

Abstract:

In the past decade, systems that extract information from millions of Internet documents have become commonplace. Knowledge graphs -- structured knowledge bases that describe entities, their attributes, and the relationships between them -- are a powerful tool for understanding and organizing this vast amount of information. However, a significant obstacle to knowledge graph construction is the unreliability of the extracted information, due to noise and ambiguity in the underlying data, errors made by the extraction system, and the complexity of reasoning about the dependencies between these noisy extractions. My dissertation addresses these challenges by exploiting the interdependencies between facts to improve the quality of the knowledge graph in a scalable framework. I introduce a new approach called knowledge graph identification (KGI), which resolves the entities, attributes, and relationships in the knowledge graph by incorporating uncertain extractions from multiple sources, entity co-references, and ontological constraints. I define a probability distribution over possible knowledge graphs and infer the most probable knowledge graph using a combination of probabilistic and logical reasoning. Such probabilistic models are frequently dismissed due to scalability concerns, but my implementation of KGI maintains tractable performance on large problems through the use of hinge-loss Markov random fields, which have a convex inference objective. This allows the inference of large knowledge graphs with 4M facts and 20M ground constraints in 2 hours. To further scale the solution, I develop a distributed approach to the KGI problem which runs in parallel across multiple machines, reducing inference time by 90%. Finally, I extend my model to the streaming setting, where a knowledge graph is continuously updated by incorporating newly extracted facts. I devise a general approach for approximately updating inference in convex probabilistic models, and quantify the approximation error by defining and bounding inference regret for online models. Together, my work retains the attractive features of probabilistic models while providing the scalability necessary for large-scale knowledge graph construction. These models have been applied to a number of real-world knowledge graph projects, including the NELL project at Carnegie Mellon and the Google Knowledge Graph.
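
A much-reduced illustration of the convex inference step: a single hand-written hinge-loss potential over soft truth values in [0, 1], solved with a generic optimizer. The predicates, weights, and confidences below are hypothetical, not the actual KGI rules or extractions:

```python
import numpy as np
from scipy.optimize import minimize

# Soft truth values x = [spouse(a,b), livesIn(a,NY), livesIn(b,NY)].
extractions = np.array([0.9, 0.8, 0.1])   # noisy extractor confidences (illustrative)

def objective(x):
    # Hinge-loss potential for the rule spouse(a,b) & livesIn(a,NY) -> livesIn(b,NY)
    # under Lukasiewicz logic: distance to satisfaction max(0, x0 + x1 - 1 - x2).
    rule = max(0.0, x[0] + x[1] - 1.0 - x[2])
    # Potentials tying each atom to its extracted confidence.
    fit = np.sum((x - extractions) ** 2)
    return 2.0 * rule ** 2 + fit            # weighted, squared hinge losses

res = minimize(objective, x0=extractions, bounds=[(0.0, 1.0)] * 3)
print(np.round(res.x, 2))   # livesIn(b,NY) is pulled upward by the rule
```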

Relevance: 60.00%

Abstract:

The purpose of this Master’s Thesis was to study the suitability of the transportation of liquid wastes for the portfolio of the case company. After the preliminary study, the waste types were narrowed down to waste oil and oily waste from ports. The thesis was executed by generating a business plan. The qualitative research of this Master’s Thesis was executed as a case study by collecting information from multiple sources. The business plan was carried out by first familiarizing oneself with the literature on business planning, which was then used as a basis for the interview of the customer and the interviews of the personnel of the case company. Additionally, internet sources and informal conversational interviews with the personnel of the case company were used; these interviews took place during the preliminary study and this thesis. The results of this thesis describe the requirements that the case company must meet to be able to start operations. Import of waste oil fits perfectly into the portfolio of the case company and does not require any large investments. The success of waste oil imports is affected by, among other things, the price of crude oil, the exchange rate of the ruble, and legislation. Transportation of oily waste from ports, in turn, is not a core competence of the case company, so more actions, such as subcontracting with a waste management company, are required to start operating.

Relevance: 60.00%

Abstract:

Students often receive instruction from specialists, professionals other than their general educators, such as special educators, reading specialists, and ESOL (English for Speakers of Other Languages) teachers. The purpose of this study was to examine how general educators and specialists develop collaborative relationships over time within the context of receiving professional development. While collaboration is considered essential to increasing student achievement, improving teachers’ practice, and creating comprehensive school reform, collaborative partnerships take time to develop and require multiple sources of support. Additionally, both practitioners and researchers often conflate collaboration with structural reforms such as co-teaching. This study used a retrospective single case study with a grounded theory approach to analysis. Data were collected through semi-structured interviews with thirteen teachers and an administrator after three workshops were conducted throughout the school year. The resulting theory, Cultivating Interprofessional Collaboration, describes how interprofessional relationships grow as teachers engage in a cycle of learning, constructing partnership, and reflecting. As relationships deepen, some partners experience a seamless dimension to their work. A variety of intrapersonal, interpersonal, and external factors work in concert to promote this growth, which is strengthened through professional development. In this theory, professional development provides a common ground for strengthening relationships, knowledge about the collaborative process, and a reflective space to create new collaborative practices. Effective collaborative practice can lead to aligned instruction and teachers’ own professional growth. This study has implications for school interventions, professional development, and future research on collaboration in schools.

Relevance: 60.00%

Abstract:

Libraries, since their inception 4,000 years ago, have been in a process of constant change. Although change was slow for centuries, in recent decades academic libraries have been continuously striving to adapt their services to the ever-changing needs of students and academic staff. In addition, the e-content revolution, technological advances, and ever-shrinking budgets have obliged libraries to allocate their limited resources efficiently between collection and services. Unfortunately, this resource allocation is a complex process due to the diversity of data sources and formats that must be analyzed prior to decision-making, as well as the lack of efficient integration methods. The main purpose of this study is to develop an integrated model that supports libraries in making optimal budgeting and resource allocation decisions among their services and collection by means of a holistic analysis. To this end, a combination of several methodologies and structured approaches is conducted. Firstly, a holistic structure and the required toolset to holistically assess academic libraries are proposed to collect and organize the data from an economic point of view. A four-pronged theoretical framework is used in which the library system and collection are analyzed from the perspectives of users and internal stakeholders. The first quadrant corresponds to the internal perspective of the library system: the library performance, and the costs incurred and resources consumed by library services, are analyzed. The second quadrant evaluates the external perspective of the library system: users’ perception of service quality is judged in this quadrant. The third quadrant analyzes the external perspective of the library collection: the impact of the current library collection on its users is evaluated. Finally, the fourth quadrant evaluates the internal perspective of the library collection: the usage patterns followed in manipulating the library collection are analyzed. With a complete framework for data collection in place, these data, coming from multiple sources and therefore in different formats, need to be integrated and stored in an adequate scheme for decision support. Secondly, a data warehousing approach is designed and implemented to integrate, process, and store the holistically collected data. Ultimately, the strategic data stored in the data warehouse are analyzed and used for different purposes, including the following: 1) Data visualization and reporting is proposed to allow library managers to publish library indicators in a simple and quick manner using online reporting tools. 2) Sophisticated data analysis is recommended through the use of data mining tools; three data mining techniques are examined in this research study: regression, clustering, and classification. These techniques have been applied to the case study in the following manner: predicting future investment in library development; finding clusters of users that share common interests and similar profiles but belong to different faculties; and predicting library factors that affect student academic performance by analyzing possible correlations between library usage and academic performance. 3) As input for optimization models, early experiences of developing an optimal resource allocation model to distribute resources among the different processes of a library system are documented in this study.
Specifically, the problem of allocating funds for the digital collection among the divisions of an academic library is addressed. An optimization model for this problem is defined with the objective of maximizing the usage of the digital collection over all library divisions, subject to a single collection budget. By proposing this holistic approach, the research study contributes to knowledge by providing an integrated solution that assists library managers in making economic decisions based on an “as realistic as possible” perspective of the library situation.
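
An elementary version of the allocation model described above can be posed as a linear program that maximizes expected usage of the digital collection across divisions subject to the single collection budget; the usage rates and caps below are invented, not the library's figures:

```python
import numpy as np
from scipy.optimize import linprog

# Expected uses gained per euro spent in each division (illustrative figures).
usage_per_euro = np.array([1.8, 1.2, 0.9, 1.5])
budget = 100_000.0
division_cap = np.array([40_000.0, 50_000.0, 30_000.0, 45_000.0])  # max useful spend

# linprog minimizes, so negate the usage objective; one total-budget constraint,
# plus per-division bounds on how much can usefully be spent.
res = linprog(
    c=-usage_per_euro,
    A_ub=np.ones((1, usage_per_euro.size)),
    b_ub=[budget],
    bounds=[(0.0, cap) for cap in division_cap],
)
print(res.x, -res.fun)   # optimal spend per division and total expected usage
```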

Relevance: 60.00%

Abstract:

Natural radioactive tracer-based assessments of basin-scale submarine groundwater discharge (SGD) are well developed. However, SGD takes place in different modes, and the flow and discharge mechanisms involved occur over a wide range of spatial and temporal scales. Quantifying SGD while discriminating its source functions therefore remains a major challenge. However, correctly identifying both the fluid source and composition is critical. When multiple sources of the tracer of interest are present, failure to adequately discriminate between them leads to inaccurate attribution, and the resulting uncertainties will affect the reliability of SGD solute loading estimates. This lack of reliability then extends to the closure of local biogeochemical budgets, confusing measures aiming to mitigate pollution. Here, we report a multi-tracer study to identify the sources of SGD, distinguish its component parts, and elucidate the mechanisms of their dispersion throughout the Ria Formosa – a seasonally hypersaline lagoon in Portugal. We combine radon budgets that determine the total SGD (meteoric + recirculated seawater) in the system with stable isotopes in water (δ²H, δ¹⁸O) to specifically identify SGD source functions and characterize active hydrological pathways in the catchment. Using this approach, SGD in the Ria Formosa could be separated into two modes, a net meteoric water input and another involving no net water transfer, i.e., originating in lagoon water recirculated through permeable sediments. The former SGD mode is present occasionally on a multi-annual timescale, while the latter is a dominant feature of the system. In the absence of meteoric SGD inputs, seawater recirculation through beach sediments occurs at a rate of ∼1.4 × 10⁶ m³ day⁻¹. This implies that the entire tidal-averaged volume of the lagoon is filtered through local sandy sediments within 100 days (∼3.5 times a year), driving an estimated nitrogen (N) load of ∼350 Ton N yr⁻¹ into the system as NO₃⁻. Land-borne SGD could add a further ∼61 Ton N yr⁻¹ to the lagoon. The former source is autochthonous, continuous, and responsible for a large fraction (59%) of the estimated total N inputs into the system via non-point sources, while the latter is an occasional allochthonous source capable of driving new production in the system.
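
As a quick consistency check on these recirculation figures (the lagoon volume below is simply implied by the quoted rate and filtration time, not an independently reported value):

```latex
% Implied tidal-averaged lagoon volume and annual turnover through the beach sands.
\begin{align*}
  V \approx Q \times T &= 1.4\times10^{6}\ \mathrm{m^{3}\,day^{-1}} \times 100\ \mathrm{days}
      \approx 1.4\times10^{8}\ \mathrm{m^{3}},\\
  \frac{365\ \mathrm{days}}{100\ \mathrm{days}} &\approx 3.7\ \text{filtrations per year,}
\end{align*}
```

which agrees with the roughly 3.5 lagoon-volume filtrations per year quoted above, allowing for rounding of the 100-day filtration time.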

Relevance: 60.00%

Abstract:

Managed lane strategies are innovative road operation schemes for addressing congestion problems. These strategies operate a lane (or lanes) adjacent to a freeway that provides congestion-free trips to eligible users, such as transit or toll-payers. To ensure the successful implementation of managed lanes, the demand on these lanes needs to be accurately estimated. Among the different approaches for predicting this demand, the four-step demand forecasting process is the most common. Managed lane demand is usually estimated at the assignment step. Therefore, the key to reliably estimating the demand is the utilization of effective assignment modeling processes. Managed lanes are particularly effective when the road is functioning at near-capacity. Therefore, capturing variations in demand and in network attributes and performance is crucial for their modeling, monitoring, and operation. As a result, traditional modeling approaches, such as those used in the static traffic assignment of demand forecasting models, fail to correctly predict managed lane demand and the associated system performance. The present study demonstrates the power of the more advanced modeling approach of dynamic traffic assignment (DTA), as well as the shortcomings of conventional approaches, when used to model managed lanes in congested environments. In addition, the study develops processes to support the effective utilization of DTA to model managed lane operations. Static and dynamic traffic assignments consist of demand, network, and route choice model components that need to be calibrated. These components interact with each other, and an iterative method for calibrating them is needed. In this study, an effective standalone framework that combines static demand estimation and dynamic traffic assignment has been developed to replicate real-world traffic conditions. With advances in traffic surveillance technologies, collecting, archiving, and analyzing traffic data is becoming more accessible and affordable. The present study shows how data from multiple sources can be integrated, validated, and best used in the different stages of modeling and calibration of managed lanes. Extensive and careful processing of demand, traffic, and toll data, as well as proper definition of performance measures, results in a calibrated and stable model which closely replicates real-world congestion patterns and can reasonably respond to perturbations in network and demand properties.
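
As one concrete example of the route-choice component that such a framework must calibrate, the sketch below uses a generic binary logit between the managed lane and the general-purpose lanes; the logit form and the coefficients are illustrative assumptions, not the calibrated model from this study:

```python
import math

def managed_lane_share(gp_time_min, ml_time_min, toll_usd,
                       beta_time=-0.1, beta_toll=-0.4):
    """Binary logit probability of choosing the managed lane (illustrative)."""
    v_ml = beta_time * ml_time_min + beta_toll * toll_usd   # managed-lane utility
    v_gp = beta_time * gp_time_min                          # general-purpose utility
    return math.exp(v_ml) / (math.exp(v_ml) + math.exp(v_gp))

# Example: a 6-minute travel-time saving weighed against a $2.00 toll.
print(f"{managed_lane_share(gp_time_min=20, ml_time_min=14, toll_usd=2.0):.0%}")
# Calibration would adjust beta_time and beta_toll (and the other DTA inputs)
# until simulated lane shares and speeds match the observed traffic and toll data.
```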