989 results for Digital Common
Abstract:
This paper introduces a new construct that we term Math Mediated Language (MML), focusing on the notion that common or everyday terms with mathematical meanings are important building blocks for students' mathematical reasoning. A survey given to 96 pre-service early childhood educators indicated clear patterns in their perceptions of these terms.
Abstract:
Context: Accurately determining hydration status is a preventative measure for exertional heat illnesses (EHI). Objective: To determine the validity of various field measures of urine specific gravity (Usg) compared to laboratory instruments. Design: Observational research design comparing measures of hydration status: urine reagent strips (URS) and a urine color (Ucol) chart against a refractometer. Setting: We utilized the athletic training room of a Division I-A collegiate American football team. Participants: Trial 1 involved urine samples from 69 veteran football players (age = 20.1 ± 1.2 yr; body mass = 229.7 ± 44.4 lb; height = 72.2 ± 2.1 in). Trial 2 involved samples from 5 football players (age = 20.4 ± 0.5 yr; body mass = 261.4 ± 39.2 lb; height = 72.3 ± 2.3 in). Interventions: We administered the Heat Illness Index Score (HIIS) Risk Assessment to identify athletes at risk for EHI (Trial 1). For individuals "at risk" (Trial 2), we collected urine samples before and after 15 days of pre-season "two-a-day" practices in a hot, humid environment (mean on-field WBGT = 28.84 ± 2.36 °C). Main Outcome Measures: Urine samples were immediately analyzed for Usg using a refractometer, Diascreen 7® (URS1), Multistix® (URS2), and Chemstrip10® (URS3). Ucol was measured using a Ucol chart. We calculated descriptive statistics for all main measures and Pearson correlations to assess relationships between the refractometer, each URS, and Ucol, and we transformed Ucol data to Z-scores for comparison to the refractometer. Results: In Trial 1, we found a moderate relationship (r = 0.491, p < .01) between URS1 (1.020 ± 0.006) and the refractometer (1.026 ± 0.010). In Trial 2, we found marked relationships for Ucol (5.6 ± 1.6 shades, r = 0.619, p < 0.01), URS2 (1.019 ± 0.008, r = 0.712, p < 0.01), and URS3 (1.022 ± 0.007, r = 0.689, p < 0.01) compared to the refractometer (1.028 ± 0.008). Conclusions: Our findings indicate that URS results were inconsistent between manufacturers, suggesting practitioners use the clinical refractometer to accurately determine Usg and monitor hydration status.
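A minimal sketch of the comparison described above, using hypothetical readings rather than the study's data: Pearson correlations between each field measure and the refractometer criterion, plus the Z-score transform that puts Ucol shades and Usg values on a common scale.

```python
import numpy as np
from scipy import stats

# Hypothetical readings for a handful of samples (not the study's data)
refractometer = np.array([1.018, 1.025, 1.030, 1.022, 1.028])  # Usg
urs_strip     = np.array([1.015, 1.020, 1.025, 1.020, 1.025])  # Usg via reagent strip
urine_color   = np.array([3, 5, 7, 4, 6])                      # Ucol chart shades

# Pearson correlations against the refractometer criterion
r_strip, p_strip = stats.pearsonr(refractometer, urs_strip)
r_color, p_color = stats.pearsonr(refractometer, urine_color)

# Z-score transform places Ucol shades and Usg values on a common, unitless scale
z_refrac = stats.zscore(refractometer)
z_color  = stats.zscore(urine_color)

print(f"URS  vs refractometer: r={r_strip:.3f}, p={p_strip:.3f}")
print(f"Ucol vs refractometer: r={r_color:.3f}, p={p_color:.3f}")
print("mean |z difference|:", np.abs(z_refrac - z_color).mean().round(3))
```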
Abstract:
Gene-based tests of association are frequently applied to common SNPs (MAF>5%) as an alternative to single-marker tests. In this analysis we conduct a variety of simulation studies applied to five popular gene-based tests investigating general trends related to their performance in realistic situations. In particular, we focus on the impact of non-causal SNPs and a variety of LD structures on the behavior of these tests. Ultimately, we find that non-causal SNPs can significantly impact the power of all gene-based tests. On average, we find that the “noise” from 6–12 non-causal SNPs will cancel out the “signal” of one causal SNP across five popular gene-based tests. Furthermore, we find complex and differing behavior of the methods in the presence of LD within and between non-causal and causal SNPs. Ultimately, better approaches for a priori prioritization of potentially causal SNPs (e.g., predicting functionality of non-synonymous SNPs), application of these methods to sequenced or fully imputed datasets, and limited use of window-based methods for assigning inter-genic SNPs to genes will improve power. However, significant power loss from non-causal SNPs may remain unless alternative statistical approaches robust to the inclusion of non-causal SNPs are developed.
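A hedged sketch of the kind of power simulation described above: one causal SNP plus a growing number of non-causal SNPs, analysed with a simple burden-style sum test as a stand-in (not one of the five published gene-based methods). Sample size, MAF, effect size, and replicate count are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulate_power(n_noncausal, n_subjects=1000, maf=0.2, effect=0.3,
                   n_reps=500, alpha=0.05):
    """Empirical power of a burden-style sum test with one causal SNP."""
    hits = 0
    for _ in range(n_reps):
        # Independent genotypes: column 0 is causal, the rest are non-causal
        geno = rng.binomial(2, maf, size=(n_subjects, 1 + n_noncausal))
        # Quantitative trait driven only by the causal SNP
        y = effect * geno[:, 0] + rng.normal(size=n_subjects)
        # Gene-based statistic: regress the trait on the summed genotype
        res = stats.linregress(geno.sum(axis=1), y)
        hits += res.pvalue < alpha
    return hits / n_reps

for k in (0, 3, 6, 12):
    print(f"{k:2d} non-causal SNPs -> power ~ {simulate_power(k):.2f}")
```

Even in this toy setup the power decays steadily as non-causal SNPs dilute the summed signal, which is the qualitative effect the abstract reports.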
Abstract:
Traditional optics has provided ways to compensate for some common visual limitations (up to second-order visual impairments) through spectacles or contact lenses. Recent developments in wavefront science make it possible to obtain an accurate model of the Point Spread Function (PSF) of the human eye. Through what is known as the "Wavefront Aberration Function" of the human eye, exact knowledge of the optical aberration of the human eye is possible, allowing a mathematical model of the PSF to be obtained. This model could be used to pre-compensate (inverse-filter) the images displayed on computer screens in order to counter the distortion in the user's eye. This project takes advantage of the fact that the wavefront aberration function, commonly expressed as a Zernike polynomial, can be generated from the ophthalmic prescription used to fit spectacles to a person. This allows the pre-compensation, or onscreen deblurring, to be done for various visual impairments up to second order (commonly known as myopia, hyperopia, or astigmatism). We present the technique proposed towards that goal, together with results obtained by introducing a lens with a known PSF into the visual path of subjects without visual impairment. In addition to substituting for the effect of spectacles or contact lenses in correcting the low-order visual limitations of the viewer, the significance of this approach is that it has the potential to address higher-order abnormalities in the eye, currently not correctable by simple means.
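A minimal sketch of the pre-compensation (inverse filtering) idea, assuming a generic Gaussian blur as a stand-in PSF; the project itself derives the PSF from the eye's wavefront aberration (Zernike) model, which is not reproduced here.

```python
import numpy as np

def gaussian_psf(size=64, sigma=2.0):
    """Stand-in PSF (assumed Gaussian), normalized to unit energy."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

image = np.random.rand(64, 64)            # stand-in for the on-screen image
psf = gaussian_psf()                      # stand-in for the eye's PSF
H = np.fft.fft2(np.fft.ifftshift(psf))    # optical transfer function

# Pre-compensate with a regularized (Wiener-style) inverse filter so that
# subsequent blurring by the PSF approximately recovers the original image.
eps = 1e-2
G = np.conj(H) / (np.abs(H) ** 2 + eps)
displayed = np.real(np.fft.ifft2(np.fft.fft2(image) * G))

# What the eye "sees": the displayed image blurred by the PSF
perceived = np.real(np.fft.ifft2(np.fft.fft2(displayed) * H))
print("mean reconstruction error:", np.abs(perceived - image).mean())
```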
Abstract:
The diversity of ethnic and cultural groups and the effects of language in the therapeutic relationship are timely professional issues of concern to occupational therapy practitioners. The tri-ethnic, tri-cultural South Florida area offers a natural environment in which to study how patient-therapist interactions are influenced by language barriers in a diverse society. This study examines the effects of language on the adequacy of occupational therapy services, specifically how language affects the length of the treatment program. The nature of the diagnosis, therapists' ethnicity, and how they impact treatment outcomes are also addressed. A sample was drawn from the occupational therapy outpatient department of a large county hospital. Data taken from patients' charts included race, sex, age, diagnosis, and language. Number of treatment sessions and length of treatment were viewed as proxy measures for adequacy. Findings indicate that the effect of language cannot be understood apart from ethnicity. Implications for occupational therapy practice are discussed.
Abstract:
In the wake of steadily increasing ethnic diversity among Blacks in the United States, efforts need to be made to analyze and understand the dynamics of the relations among the various Black ethnic groups in the United States. This thesis explores the present state of relations among these groups by utilizing an extensive literature review on the topic in conjunction with in-depth interviews. Of particular interest here are the differing and similar intergroup perspectives on self-identity, as well as any cultural similarities and dissimilarities that exist. We find that the cultural dissimilarities create barriers to harmonious relations among the groups, while particular ideologies such as Pan-Africanism and Black nationalism provide the basis for strong unified fronts and partnerships for those who embrace them.
Abstract:
http://digitalcommons.fiu.edu/fce_lter_photos/1287/thumbnail.jpg
Abstract:
http://digitalcommons.fiu.edu/fce_lter_photos/1289/thumbnail.jpg
Abstract:
http://digitalcommons.fiu.edu/fce_lter_photos/1284/thumbnail.jpg
Abstract:
Background: Digital forensics is a rapidly expanding field, due to the continuing advances in computer technology and increases in the data storage capabilities of devices. However, the tools supporting digital forensics investigations have not kept pace with this evolution, often leaving the investigator to analyse large volumes of textual data and rely heavily on their own intuition and experience. Aim: This research proposes that, given the ability of information visualisation to provide an end user with an intuitive way to rapidly analyse large volumes of complex data, such approaches could be applied to digital forensics datasets. Such methods are investigated, supported by a review of literature regarding the use of such techniques in other fields. The hypothesis of this body of research is that by utilising exploratory information visualisation techniques in the form of a tool to support digital forensic investigations, gains in investigative effectiveness can be realised. Method: To test the hypothesis, this research examines three different case studies which look at different forms of information visualisation and their implementation with a digital forensic dataset. Two of these case studies take the form of prototype tools developed by the researcher, and one case study utilises a tool created by a third-party research group. A pilot study by the researcher is conducted on these cases, with the strengths and weaknesses of each being drawn into the next case study. The culmination of these case studies is a prototype tool which presents a timeline visualisation of the user behaviour on a device. This tool was subjected to an experiment involving a class of university digital forensics students who were given a number of questions about a synthetic digital forensic dataset. Approximately half were given the prototype tool, named Insight, to use, and the others were given a common open-source tool. The assessed metrics included: how long the participants took to complete all tasks, how accurate their answers to the tasks were, and how easy the participants found the tasks to complete. They were also asked for their feedback at multiple points throughout the task. Results: The results showed that there was a statistically significant increase in accuracy for one of the six tasks for the participants using the Insight prototype tool. Participants also found completing two of the six tasks significantly easier when using the prototype tool. There was no statistically significant difference between the completion times of the two participant groups. There were no statistically significant differences in the accuracy of participant answers for five of the six tasks. Conclusions: The results from this body of research show that there is evidence to suggest that there is potential for gains in investigative effectiveness when information visualisation techniques are applied to a digital forensic dataset. Specifically, in some scenarios, the investigator can draw conclusions which are more accurate than those drawn when using primarily textual tools. There is also evidence to suggest that the investigators reached these conclusions significantly more easily when using a tool with a visual format. None of the scenarios led to the investigators being at a significant disadvantage in terms of accuracy or usability when using the prototype visual tool over the textual tool.
It is noted that this research did not show that the use of information visualisation techniques leads to any statistically significant difference in the time taken to complete a digital forensics investigation.
Abstract:
Certain environments can inhibit learning and stifle enthusiasm, while others enhance learning or stimulate curiosity. Furthermore, in a world where technological change is accelerating, we might ask how architecture could connect resource-abundant and resource-scarce innovation environments. Innovation environments developed out of necessity within urban villages and those developed with high intention and expectation within more institutionalized settings share a framework of opportunity for addressing change through learning and education. This thesis investigates formal and informal learning environments and how architecture can stimulate curiosity, enrich learning, create common ground, and expand access to education. The reason for this exploration is to better understand how architects might design inclusive environments that bring people together to build sustainable infrastructure, encouraging innovation and adaptation to change for years to come. The context of this thesis is largely based on Colin McFarlane's theory that the "city is an assemblage for learning." The socio-spatial perspective in urbanism considers how built infrastructure and society interact. Through the urban realm, inhabitants learn to negotiate the people, space, politics, and resources affecting their daily lives. The city is therefore a dynamic field of emergent possibility. This thesis uses the city as a lens through which the boundaries between informal and formal logics, as well as the public and private, might be blurred. Through analytical processes I have examined the environmental devices and assemblage of factors that consistently provide conditions through which learning may thrive. These parameters that make a creative space significant can help suggest the design of common-ground environments through which innovation is catalyzed.
Abstract:
Institutions are widely regarded as important, even ultimate, drivers of economic growth and performance. A recent mainstream of institutional economics has concentrated on the effect of persisting, often imprecisely measured institutions and on cataclysmic events as agents of noteworthy institutional change. As a consequence, institutional change without large-scale shocks has received little attention. In this dissertation I apply a complementary, quantitative-descriptive approach that relies on measures of actually enforced institutions to study institutional persistence and change over a long time period that is undisturbed by the typically studied cataclysmic events. By placing institutional change at the center of attention, one can recognize different speeds of institutional innovation and the continuous coexistence of institutional persistence and change. Specifically, I combine text mining procedures, network analysis techniques, and statistical approaches to study persistence and change in England's common law over the Industrial Revolution (1700-1865). Based on the doctrine of precedent - a peculiarity of common law systems - I construct and analyze what appears to be the first citation network reflecting lawmaking in England. Most strikingly, I find large-scale change in the making of English common law around the turn of the 19th century - a period free from the typically studied cataclysmic events. Within a few decades a legal innovation process with low depreciation rates (1 to 2 percent) and strong past-persistence transitioned to a present-focused innovation process with significantly higher depreciation rates (4 to 6 percent) and weak past-persistence. Comparison with U.S. Supreme Court data reveals a similar U.S. transition towards the end of the 19th century. The English and U.S. transitions appear to have unfolded in a very specific manner: a new body of law arose during the transitions and developed in a self-referential manner while the existing body of law lost influence, but remained prominent. Additional findings suggest that Parliament doubled its influence on the making of case law within the first decades after the Glorious Revolution and that England's legal rules manifested a high degree of long-term persistence. The latter allows for the possibility that the often-noted persistence of institutional outcomes derives from the actual persistence of institutions.
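One hedged way a citation-based depreciation rate of this kind could be estimated, assuming citations decay exponentially with the age of the cited precedent; the citation pairs and the log-linear fit below are purely illustrative and are not the dissertation's actual estimation procedure.

```python
import numpy as np

# Hypothetical (citing_year, cited_year) pairs from a case-citation network
citations = np.array([(1800, 1795), (1800, 1790), (1801, 1799),
                      (1802, 1780), (1803, 1801), (1803, 1798)])
ages = citations[:, 0] - citations[:, 1]   # age of each cited precedent

# Citations per age, then a log-linear fit: log(count) ~ -delta * age
max_age = ages.max()
counts = np.bincount(ages, minlength=max_age + 1)[1:]   # ages 1..max_age
age_axis = np.arange(1, max_age + 1)
mask = counts > 0
slope, intercept = np.polyfit(age_axis[mask], np.log(counts[mask]), 1)
print(f"implied annual depreciation rate ~ {-slope:.3f}")
```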
Abstract:
Many applications, including communications, test and measurement, and radar, require the generation of signals with a high degree of spectral purity. One method for producing tunable, low-noise source signals is to combine the outputs of multiple direct digital synthesizers (DDSs) arranged in a parallel configuration. In such an approach, if all noise is uncorrelated across channels, the noise will decrease relative to the combined signal power, resulting in a reduction of sideband noise and an increase in SNR. However, in any real array, the broadband noise and spurious components will be correlated to some degree, limiting the gains achieved by parallelization. This thesis examines the potential performance benefits that may arise from using an array of DDSs, with a focus on several types of common DDS errors, including phase noise, phase truncation spurs, quantization noise spurs, and quantizer nonlinearity spurs. Measurements to determine the level of correlation among DDS channels were made on a custom 14-channel DDS testbed. The investigation of the phase noise of a DDS array indicates that the contribution to the phase noise from the DACs can be decreased to a desired level by using a large enough number of channels. In such a system, the phase noise qualities of the source clock and the system cost and complexity will be the main limitations on the phase noise of the DDS array. The study of phase truncation spurs suggests that, at least in our system, the phase truncation spurs are uncorrelated, contrary to the theoretical prediction. We believe this decorrelation is due to the existence of an unidentified mechanism in our DDS array that is unaccounted for in our current operational DDS model. This mechanism, likely due to some timing element in the FPGA, causes some randomness in the relative phases of the truncation spurs from channel to channel each time the DDS array is powered up. This randomness decorrelates the phase truncation spurs, opening the potential for SFDR gain from using a DDS array. The analysis of the correlation of quantization noise spurs in an array of DDSs shows that the total quantization noise power of each DDS channel is uncorrelated for nearly all values of DAC output bits. This suggests that a near N gain in SQNR is possible for an N-channel array of DDSs. This gain will be most apparent for low-bit DACs in which quantization noise is notably higher than the thermal noise contribution. Lastly, the measurements of the correlation of quantizer nonlinearity spurs demonstrate that the second and third harmonics are highly correlated across channels for all frequencies tested. This means that there is no benefit to using an array of DDSs for the problems of in-band quantizer nonlinearities. As a result, alternate methods of harmonic spur management must be employed.
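A quick numerical illustration (not the thesis's 14-channel testbed) of why combining N channels with uncorrelated noise yields roughly an N-fold (10*log10(N) dB) SNR gain, while noise or spurs that are fully correlated across channels see no benefit at all.

```python
import numpy as np

rng = np.random.default_rng(1)
N_CHANNELS, N_SAMPLES = 14, 100_000
t = np.arange(N_SAMPLES)
signal = np.sin(2 * np.pi * 0.01 * t)        # identical tone on every channel

def snr_db(sig, noise):
    return 10 * np.log10(np.mean(sig**2) / np.mean(noise**2))

# Case 1: noise independent (uncorrelated) across channels
noise_uncorr = rng.normal(0, 0.1, size=(N_CHANNELS, N_SAMPLES))
print("single channel:         %.1f dB" % snr_db(signal, noise_uncorr[0]))
print("combined, uncorrelated: %.1f dB"
      % snr_db(N_CHANNELS * signal, noise_uncorr.sum(axis=0)))  # ~ +10*log10(14) dB

# Case 2: the same noise/spur realization on every channel (fully correlated)
noise_corr = rng.normal(0, 0.1, size=N_SAMPLES)
print("combined, correlated:   %.1f dB"
      % snr_db(N_CHANNELS * signal, N_CHANNELS * noise_corr))   # no gain
```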
Abstract:
Advances in digital photography and distribution technologies enable many people to produce and distribute images of their sex acts. When teenagers do this, the photos and videos they create can be legally classified as child pornography, since the law makes no exception for youth who create sexually explicit images of themselves. The dominant discussions about teenage girls producing sexually explicit media (including sexting) are profoundly unproductive: (1) they blame teenage girls for creating private images that another person later maliciously distributed and (2) they fail to respect—or even discuss—teenagers' rights to freedom of expression. Cell phones and the internet make producing and distributing images extremely easy, providing widely accessible venues both for consensual sexual expression between partners and for sexual harassment. Dominant understandings view sexting as a troubling teenage trend created through the combination of camera phones and adolescent hormones and impulsivity, but this view often treats consensual sexting between partners and the malicious distribution of a person's private image as essentially equivalent behaviors. In this project, I ask: What is the role of assumptions about teen girls' sexual agency in these problematic understandings of sexting that blame victims and deny teenagers' rights? In contrast to the popular media panic about online predators and the familiar accusation that youth are wasting their leisure time by using digital media, some people champion the internet as a democratic space that offers young people the opportunity to explore identities and develop social and communication skills. Yet, when teen girls' sexuality enters this conversation, all this debate and discussion narrows to a problematic consensus. The optimists about adolescents and technology fall silent, and the argument that media production is inherently empowering for girls does not seem to apply to a girl who produces a sexually explicit image of herself. Instead, feminist, popular, and legal commentaries assert that she is necessarily a victim: of a "sexualized" mass media, pressure from her male peers, digital technology, her brain structures or hormones, or her own low self-esteem and misplaced desire for attention. Why and how are teenage girls' sexual choices produced as evidence of their failure or success in achieving Western liberal ideals of self-esteem, resistance, and agency? Since mass media and policy reactions to sexting have so far been overwhelmingly sexist and counter-productive, it is crucial to interrogate the concepts and assumptions that characterize mainstream understandings of sexting. I argue that the common sense that is co-produced by law and mass media underlies the problematic legal and policy responses to sexting. Analyzing a range of nonfiction texts including newspaper articles, talk shows, press releases, public service announcements, websites, legislative debates, and legal documents, I investigate gendered, racialized, age-based, and technologically determinist common sense assumptions about teenage girls' sexual agency. I examine the consensus and continuities that exist between news, nonfiction mass media, policy, institutions, and law, and describe the limits of their debates. I find that this early 21st century post-feminist girl-power moment not only demands that girls live up to gendered sexual ideals but also insists that actively choosing to follow these norms is the only way to exercise sexual agency.
This is the first study to date examining the relationship of conventional wisdom about digital media and teenage girls’ sexuality to both policy and mass media.