7 results for LINK-BASED AND MULTIDIMENSIONAL QUERY LANGUAGE (LMDQL)

in DRUM (Digital Repository at the University of Maryland)


Relevance:

100.00%

Publisher:

Abstract:

The relevance of explicit instruction has been well documented in second language acquisition (SLA) research. Despite numerous positive findings, however, the issue continues to engage scholars worldwide. One issue that was largely neglected in previous empirical studies - and one that may be crucial for the effectiveness of explicit instruction - is the timing and integration of rules and practice. The present study investigated the extent to which grammar explanation (GE) before practice, grammar explanation during practice, and individual differences impact the acquisition of L2 declarative and procedural knowledge of two grammatical structures in Spanish. In this experiment, 128 English-speaking learners of Spanish were randomly assigned to four experimental treatments and completed comprehension-based, task-essential practice for interpreting object-verb (OV) and ser/estar (SER) sentences in Spanish. Results confirmed the predicted importance of the timing of GE: participants who received GE during practice were more likely to develop and retain their knowledge successfully. Results further revealed that the various combinations of rules and practice posed differential task demands on the learners and consequently drew on language aptitude and working memory (WM) to different extents. Since the correlations between individual differences and learning outcomes were weakest in the conditions that received GE during practice, we argue that the suitable integration of rules and practice eased task demands, reducing the burden on the learner and accordingly mitigating the role of participants’ individual differences. Finally, some evidence also showed that the comprehension practice that participants received for the two structures was not sufficient for the formation of solid productive knowledge, but was more effective for the OV than for the SER construction.

Relevance:

100.00%

Publisher:

Abstract:

In the course of integrating into the global market, especially since China’s WTO accession, China has achieved remarkable GDP growth and has become the second largest economy in the world. These economic achievements have substantially increased Chinese incomes and have generated more government revenue for social progress. However, China’s economic progress, in itself, is neither sufficient for achieving desirable development outcomes nor a guarantee of expanding people’s capabilities. In fact, a narrow emphasis on GDP growth proves to be unsustainable and may eventually harm the quality of life of Chinese citizens. Without the right set of policies, a deepening trade-openness policy in China may enlarge social disparities, and some people may be further deprived of basic public services and opportunities. To address these concerns, this dissertation, a set of three essays in Chapters 2-4, examines the impact of China's WTO accession on income distribution, compares China’s income and multidimensional poverty reduction, and investigates the factors, including the WTO accession, that predict multidimensional poverty. By exploiting the exogenous variation in exposure to tariff changes across provinces and over time, Chapter 2 (Essay 1) estimates the causal effects of trade shocks and finds that China’s WTO accession has led to an increase in average household income, but its impacts are not evenly distributed. Households in urban areas have benefited more significantly than those in rural areas, and households with members working in the private sector have benefited more significantly than those in the public sector. However, the WTO accession has contributed to reducing income inequality between higher- and lower-income groups. Chapter 3 (Essay 2) explains and applies the Alkire and Foster Method (AF Method), examines multidimensional poverty in China, and compares it with income poverty. It finds that China’s multidimensional poverty declined dramatically from 1989 to 2011. Reduction rates and patterns, however, vary by dimension: multidimensional poverty reduction exhibits unbalanced regional progress and varies by province and between rural and urban areas. In comparison with income poverty, multidimensional poverty reduction does not always coincide with economic growth. Moreover, if one applies a single measure, either income or multidimensional poverty, a certain proportion of those who are poor remain unrecognized. By applying a logistic regression model, Chapter 4 (Essay 3) examines factors that predict multidimensional poverty and finds that the major factors predicting multidimensional poverty in China include household size, the education level of the household head, health insurance coverage, geographic location, and the openness of the local economy. In order to alleviate multidimensional poverty, efforts should be targeted to (i) expand education opportunities for household heads with low levels of education, (ii) develop appropriate geographic policies to narrow regional gaps, and (iii) make macroeconomic policies work for the poor.
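As a concrete illustration of the counting approach behind the AF Method mentioned above, the following is a minimal sketch in Python of the adjusted headcount ratio M0 = H x A, assuming equal weights, hypothetical dimension names, and an illustrative poverty cutoff; the dissertation’s actual dimensions, weights, and cutoffs are not reproduced here.

# Hedged sketch of the Alkire-Foster (AF) counting approach: M0 = H * A.
# Dimension names, weights, and cutoffs are illustrative assumptions,
# not the ones used in the dissertation.

def af_adjusted_headcount(households, weights, poverty_cutoff_k):
    """households: list of dicts mapping dimension -> 1 if deprived else 0.
    weights: dict mapping dimension -> weight (weights sum to 1).
    poverty_cutoff_k: a household is multidimensionally poor if its
    weighted deprivation score is >= k."""
    n = len(households)
    censored_scores = []
    for h in households:
        score = sum(weights[d] * h[d] for d in weights)  # weighted deprivation score
        censored_scores.append(score if score >= poverty_cutoff_k else 0.0)
    poor = [s for s in censored_scores if s > 0]
    H = len(poor) / n                             # headcount ratio (incidence)
    A = sum(poor) / len(poor) if poor else 0.0    # average intensity among the poor
    return H * A                                  # adjusted headcount ratio M0

# Illustrative usage with three equally weighted, hypothetical dimensions.
weights = {"education": 1/3, "health": 1/3, "living_standard": 1/3}
sample = [
    {"education": 1, "health": 0, "living_standard": 1},
    {"education": 0, "health": 0, "living_standard": 1},
    {"education": 1, "health": 1, "living_standard": 1},
]
print(af_adjusted_headcount(sample, weights, poverty_cutoff_k=1/3))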

Relevance:

100.00%

Publisher:

Abstract:

Current trends in speech-language pathology focus on early intervention as the preferred tool for promoting the best possible outcomes in children with language disorders. Neuroimaging techniques are being studied as promising tools for flagging at-risk infants. In this study, the auditory brainstem response (ABR) to the syllables /ba/ and /ga/ was examined in 41 infants between 3 and 12 months of age as a possible tool for predicting language development in toddlerhood. The MacArthur-Bates Communicative Development Inventory (MCDI) was used to assess language development at 18 months of age. The current study compared the periodicity of the responses to the stop consonants and the phase differences between /ba/ and /ga/ in both at-risk and low-risk groups. The study also examined whether there are correlations between the ABR measures (periodicity and phase differentiation) and language development. The study found that these measures predict language development at 18 months.

Relevance:

100.00%

Publisher:

Abstract:

In today’s big data world, data is being produced in massive volumes, at great velocity, and from a variety of sources such as mobile devices, sensors, a plethora of small devices hooked to the Internet (the Internet of Things), social networks, communication networks, and many others. Interactive querying and large-scale analytics are increasingly used to derive value from this big data. A large portion of this data is stored and processed in the Cloud due to the advantages the Cloud provides, such as scalability, elasticity, availability, low cost of ownership, and overall economies of scale. There is thus a growing need for large-scale cloud-based data management systems that can support real-time ingest, storage, and processing of large volumes of heterogeneous data. However, in the pay-as-you-go Cloud environment, the cost of analytics can grow linearly with the time and resources required. Reducing the cost of data analytics in the Cloud thus remains a primary challenge. In my dissertation research, I have focused on building efficient and cost-effective cloud-based data management systems for different application domains that are predominant in cloud computing environments. In the first part of my dissertation, I address the problem of reducing the cost of transactional workloads on relational databases to support database-as-a-service in the Cloud. The primary challenges in supporting such workloads include choosing how to partition the data across a large number of machines, minimizing the number of distributed transactions, providing high data availability, and tolerating failures gracefully. I have designed, built, and evaluated SWORD, an end-to-end scalable online transaction processing system that utilizes workload-aware data placement and replication to minimize the number of distributed transactions, and that incorporates a suite of novel techniques to significantly reduce the overheads incurred both during the initial placement of data and during query execution at runtime. In the second part of my dissertation, I focus on sampling-based progressive analytics as a means to reduce the cost of data analytics in the relational domain. Sampling has traditionally been used by data scientists to get progressive answers to complex analytical tasks over large volumes of data. Typically, this involves manually extracting samples of increasing size (progressive samples) for exploratory querying, which provides data scientists with user control, repeatable semantics, and result provenance. However, such solutions result in tedious workflows that preclude the reuse of work across samples. On the other hand, existing approximate query processing systems report early results but do not offer the above benefits for complex ad-hoc queries. I propose NOW!, a new progressive data-parallel computation framework that supports progressive analytics over big data. In particular, NOW! enables progressive relational (SQL) query support in the Cloud using unique progress semantics that allow efficient and deterministic query processing over samples, providing meaningful early results and provenance to data scientists. NOW! provides early results using significantly fewer resources, thereby enabling a substantial reduction in the cost incurred during such analytics. Finally, I propose NSCALE, a system for efficient and cost-effective complex analytics on large-scale graph-structured data in the Cloud.
The system is based on the key observation that a wide range of complex analysis tasks over graph data require processing and reasoning about a large number of multi-hop neighborhoods or subgraphs in the graph; examples include ego network analysis, motif counting in biological networks, finding social circles in social networks, personalized recommendations, link prediction, etc. These tasks are not well served by existing vertex-centric graph processing frameworks, whose computation and execution models limit the user program to directly accessing the state of a single vertex, resulting in high execution overheads. Further, the lack of support for extracting the relevant portions of the graph that are of interest to an analysis task and loading them onto distributed memory leads to poor scalability. NSCALE allows users to write programs at the level of neighborhoods or subgraphs rather than at the level of vertices, and to declaratively specify the subgraphs of interest. It enables the efficient distributed execution of these neighborhood-centric complex analysis tasks over large-scale graphs, while minimizing resource consumption and communication cost, thereby substantially reducing the overall cost of graph data analytics in the Cloud. The results of our extensive experimental evaluation of these prototypes with several real-world data sets and applications validate the effectiveness of our techniques, which provide orders-of-magnitude reductions in the overheads of distributed data querying and analysis in the Cloud.
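As an illustration of the neighborhood-centric programming model described above, the following is a minimal sketch in Python: it extracts each vertex’s k-hop neighborhood and applies a user-supplied subgraph-level function to it. The function names and the toy graph are assumptions for illustration; NSCALE’s actual API, subgraph packing, and distributed execution engine are not shown.

# Hedged sketch of neighborhood-centric processing: run a user function over
# each vertex's k-hop ego network instead of over a single vertex's state.
from collections import deque

def k_hop_neighborhood(adj, root, k):
    """adj: dict vertex -> set of neighbor vertices; returns the set of
    vertices within k hops of root (the root's k-hop ego network)."""
    seen = {root}
    frontier = deque([(root, 0)])
    while frontier:
        v, depth = frontier.popleft()
        if depth == k:
            continue
        for u in adj.get(v, ()):
            if u not in seen:
                seen.add(u)
                frontier.append((u, depth + 1))
    return seen

def run_neighborhood_program(adj, k, program):
    """Apply a user-supplied subgraph-level function to every k-hop neighborhood."""
    return {v: program(v, k_hop_neighborhood(adj, v, k)) for v in adj}

# Illustrative usage: a per-vertex statistic computed over each 1-hop ego network.
adj = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
sizes = run_neighborhood_program(adj, 1, lambda v, nbhd: len(nbhd) - 1)
print(sizes)  # number of 1-hop neighbors of each vertex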

Relevance:

100.00%

Publisher:

Abstract:

Natural language processing has achieved great success in a wide range of applications, producing both commercial language services and open-source language tools. However, most methods take a static or batch approach, assuming that the model has all the information it needs and makes a one-time prediction. In this dissertation, we study dynamic problems where the input comes in a sequence instead of all at once, and the output must be produced while the input is arriving. In these problems, predictions are often made based only on partial information. We see this dynamic setting in many real-time, interactive applications. These problems usually involve a trade-off between the amount of input received (cost) and the quality of the output prediction (accuracy). Therefore, the evaluation considers both objectives (e.g., plotting a Pareto curve). Our goal is to develop a formal understanding of sequential prediction and decision-making problems in natural language processing and to propose efficient solutions. Toward this end, we present meta-algorithms that take an existing batch model and produce a dynamic model to handle sequential inputs and outputs. We build our framework upon the theory of Markov Decision Processes (MDPs), which allows learning to trade off competing objectives in a principled way. The main machine learning techniques we use are from imitation learning and reinforcement learning, and we advance current techniques to tackle problems arising in our settings. We evaluate our algorithm on a variety of applications, including dependency parsing, machine translation, and question answering. We show that our approach achieves a better cost-accuracy trade-off than the batch approach and heuristic-based decision-making approaches. We first propose a general framework for cost-sensitive prediction, where different parts of the input come at different costs. We formulate a decision-making process that selects pieces of the input sequentially, and the selection is adaptive to each instance. Our approach is evaluated on both standard classification tasks and a structured prediction task (dependency parsing). We show that it achieves prediction quality similar to methods that use all of the input, while incurring a much smaller cost. Next, we extend the framework to problems where the input is revealed incrementally in a fixed order. We study two applications: simultaneous machine translation and quiz bowl (incremental text classification). We discuss challenges in this setting and show that adding domain knowledge eases the decision-making problem. A central theme throughout the chapters is an MDP formulation of a challenging problem with sequential input/output and trade-off decisions, accompanied by a learning algorithm that solves the MDP.
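As an illustration of the sequential cost-accuracy trade-off described above, the following is a minimal sketch in which input arrives token by token and a policy decides at each step whether to wait for more input or commit to a prediction. The confidence-threshold policy and the toy scorer are stand-in assumptions, not the imitation- or reinforcement-learning policies developed in the dissertation.

# Hedged sketch of a sequential decision loop: at each step the policy chooses
# WAIT (consume more input, paying cost) or PREDICT (commit now).

WAIT, PREDICT = "wait", "predict"

def confidence_threshold_policy(state, threshold=0.8):
    """state: (tokens_seen, confidence, exhausted). Predict once confident
    enough or once the input is exhausted; otherwise wait for more input."""
    tokens_seen, confidence, exhausted = state
    return PREDICT if confidence >= threshold or exhausted else WAIT

def run_episode(token_stream, score_fn, policy, cost_per_token=1.0):
    """Consume tokens one by one; score_fn maps the prefix to (label, confidence).
    Returns the committed prediction and the total input cost paid."""
    prefix, cost = [], 0.0
    for i, token in enumerate(token_stream):
        prefix.append(token)
        cost += cost_per_token
        label, confidence = score_fn(prefix)
        exhausted = i == len(token_stream) - 1
        if policy((len(prefix), confidence, exhausted)) == PREDICT:
            return label, cost
    return None, cost  # empty stream

# Illustrative usage with a toy scorer whose confidence grows with the prefix.
toy_scorer = lambda prefix: ("answer", min(1.0, 0.2 * len(prefix)))
print(run_episode(["w1", "w2", "w3", "w4", "w5"], toy_scorer, confidence_threshold_policy))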

Relevance:

100.00%

Publisher:

Abstract:

Authentication plays an important role in how we interact with computers, mobile devices, the web, etc. The idea of authentication is to uniquely identify a user before granting access to system privileges. For example, in recent years more corporate information and applications have become accessible via the Internet and intranets. Many employees work from remote locations and need access to secure corporate files; during this time, it is possible for malicious or unauthorized users to gain access to the system. For this reason, it is logical to have some mechanism in place to detect whether the logged-in user is the same user in control of the session. Therefore, highly secure authentication methods must be used. We posit that each of us is unique in our use of computer systems, and it is this uniqueness that is leveraged to "continuously authenticate users" while they use web software. To monitor user behavior, n-gram models are used to capture user interactions with web-based software. This statistical language model captures sequences and sub-sequences of user actions, their orderings, and the temporal relationships that make them unique, providing a model of how each user typically behaves. Users are then continuously monitored during software operation, and large deviations from "normal behavior" can indicate malicious or unintended behavior. This approach is implemented in a system called Intruder Detector (ID) that models user actions as embodied in web logs generated in response to a user's actions. User identification through web logs is cost-effective and non-intrusive. We perform experiments on a large fielded system with web logs of approximately 4000 users. For these experiments, we use two classification techniques: binary and multi-class classification. We evaluate model-specific differences in user behavior based on coarse-grain (i.e., role) and fine-grain (i.e., individual) analysis. A specific set of metrics is used to provide valuable insight into how each model performs. Intruder Detector achieves accurate results when identifying legitimate users and user types, and is also able to detect outliers in role-based user behavior with optimal performance. In addition to web applications, this continuous monitoring technique can be used with other user-based systems such as mobile devices and the analysis of network traffic.
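As an illustration of the n-gram modeling idea described above, the following is a minimal sketch that builds a per-user profile of action n-grams from logged sessions and scores how strongly a new session deviates from that profile. The action names, the scoring function, and the absence of smoothing are simplifying assumptions; the actual Intruder Detector pipeline, role-based models, and evaluation metrics are not reproduced here.

# Hedged sketch: per-user n-gram profile over web-log actions, plus a simple
# deviation score for a new session.
import math
from collections import Counter

def ngrams(actions, n=3):
    return [tuple(actions[i:i + n]) for i in range(len(actions) - n + 1)]

def build_profile(sessions, n=3):
    """sessions: list of action sequences for one user -> n-gram frequency profile."""
    counts = Counter()
    for s in sessions:
        counts.update(ngrams(s, n))
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

def anomaly_score(session, profile, n=3, floor=1e-6):
    """Average negative log-probability of the session's n-grams under the profile;
    higher means the session looks less like the user's normal behavior."""
    grams = ngrams(session, n)
    if not grams:
        return 0.0
    return -sum(math.log(profile.get(g, floor)) for g in grams) / len(grams)

# Illustrative usage with hypothetical web-log actions.
history = [["login", "view_report", "export", "logout"],
           ["login", "view_report", "view_report", "logout"]]
profile = build_profile(history, n=2)
print(anomaly_score(["login", "delete_all", "logout"], profile, n=2))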

Relevance:

100.00%

Publisher:

Abstract:

Edge-labeled graphs have proliferated rapidly over the last decade due to the increased popularity of social networks and the Semantic Web. In social networks, relationships between people are represented by edges, and each edge is labeled with a semantic annotation; hence, a single huge graph can express many different relationships between entities. The Semantic Web represents each fragment of knowledge as a triple (subject, predicate, object), which is conceptually identical to an edge from subject to object labeled with the predicate. A set of triples constitutes an edge-labeled graph on which knowledge inference is performed. Subgraph matching has been extensively used as a query language for patterns in the context of edge-labeled graphs. For example, in social networks, users can specify a subgraph matching query to find all people that have certain neighborhood relationships. Heavily used fragments of the SPARQL query language for the Semantic Web and the graph queries of other graph DBMSs can also be viewed as subgraph matching over large graphs. Though subgraph matching has been extensively studied as a query paradigm in the Semantic Web and in social networks, a user can get a large number of answers in response to a query. These answers can be shown to the user in accordance with an importance ranking. In this thesis proposal, we present four different scoring models along with scalable algorithms that find the top-k answers via a suite of intelligent pruning techniques. The suggested models cover a practically important subset of the SPARQL query language augmented with some additional useful features. The first model, called Substitution Importance Query (SIQ), identifies the top-k answers whose scores are calculated from the matched vertices' properties in each answer in accordance with a user-specified notion of importance. The second model, called Vertex Importance Query (VIQ), identifies important vertices in accordance with a user-defined scoring method that builds on top of various subgraphs articulated by the user. Approximate Importance Query (AIQ), our third model, allows partial and inexact matchings and returns the top-k of them under user-specified approximation terms and scoring functions. In the fourth model, called Probabilistic Importance Query (PIQ), a query consists of several sub-blocks: one mandatory block that must be mapped and other blocks that can be opportunistically mapped. The probability is calculated from various aspects of the answers, such as the number of mapped blocks and the vertices' properties in each block, and the top-k most probable answers are returned. An important distinguishing feature of our work is that we allow the user a huge amount of freedom in specifying: (i) what pattern and approximation he considers important, (ii) how to score answers, irrespective of whether they are vertices or substitutions, and (iii) how to combine and aggregate scores generated by multiple patterns and/or multiple substitutions. Because so much power is given to the user, indexing is more challenging than in situations where additional restrictions are imposed on the queries the user can ask. The proposed algorithms for the first model can also be used for answering SPARQL queries with ORDER BY and LIMIT, and the method for the second model also works for SPARQL queries with GROUP BY, ORDER BY, and LIMIT. We test our algorithms on multiple real-world graph databases, showing that our algorithms are far more efficient than popular triple stores.
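As an illustration of scored subgraph matching over an edge-labeled graph, the following is a minimal sketch that enumerates matches of a small triple pattern by backtracking, scores each answer with a user-supplied function, and keeps the top-k on a heap. The toy graph, pattern, and scoring function are assumptions for illustration; the SIQ/VIQ/AIQ/PIQ models and the pruning and indexing techniques from the proposal are not reproduced here.

# Hedged sketch: naive subgraph (triple-pattern) matching with top-k scoring.
import heapq

def match(graph, pattern, assignment=None):
    """graph / pattern: lists of (subject, predicate, object) triples; pattern
    subjects/objects starting with '?' are variables. Yields variable bindings."""
    assignment = assignment or {}
    if not pattern:
        yield dict(assignment)
        return
    (qs, qp, qo), rest = pattern[0], pattern[1:]
    for (s, p, o) in graph:
        if p != qp:
            continue
        binding = dict(assignment)
        ok = True
        for qv, gv in ((qs, s), (qo, o)):
            if qv.startswith("?"):
                if binding.setdefault(qv, gv) != gv:
                    ok = False
            elif qv != gv:
                ok = False
        if ok:
            yield from match(graph, rest, binding)

def top_k_answers(graph, pattern, score, k):
    """Rank all matches with a user-supplied scoring function and keep the top-k."""
    return heapq.nlargest(k, match(graph, pattern), key=score)

# Illustrative usage: people who 'knows' someone who 'worksAt' some place,
# ranked by a hypothetical importance function over the bindings.
graph = [("alice", "knows", "bob"), ("bob", "worksAt", "umd"),
         ("alice", "knows", "carol"), ("carol", "worksAt", "acme")]
pattern = [("?x", "knows", "?y"), ("?y", "worksAt", "?z")]
print(top_k_answers(graph, pattern, score=lambda b: len(b["?z"]), k=1))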