904 results for Incomplete Block-designs
Abstract:
The goal of this article is to provide a new design framework and its corresponding estimation for phase I trials. Existing phase I designs assign each subject to one dose level based on responses from previous subjects. Yet it is possible that subjects with neither toxicity nor efficacy responses can be treated at higher dose levels, and their subsequent responses to higher doses will provide more information. In addition, for some trials, it might be possible to obtain multiple responses (repeated measures) from a subject at different dose levels. In this article, a nonparametric estimation method is developed for such studies. We also explore how designs with multiple doses per subject can be implemented to improve design efficiency. The gain in efficiency from "single dose per subject" to "multiple doses per subject" is evaluated for several scenarios. Our numerical study shows that using "multiple doses per subject" together with the proposed estimation method increases efficiency substantially.
Abstract:
A decision-theoretic framework is proposed for designing sequential dose-finding trials with multiple outcomes. The optimal strategy is solvable theoretically via backward induction. However, for dose-finding studies involving k doses, the computational complexity is the same as for the bandit problem with k dependent arms, which is computationally prohibitive. We therefore provide two computationally compromised strategies, which are of practical interest because the computational complexity is greatly reduced: one is closely related to the continual reassessment method (CRM), and the other improves on CRM and better approximates the optimal strategy. In particular, we present the framework for phase I/II trials with multiple outcomes. Applications to a pediatric HIV trial and a cancer chemotherapy trial are given to illustrate the proposed approach. Simulation results for the two trials show that the computationally compromised strategies can perform well and appear to be ethical for allocating patients. The proposed framework can provide a better approximation to the optimal strategy if more extensive computing is available.
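The backward-induction recursion that solves such finite-horizon decision problems can be sketched in a few lines. The toy state/action/reward model below is purely hypothetical and only illustrates the recursion, not the multi-outcome model of the abstract.

```python
# A minimal backward-induction sketch for a finite-horizon decision problem.
# V[t][s] = max_a [ reward(s, a) + sum_{s'} P(s' | s, a) * V[t+1][s'] ]
# All names and the toy payoff model below are illustrative assumptions.

def backward_induction(horizon, states, actions, transition, reward):
    V = {horizon: {s: 0.0 for s in states}}   # terminal value: zero
    policy = {}
    for t in range(horizon - 1, -1, -1):      # sweep backward in time
        V[t], policy[t] = {}, {}
        for s in states:
            values = {a: reward(s, a) + sum(p * V[t + 1][s2]
                                            for s2, p in transition(s, a))
                      for a in actions}
            best = max(values, key=values.get)
            V[t][s], policy[t][s] = values[best], best
    return V, policy

# Toy example: state = current dose level (0 or 1); escalating from the
# low state is assumed most informative, hence the largest payoff.
states, actions = [0, 1], ["stay", "escalate"]

def reward(s, a):
    if a == "escalate" and s == 0:
        return 1.0
    return 0.6 if s == 1 else 0.4

def transition(s, a):
    # deterministic: escalation moves up one level (capped at 1)
    return [(min(s + 1, 1) if a == "escalate" else s, 1.0)]

V, policy = backward_induction(2, states, actions, transition, reward)
```

With this toy model the optimal first action from the low state is to escalate, collecting 1.0 now plus 0.6 at the higher level.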
Abstract:
Several articles in this journal have studied optimal designs for testing a series of treatments to identify promising ones for further study. These designs formulate testing as an ongoing process until a promising treatment is identified. This formulation is considered more realistic but substantially increases the computational complexity. In this article, we show that these new designs, which control the error rates for a series of treatments, can be reformulated as conventional designs that control the error rates for each individual treatment. This reformulation leads to a more meaningful interpretation of the error rates and hence easier specification of the error rates in practice. The reformulation also allows us to use conventional designs from published tables or standard computer programs to design trials for a series of treatments. We illustrate these points using a study in soft tissue sarcoma.
Abstract:
The purpose of a phase I trial in cancer is to determine the level (dose) of the treatment under study that has an acceptable level of adverse effects. Although substantial progress has recently been made in this area using parametric approaches, the method that is widely used is based on treating small cohorts of patients at escalating doses until the frequency of toxicities seen at a dose exceeds a predefined tolerable toxicity rate. This method is popular because of its simplicity and freedom from parametric assumptions. In this paper, we consider cases in which it is undesirable to assume a parametric dose-toxicity relationship. We propose a simple model-free approach by modifying the method in common use. The approach assumes toxicity is nondecreasing with dose and fits an isotonic regression to the accumulated data. At any point in a trial, the dose given is the one with estimated toxicity deemed closest to the maximum tolerable toxicity. Simulations indicate that this approach performs substantially better than the commonly used method and compares favorably with other phase I designs.
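The dose-selection rule described above (fit a nondecreasing curve to the accumulated toxicity data, then pick the dose whose estimate is closest to the target rate) can be sketched roughly as follows. The pool-adjacent-violators routine and the toy numbers are illustrative assumptions, not the authors' exact algorithm.

```python
# Sketch of isotonic-regression dose finding (illustrative, not the
# authors' implementation).

def pava(values, weights):
    """Pool-adjacent-violators: weighted nondecreasing fit."""
    # each block: [fitted value, total weight, number of doses pooled]
    blocks = [[v, w, 1] for v, w in zip(values, weights)]
    i = 0
    while i < len(blocks) - 1:
        if blocks[i][0] > blocks[i + 1][0]:      # monotonicity violated
            v1, w1, n1 = blocks[i]
            v2, w2, n2 = blocks[i + 1]
            blocks[i] = [(v1 * w1 + v2 * w2) / (w1 + w2), w1 + w2, n1 + n2]
            del blocks[i + 1]
            i = max(i - 1, 0)                    # re-check to the left
        else:
            i += 1
    fit = []
    for v, _, n in blocks:
        fit.extend([v] * n)                      # expand back to one value per dose
    return fit

def next_dose(toxicities, patients, target):
    """Dose whose isotonic toxicity estimate is closest to the target rate."""
    rates = [t / n if n else 0.0 for t, n in zip(toxicities, patients)]
    fit = pava(rates, [max(n, 1) for n in patients])
    return min(range(len(fit)), key=lambda d: abs(fit[d] - target))
```

For example, with 0/3, 1/3, 1/3, and 3/3 toxicities at four doses and a target rate of 0.3, the rule selects the second dose (estimated rate 1/3).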
Abstract:
Space-time codes from complex orthogonal designs (CODs) with no zero entries offer low Peak-to-Average Power Ratio (PAPR) and avoid the problem of switching off antennas. However, square CODs for 2^a antennas in a + 1 complex variables with no zero entries were known only for a <= 3 and for a + 1 = 2^k, k >= 4. In this paper, a method is presented for obtaining no-zero-entry (NZE) square designs, called Complex Partial-Orthogonal Designs (CPODs), for 2^(a+1) antennas whenever a certain type of NZE code exists for 2^a antennas. Then, starting from an NZE CPOD so constructed for n = 2^(a+1) antennas, a construction procedure is given to obtain NZE CPODs for 2n antennas, successively. Compared to CODs, CPODs have slightly higher ML decoding complexity for rectangular QAM constellations and the same ML decoding complexity for other complex constellations. Using the recently constructed NZE CODs for 8 antennas, our method leads to NZE CPODs for 16 antennas. The class of CPODs does not offer full diversity for all complex constellations. For the NZE CPODs presented in the paper, conditions on the signal sets that guarantee full diversity are identified. Simulation results show that the bit error performance of our codes is the same as that of CODs under an average power constraint and superior to that of CODs under a peak power constraint.
Abstract:
This thesis is part of the HY-talk research project funded by the University of Helsinki, whose aim is to strengthen the teaching and assessment of speech communication, particularly oral proficiency in foreign languages, in general education and at the higher-education level. The aim of this thesis is to examine what kinds of repairs speakers of English as a foreign language make to their speech and to study the relationship between self-repair and fluency. Repair organization and self-repair have previously been studied in both conversation analysis and psycholinguistics, and although this thesis is closer to earlier conversation-analytic than psycholinguistic research, it draws on both traditions. Self-repair has generally been regarded as a sign of a lack of fluency, especially in non-native speakers. The purpose of this thesis is to determine how closely self-repair is actually related to fluency or the lack of it. The material of the thesis consists of speech samples collected for the HY-talk project and the proficiency assessments made on their basis. The speech samples were collected in 2007 in connection with spoken-language testing sessions arranged for the project at three schools in southern Finland. Because the aim of the project is to study and improve the assessment of oral language proficiency, the language professionals involved in the project assessed the speakers' proficiency levels using rating scales compiled for the project (on the basis of the proficiency descriptors of the Common European Framework of Reference), and these assessments were stored as part of the project material. The thesis analyzes self-repairs using a repair-type classification compiled on the basis of earlier psycholinguistic research, as well as a classification, created for this thesis, that compares the correctness of the repairs. In addition, it compares the self-repairs of two speakers assessed at higher proficiency levels and two assessed at lower levels.
The results show that many different repair types occur in non-native speech and that the most common repairs are repetitions of the original utterance. Also common are repairs in which the speaker corrects an error, or breaks off and starts an entirely new utterance. The results further indicate that most repairs are probably not due to a lack of fluency. The most common repair types can largely be attributed to an individual's speaking style, to the speaker searching for a particular word or expression, or to the speaker correcting a grammatical, lexical, or phonological error noticed in their own speech. The comparison between speakers assessed at higher and lower proficiency levels shows most clearly that most self-repairs are not connected to the speaker's fluency. The comparison reveals that the mere number of self-repairs does not indicate how fluently a speaker uses the language, since one of the speakers assessed at the higher level repairs their speech almost as often as the speakers assessed at the lower level. Moreover, the results of the classification comparing the correctness of repairs suggest that speakers assessed at both the higher and the lower levels mostly do not repair their speech because they would be unable to express their message correctly and comprehensibly.
Abstract:
A retail precinct along James Street in Brisbane is made more permeable by Richards and Spence.
Abstract:
It is well known that space-time block codes (STBCs) obtained from orthogonal designs (ODs) are single-symbol decodable (SSD) and those from quasi-orthogonal designs (QODs) are double-symbol decodable (DSD). However, there are SSD codes that are not obtainable from ODs and DSD codes that are not obtainable from QODs. In this paper, a method of constructing g-symbol decodable (g-SD) STBCs using representations of Clifford algebras is presented, which when specialized to g = 1, 2 gives SSD and DSD codes, respectively. For 2^a transmit antennas, the rate (in complex symbols per channel use) of the g-SD codes presented in this paper is (a+1-g)/2^(a-g). The maximum rate of the DSD STBCs from QODs reported in the literature is a/2^(a-1), which is smaller than the rate (a-1)/2^(a-2) of the DSD codes of this paper for 2^a transmit antennas. In particular, the reported DSD codes for 8 and 16 transmit antennas offer rates 1 and 3/4, respectively, whereas the known STBCs from QODs offer only 3/4 and 1/2, respectively. The construction of this paper is applicable for any number of transmit antennas. The diversity sum and diversity product of the new DSD codes are studied. It is shown that the diversity sum is larger than that of all known QODs, and hence the new codes perform better than comparable QODs at low signal-to-noise ratios (SNRs) for identical spectral efficiency. Simulation results for DSD codes at various spectral efficiencies are provided.
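As a concrete illustration of why orthogonal designs give single-symbol decodability, consider the classic 2x2 Alamouti code (the simplest square complex orthogonal design, used here as background rather than a construction from this paper): its codeword matrix X satisfies X^H X = (|s1|^2 + |s2|^2) I, so the two symbols never interfere and can be detected separately.

```python
# Verify the orthogonality property of the 2x2 Alamouti design
# (background illustration; not a code from the paper above).

def alamouti(s1, s2):
    # rows = time slots, columns = transmit antennas
    return [[s1, s2],
            [-s2.conjugate(), s1.conjugate()]]

def gram(X):
    """Entries of X^H X for a matrix given as a list of rows."""
    cols = len(X[0])
    return [[sum(row[i].conjugate() * row[j] for row in X)
             for j in range(cols)] for i in range(cols)]

s1, s2 = 1 + 1j, 2 - 1j
G = gram(alamouti(s1, s2))
energy = abs(s1) ** 2 + abs(s2) ** 2   # = 7 for these symbols
```

The Gram matrix G comes out as energy times the identity, which is exactly the property the SSD codes generalize.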
Abstract:
Abstract is not available.
Abstract:
This project tested modified gillnets designed by commercial net fishers in the Queensland East Coast Inshore Finfish Fishery (ECIFF) to identify gears that would mitigate and/or improve interactions between fishing nets and Species of Conservation Interest (SOCI). The study also documents previously unrecognised initiatives by proactive commercial net fishers that reflect a conservation-minded approach to their fishing practices, contrary to public perception. Between 2011 and 2014, scientists from James Cook University and the Queensland Department of Agriculture and Fisheries teamed with commercial fishers representing the Queensland Seafood Industry Association and the Moreton Bay Seafood Industry Association to conduct field trials of various modified net designs under normal fishery conditions. Trials were conducted in Moreton Bay (southern part of the fishery) and Bowling Green Bay (northern part) and tested different net designs developed by fishers to improve the nature of interactions between net fishing gear and SOCI.
Abstract:
This block was used in the printing of "Who's Who in American Education"
Abstract:
Reaction of 2-pyridinecarboxaldehyde [(Py)CHO] with Cu(NO3)2·2.5H2O in the presence of 4-aminopyridine and NaN3 in MeOH leads to an incomplete double-cubane [Cu4{PyCH(O)(OMe)}4(N3)4] (1) in 87% isolated yield, representing a rare type of metal cluster containing the bridging hemiacetalate ligand [PyCH(O)(OMe)]−. The complex was characterized by single-crystal structure analysis and variable-temperature magnetic measurements.