# Kink solutions in a simplified model of Polyacetylene

Speaker: Éric Séré
Affiliation: Université Paris Dauphine, France
Type: Seminar on spectral problems in mathematical physics
Venue: IHP, Room 314
Date: 15/04/2013, 14:00

We consider a simplified model of Polyacetylene introduced by Su, Schrieffer and Heeger in 1979, which belongs to the class of Peierls models at half-filling. In 1987 Kennedy and Lieb studied finite chains and proved that if the number $N$ of nuclei is even, the energy has exactly two minimizers, which are periodic of period $2$ and are translates of one another by one unit of the lattice. We study rigorously the case of an odd number of atoms. We prove that if $N$ is odd and tends to infinity, the global minimizer of the energy converges to a "kink" soliton in the infinite chain. This soliton is asymptotic to one of the periodic minimizers found by Kennedy and Lieb in one direction of the chain, and to the other minimizer in the other direction. This is joint work with Mauricio García Arroyo.
## Introductory Algebra for College Students (7th Edition)

x=$\triangle$-$\square$

Work: subtract $\square$ from both sides of the equation to isolate the variable.

x+$\square$=$\triangle$

x=$\triangle$-$\square$
# Question #a9c8c

Feb 3, 2018

$354$ males and $431$ females.

#### Explanation:

Let's assume the number of male students is $x$. Then, since there are $77$ more females than males, the number of females is $x + 77$.

The total number of students is the number of male students $+$ the number of female students. Therefore, the total number of students is

$x + x + 77 = 785$

So $2 x + 77 = 785$. Subtracting $77$ from both sides gives us $2 x = 708$, and dividing both sides by $2$ gives us the value of $x$, which is $354$.

Therefore, there are $354$ males and $431$ $\left(354 + 77\right)$ females.
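As a quick sanity check of the arithmetic (my own addition, not part of the original answer):

```python
males = (785 - 77) // 2                  # from 2x + 77 = 785
females = males + 77
print(males, females, males + females)   # 354 431 785
```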
## Guessing at the discretization of the Sallen-Key Filter, with Q-Multiplier.

One concept in modern digital signal processing is that a simple algorithm can often be written to perform what old-fashioned analog filters were able to do. But one place where I find progress lacking – at least, in what I can find posted publicly – is in how to discretize slightly more complicated analog filters. Specifically, if one wants to design 2nd-order low-pass or high-pass filters, one approach which is often recommended is just to chain the primitive low-pass or high-pass filters. The problem with that is the highly damped frequency-response curve that follows, which is evident in the attenuated voltage gain at the cutoff frequency itself.

In analog circuitry, a solution to this problem exists in the “Sallen-Key Filter“, which naturally has a gain at the corner frequency of (-6 dB) – the same as would result if two primitive filters were simply chained. But beyond that, the analog filter can be given (positive) feedback gain, in order to increase its Q-factor. I set out to write some pseudo-code, for how such a filter could also be converted into algorithms…

Second-Order…

LP:

```
for i from 1 to n
    Y[i]        := ( k * Y[i-1] ) + ((1 - k) * X[i]) + Feedback[i-1]
    Z[i]        := ( k * Z[i-1] ) + ((1 - k) * Y[i])
    Feedback[i] := (Z[i] - Z[i-1]) * k * α
    (output Z[i])
```

BP:

```
for i from 1 to n
    Y[i]        := ( k * Y[i-1] ) + ((1 - k) * X[i]) + Feedback[i-1]
    Z[i]        := ( k * (Z[i-1] + Y[i] - Y[i-1]) )
    Feedback[i] := Z[i] * (1 - k) * α
    (output Z[i])
```

HP:

```
for i from 1 to n
    Y[i]        := ( k * (Y[i-1] + X[i] - X[i-1]) ) + Feedback[i-1]
    Z[i]        := ( k * (Z[i-1] + Y[i] - Y[i-1]) )
    Feedback[i] := Z[i] * (1 - k) * α
    (output Z[i])
```

Where k is the constant that defines the corner frequency via ω, and α is the constant that peaks the Q-factor:

```
ω = 2 * sin(π * F0 / h)
k = 1 / (1 + ω),   F0 < (h / 4)
```

h is the sample rate. F0 is the corner frequency.

To achieve a Q-factor (Q):

```
α = 2 + (sin^2(π * F0 / h) * 2) - (1 / Q)

'Damping Factor' = ζ = 1 / (2 * Q)
Critical Damping: ζ = 1 / sqrt(2)  (…)  Q = 1 / sqrt(2)
```

(Algorithm revised 2/08/2021, 23h40.)
(Computation of parameters revised 2/09/2021, 2h15.)
(Updated 2/10/2021, 18h25…)
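For what it's worth, here is a direct Python translation of the LP pseudo-code above, using the post's formulas for k and α. Treat it as a sketch only: the whole post is a guess at the discretization, and this code has not been verified against an analog prototype.

```python
import math

def sallen_key_lp(x, f0, h, q):
    """Sketch of the LP pseudo-code above: x is the list of input
    samples, f0 the corner frequency (must satisfy f0 < h/4),
    h the sample rate, q the desired Q-factor."""
    omega = 2.0 * math.sin(math.pi * f0 / h)
    k = 1.0 / (1.0 + omega)
    alpha = 2.0 + 2.0 * math.sin(math.pi * f0 / h) ** 2 - 1.0 / q

    y = z = feedback = 0.0   # filter state: Y[i-1], Z[i-1], Feedback[i-1]
    out = []
    for xi in x:
        z_prev = z
        y = k * y + (1.0 - k) * xi + feedback
        z = k * z + (1.0 - k) * y
        feedback = (z - z_prev) * k * alpha
        out.append(z)
    return out
```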
# C Program To Check An Armstrong Number

This C program checks whether an input number is an Armstrong number or not and prints the result. An Armstrong number is a special number: the sum of the nth power of each of its digits is equal to the original number, where n is the number of digits in the input number. This concept is explained in more detail in the problem section. The program below handles the common three-digit case, so each digit is raised to the power of 3.

We compiled the program using the Dev C++ version 4 compiler installed on Windows 7 64-bit. You can use a different compiler if you like, after modifying the source code according to your compiler specifications. This is necessary to compile an error-free program.

You must be familiar with the following C programming concepts to understand this example.

## Problem Definition

What is an Armstrong Number?

Suppose there is a number N with n digits. If we

1. take each digit of the number N separately,
2. compute the nth power of each digit, and
3. take the sum S of all the powers obtained in step 2,

then the given number N is an Armstrong number if the sum S is equal to the original number N.

For Example

\begin{aligned} &371 = 3^3 + 7^3 + 1^3\\ \\ &= 27 + 343 + 1\\ \\ &=371 \end{aligned}

The number N in the above example is 371 and the number of digits is n = 3. So we raise each of the digits of 371 to the power of 3. If the sum of all the powers is equal to the original number, the number is an Armstrong number.

### How do we process the input number N?

Step 1 – Get the number N.
Step 2 – Split off the last digit of N.
Step 3 – Raise the digit to the power of 3 and add it to a running sum.
Step 4 – Drop the last digit of N and repeat steps 2–3 while N is not equal to 0.
Step 5 – Once N == 0, check whether Sum == N (the original value).
Step 6 – Print the output.

## Program Code – Armstrong Number

```c
#include <stdio.h>
#include <conio.h>   /* Dev C++ / Windows specific; needed for getch() */

int main()
{
    int n, i, n1, rem, num = 0;

    /* Read the input number */
    printf("Enter a positive integer:");
    scanf("%d", &n);

    /* Get the sum of the cubes of each digit */
    n1 = n;
    while (n1 != 0)
    {
        rem = n1 % 10;
        num += rem * rem * rem;
        n1 /= 10;
    }

    /* Check if the number is an Armstrong number or not */
    if (num == n)
    {
        /* Print the result */
        for (i = 0; i < 30; i++)
        {
            printf("_");
        }
        printf("\n\n");
        printf(" %d is an Armstrong number\n", n);
        for (i = 0; i < 30; i++)
        {
            printf("_");
        }
        printf("\n\n");
    }
    else
    {
        for (i = 0; i < 30; i++)
        {
            printf("_");
        }
        printf("\n\n");
        printf("%d is not an Armstrong number\n", n);
        for (i = 0; i < 30; i++)
        {
            printf("_");
        }
        printf("\n\n");
    }
    getch();
    return 0;
}
```

## Output

Here is the output of the program for the input integer 371.

```
Enter a positive integer:371
____________________________

 371 is an Armstrong number
____________________________
```
# Pythagoras

## English

### Etymology

From Ancient Greek Πυθαγόρας (Puthagóras).

### Proper noun

Pythagoras

1. An Ancient Greek mathematician and philosopher.
2. (mathematics, colloquial) Pythagoras' theorem.
   • Serge Lang and Gene Murrow (1988) Geometry, ISBN 0387966544, page 203: “By Pythagoras, we find the length of the third side, |AB|² = (2a)² – a² = 4a² – a² = 3a²”
3. A male given name of mostly historical use, and a transliteration from modern Greek.
# Power of Selective Memory

Slide 1 – The Power of Selective Memory. Shai Shalev-Shwartz; joint work with Ofer Dekel and Yoram Singer. Hebrew University, Jerusalem.

Slide 2 – Outline: online learning, loss bounds etc.; hypotheses space – PST; margin of prediction and hinge-loss; an online learning algorithm; trading margin for depth of the PST; automatic calibration; a self-bounded online algorithm for learning PSTs.

Slide 3 – Online Learning: for each round, get an instance; predict a target based on it; get the true target, update and suffer loss; update the prediction mechanism.

Slide 4 – Analysis of Online Algorithm: relative loss bounds (external regret) for any fixed hypothesis h.

Slide 5 – Prediction Suffix Tree (PST): each hypothesis is parameterized by a triplet: context function.

Slide 6 – PST Example. [PST diagram with node weights]

Slide 7 – Margin of Prediction: margin of prediction; hinge loss.

Slide 8 – Complexity of hypothesis: define the complexity of a hypothesis; we can also extend g s.t. the same bound is obtained.

Slide 9 – Algorithm I: Learning Unbounded-Depth PST. Init; for t = 1, 2, …: get an instance and predict; get the true target and suffer loss; update the weight vector; update the tree.

Slides 10–19 – Example: the algorithm is run step by step on the label sequence y = + − + − +, showing how the PST grows and how the node weights (±0.23, 0.16, −0.42, …) are updated after each round. [sequence of PST diagrams]

Slide 20 – Analysis: for a sequence of examples, under the stated assumption, and for an arbitrary hypothesis with a given cumulative loss on the sequence, a mistake bound holds.

Slide 21 – Proof Sketch: define an auxiliary quantity; derive an upper bound and a lower bound; combining the two gives the bound in the theorem.

Slide 22 – Proof Sketch (Cont.): where does the lower bound come from? For simplicity, consider the stated assumptions. Define a Hilbert space: the context function g_{t+1} is the projection of g_t onto a half-space determined by a function f.

Slide 23 – Example revisited: the following hypothesis has cumulative loss of 2 and complexity of 2. Therefore, the number of mistakes is bounded above by 12. (y = +−+−+−+−)

Slide 24 – Example revisited: the following hypothesis has cumulative loss of 1 and complexity of 4. Therefore, the number of mistakes is bounded above by 18. But this tree is very shallow. [shallow PST with weights ±1.41] (y = +−+−+−+−) Problem: the tree we learned is much deeper!

Slide 25 – Geometric Intuition.

Slide 26 – Geometric Intuition (Cont.): let's force g_{t+1} to be sparse by “canceling” the new coordinate.

Slide 27 – Geometric Intuition (Cont.): now we can show the modified bound.

Slide 28 – Trading margin for sparsity: if one quantity is much smaller than the other, we can still get a loss bound! Problem: what happens if it is very small? Solution: tolerate small margin errors! Conclusion: if we tolerate small margin errors, we can get a sparser tree.

Slide 29 – Automatic Calibration: problem: the value of the margin parameter is unknown. Solution: use the data itself to estimate it! If we keep the stated condition, then we get a mistake bound.

Slide 30 – Algorithm II: Learning Self Bounded-Depth PST. Init; for t = 1, 2, …: get an instance and predict; get the true target and suffer loss; if there is no margin error, do nothing! Otherwise: update w and the tree as in Algorithm I, up to depth d_t.

Slide 31 – Analysis – Loss Bound: for a sequence of examples, under the stated assumption, and for an arbitrary hypothesis with a given cumulative loss on the sequence, a loss bound holds.

Slide 32 – Analysis – Bounded depth: under the previous conditions, the depth of all the trees learned by the algorithm is bounded above.

Slide 33 – Example revisited: performance of Algorithm II on y = + − + − + − + − …: only 3 mistakes; the last PST is of depth 5; the margin is 0.61 (after normalization); the margin of the max-margin tree (of infinite depth) is 0.7071. [PST diagram]

Slide 34 – Conclusions: discriminative online learning of PSTs; loss bound; trading margin and sparsity; automatic calibration. Future work: experiments; feature selection and extraction; support-vector selection.
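For readers who want the gist of the protocol on slides 3–4 in code, here is a schematic online-learning loop (my own illustration, not code from the talk; `predict` and `update` are hypothetical stand-ins for the PST-based prediction and update rules):

```python
def online_learn(stream, predict, update):
    """Generic online-learning loop with hinge loss (cf. slide 7)."""
    cumulative_loss = 0.0
    for x, y in stream:                   # instance, then true target in {-1, +1}
        y_hat = predict(x)                # predict before seeing y
        loss = max(0.0, 1.0 - y * y_hat)  # hinge loss of this prediction
        cumulative_loss += loss
        if loss > 0.0:                    # update only on margin errors
            update(x, y)
    return cumulative_loss
```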
# 4.3 Newton’s second law of motion: concept of a system (Page 5/14)

## What rocket thrust accelerates this sled?

Prior to manned space flights, rocket sleds were used to test aircraft, missile equipment, and physiological effects on human subjects at high speeds. They consisted of a platform that was mounted on one or two rails and propelled by several rockets. Calculate the magnitude of the force exerted by each rocket, called its thrust $T$, for the four-rocket propulsion system shown in the figure. The sled’s initial acceleration is $49\ \text{m/s}^2$, the mass of the system is 2100 kg, and the force of friction opposing the motion is known to be 650 N.

Strategy

Although there are forces acting vertically and horizontally, we assume the vertical forces cancel since there is no vertical acceleration. This leaves us with only horizontal forces and a simpler one-dimensional problem. Directions are indicated with plus or minus signs, with right taken as the positive direction. See the free-body diagram in the figure.

Solution

Since acceleration, mass, and the force of friction are given, we start with Newton’s second law and look for ways to find the thrust of the engines. Since we have defined the direction of the force and acceleration as acting “to the right,” we need to consider only the magnitudes of these quantities in the calculations. Hence we begin with

$F_{\text{net}}=ma,$

where $F_{\text{net}}$ is the net force along the horizontal direction. We can see from the figure that the engine thrusts add, while friction opposes the thrust. In equation form, the net external force is

$F_{\text{net}}=4T-f.$

Substituting this into Newton’s second law gives

$F_{\text{net}}=ma=4T-f.$

Using a little algebra, we solve for the total thrust 4T:

$4T=ma+f.$

Substituting known values yields

$4T=ma+f=(2100\ \text{kg})(49\ \text{m/s}^2)+650\ \text{N}.$

So the total thrust is

$4T=1.0\times10^{5}\ \text{N},$

and the individual thrusts are

$T=\frac{1.0\times10^{5}\ \text{N}}{4}=2.6\times10^{4}\ \text{N}.$

Discussion

The numbers are quite large, so the result might surprise you. Experiments such as this were performed in the early 1960s to test the limits of human endurance, and the setup was designed to protect human subjects in jet fighter emergency ejections. Speeds of 1000 km/h were obtained, with accelerations of 45 $g$'s. (Recall that $g$, the acceleration due to gravity, is $9.80\ \text{m/s}^2$. When we say that an acceleration is 45 $g$'s, it is $45\times9.80\ \text{m/s}^2$, which is approximately $440\ \text{m/s}^2$.) While living subjects are not used any more, land speeds of 10,000 km/h have been obtained with rocket sleds.

In this example, as in the preceding one, the system of interest is obvious. We will see in later examples that choosing the system of interest is crucial—and the choice is not always obvious.
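The arithmetic in the Solution is easy to check with a few lines of code (my own check, not part of the original example):

```python
m, a, f = 2100.0, 49.0, 650.0    # mass (kg), acceleration (m/s^2), friction (N)
total_thrust = m * a + f         # 4T = ma + f = 103550 N
thrust_per_rocket = total_thrust / 4
print(f"4T = {total_thrust:.2g} N, T = {thrust_per_rocket:.2g} N")
# 4T = 1e+05 N, T = 2.6e+04 N   (matching the rounded values in the text)
```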
## Dynamics of postcritically bounded polynomial semigroups I: connected components of the Julia sets

Sumi, Hiroki

##### Description

We investigate the dynamics of semigroups generated by a family of polynomial maps on the Riemann sphere such that the postcritical set in the complex plane is bounded. The Julia set of such a semigroup may not be connected in general. We show that for such a polynomial semigroup, if $A$ and $B$ are two connected components of the Julia set, then one of $A$ and $B$ surrounds the other. From this, it is shown that each connected component of the Fatou set is either simply or doubly connected. Moreover, we show that the Julia set of such a semigroup is uniformly perfect. An upper estimate of the cardinality of the set of all connected components of the Julia set of such a semigroup is given. By using this, we give a criterion for the Julia set to be connected. Moreover, we show that for any $n\in \Bbb{N} \cup \{\aleph_{0}\}$, there exists a finitely generated polynomial semigroup with bounded planar postcritical set such that the cardinality of the set of all connected components of the Julia set is equal to $n$. Many new phenomena of polynomial semigroups that do not occur in the usual dynamics of polynomials are found and systematically investigated.

Comment: Published in Discrete and Continuous Dynamical Systems - Series A, Vol. 29, No. 3, 2011, 1205--1244. 39 pages, 2 figures. Some typos are fixed. See also http://www.math.sci.osaka-u.ac.jp/~sumi/

##### Keywords

Mathematics - Dynamical Systems, Mathematics - Complex Variables, Mathematics - Geometric Topology, Mathematics - Probability, 37F10, 30D05
# Raffle contract

This section presents the Archetype version of a raffle contract, inspired by the versions presented for other languages (Ligo, SmartPy). The difference is that it uses the timelock feature to secure the process of picking the winning ticket.

A raffle is a gambling game where players buy tickets; a winning ticket is randomly picked and its owner gets the jackpot prize.

The Michelson language does not provide an instruction to generate a random number. We can't use the current date (the value of now) as a source of randomness either. Indeed, bakers have some control over this value for the blocks they produce, and could therefore influence the result.

##### info

The source code of the raffle contract is available in this repository.

## Picking the winning ticket

The id of the winning ticket is obtained as the remainder of the euclidean division of an arbitrarily large number, called here the raffle key, by the number of ticket buyers, called here players. For example, if the raffle key is 2022 and the number of raffle players is 87, then the winning ticket id is 21 (typically the 21st ticket).

The constraint is that this raffle key must not be known by anyone, neither the players nor even the admin. Indeed, if someone knows the raffle key in advance, it is possible to influence the outcome of the game by buying tickets until one of them is the winning one (there is only one ticket per address, but someone can have several addresses). As a consequence:

- the raffle key cannot simply be stored in the contract.
- the raffle key cannot be a secret that only the admin knows (for the reason above), to be passed to the contract when it is time to announce the winner. Indeed, the admin could disappear, and no winner would ever be announced.

For the admin not to be the only one to know the key, each player must possess a part of the key (called here a partial key), such that the raffle key is the sum of each player's partial key. For a player's partial key not to be known by the other players, it must be encrypted by the player.

When it comes to selecting the winning ticket, each player is required to reveal their partial key so that the contract can compute the raffle key. However, a player could influence the outcome by not revealing the partial key. It is therefore necessary that the encrypted partial key can be decrypted by anyone after some time. A reward is sent to any account that reveals a key.

The timelock encryption feature of the Michelson chest data type provides the required property: a timelocked value is encrypted strongly enough that even the most powerful computer will take more than a certain amount of time to crack it, but weakly enough that, given a bit more time, any decent computer will manage to crack it. That is to say, beyond a certain amount of time the value may be considered public.
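In plain Python, the selection scheme described above amounts to the following sketch (illustrative only, with made-up revealed values; the real contract of course works with timelocked chests rather than plaintext keys):

```python
# Each player contributes one partial key; the raffle key is only
# known once every chest has been opened (or force-opened after the
# timelock expires).
partial_keys = [271828, 314159, 161803]          # hypothetical revealed values

raffle_key = sum(partial_keys)                   # 747790
winning_ticket = raffle_key % len(partial_keys)  # 747790 % 3 == 1
print(winning_ticket)
```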
## Raffle storage

The contract is originated with the following parameters:

- owner is the address of the contract administrator
- jackpot is the prize in tez
- ticket_price is the price in tez of a ticket

```archetype
archetype raffle(
  owner        : address,
  jackpot      : tez,
  ticket_price : tez)
```

### State

The contract holds:

- a state with 3 possible values:
  - Created is the initial state, during which tickets cannot be bought yet
  - Initialised is the state when the administrator has initialised the raffle
  - Transferred is the state when the prize has been transferred to the winner

```archetype
states =
| Created initial
| Initialised
| Transferred
```

- the open date, beyond which tickets can be bought, initialized to none
- the date beyond which tickets cannot be bought, initialized to none

```archetype
variable open_buy  : option<date> = none
variable close_buy : option<date> = none
```

The schema below illustrates the periods defined by these dates, and the contract's states:

### Other

The contract also holds:

- the reveal fee, initialized to none:

```archetype
variable reveal_fee : option<rational> = none
```

- the time used to generate the timelocked value of the raffle key (it should be high enough to be compliant with the close date), initialized to none:

```archetype
variable chest_time : option<nat> = none
```

- a collection that will contain the addresses of all players and their raffle key:

```archetype
asset player {
  id                : address;
  locked_raffle_key : chest;  (* partial key *)
  revealed          : bool = false;
}
```

- the raffle key, updated when a player's partial key is revealed:

```archetype
variable raffle_key : nat = 0
```

## Entrypoints

### initialise

The initialise entrypoint is called by the contract admin (called "owner") to set the main raffle parameters:

- open buy is the date beyond which players can buy tickets
- close buy is the date beyond which players cannot buy tickets
- chest time is the difficulty of breaking the encryption of the players' partial raffle keys
- reveal fee is the percentage of the ticket price transferred when revealing a player's partial raffle key

##### info

Currently you may count on a chest time of 500 000 per second on a standard computer, up to a chest time value of 500 000 000 per second on dedicated hardware.

It requires that:

- the open and close dates be consistent
- the reveal fee be equal to or less than 1
- the transferred amount of tez be equal to the jackpot storage value

It transitions from the Created state to Initialised, and sets the raffle parameters.

```archetype
transition initialise(ob : date, cb : date, t : nat, rf : rational) {
  called by owner
  require {
    r0 : now <= ob < cb otherwise "INVALID_OPEN_CLOSE_BUY";
    r2 : rf <= 1 otherwise "INVALID_REVEAL_FEE";
    r3 : transferred = jackpot otherwise "INVALID_AMOUNT"
  }
  from Created to Initialised
  with effect {
    open_buy   := some(ob);
    close_buy  := some(cb);
    chest_time := some(t);
    reveal_fee := some(rf)
  }
}
```

### buy

The buy entrypoint may be called by anyone to buy a ticket. The player must transfer the encrypted value of the partial raffle key, so that the partial key value may potentially become publicly known when it comes to declaring the winning ticket.

It requires that:

- the contract be in the Initialised state
- the transferred amount of tez be equal to the ticket price
- the close date not be reached

It records the caller's address in the player collection.
```archetype
entry buy (lrk : chest) {
  state is Initialised
  require {
    r4 : transferred = ticket_price otherwise "INVALID_TICKET_PRICE";
    r5 : opt_get(open_buy) < now < opt_get(close_buy) otherwise "RAFFLE_CLOSED"
  }
  effect {
    player.add({ id = caller; locked_raffle_key = lrk })
  }
}
```

##### info

Note that the add method fails with (Pair "KeyExists" "player") if the caller is already in the collection.

### reveal

The reveal entrypoint may be called by anyone to reveal a player's partial key and contribute to the computation of the raffle key. The caller receives a percentage of the ticket price as a reward.

It requires that:

- the contract be in the Initialised state
- the current date be beyond close_buy

```archetype
entry reveal(addr : address, k : chest_key) {
  state is Initialised
  require {
    r6 : opt_get(close_buy) < now otherwise "RAFFLE_OPEN";
    r7 : not player[addr].revealed otherwise "PLAYER_ALREADY_REVEALED"
  }
  effect {
    match open_chest(k, player[addr].locked_raffle_key, opt_get(chest_time)) with
    | left (unlocked) ->
      match unpack<nat>(unlocked) with
      | some(partial_key) ->
        raffle_key += partial_key;
        player[addr].revealed := true
      | none -> player.remove(addr)
      end
    | right(open_error) ->
      if open_error
      then fail("INVALID_CHEST_KEY")
      else player.remove(addr)
    end;
    transfer (opt_get(reveal_fee) * ticket_price) to caller;
  }
}
```

Note that the player addr may be removed in 2 situations:

1. the chest key opens the chest but is unable to decipher the content; this is the case if, for example, the chest was not generated with the correct chest time value
2. the chest is deciphered properly, but it does not contain an integer value

Note at last that, in all cases, the caller is rewarded for the chest key when it is valid.

### transfer

When all players have been revealed, anyone can call the transfer entrypoint to transfer the jackpot to the owner of the winning ticket. It transitions to the Transferred state:

```archetype
transition %transfer() {
  require {
    r8: player.select(the.revealed).count() = player.count() otherwise "EXISTS_NOT_REVEALED"
  }
  from Initialised to Transferred
  with effect {
    transfer balance to player.nth(raffle_key % player.count());
  }
}
```
# Graph executor

## Introduction

Analysis apps on the VisionAppster platform are built using a data flow graph (DFG). A data flow graph consists of nodes that can send and receive messages. The edges of the graph describe dependencies between nodes. Each node in the graph processes its input arguments and produces some output arguments as results. The output arguments can then be passed to other nodes in the graph.

Unlike in imperative programming, the execution order of graph nodes is not fully defined in advance. The system is free to choose any order as long as the dependencies are satisfied. This makes it possible to execute many nodes in the graph simultaneously.

As with most cool things in computer science, the idea of using data flow graphs to build programs is nothing new. Since the 1960s, many programming languages and APIs have implemented the paradigm in one form or another. The VisionAppster platform neither introduces a new data flow programming language nor requires one to build programs using data flow primitives from the ground up. Instead, a hybrid approach is taken: the nodes – a.k.a. tools – are relatively complex and can be written in a traditional programming language such as C, C++ or anything else that provides a C interface. This makes it possible to write algorithms in a traditional way but still benefit from parallelization on a higher level.

Consider the simple example below. The image sent by an image source is processed by two tools and the results are subtracted from each other. If you were using a traditional image processing library, the code you would write would be something like this (pseudo-code; the tool names are illustrative):

```python
def run():
    while True:
        image = image_source()          # grab the next image
        filtered = filter_tool(image)   # first processing tool
        mapped = map_tool(image)        # second processing tool
        yield image_diff(filtered, mapped)
```
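To make the dependency-driven execution concrete, here is a minimal executor sketch (my own illustration, not the VisionAppster API): it repeatedly runs every node whose inputs are ready, so independent nodes, like the two processing tools above, can execute in parallel.

```python
from concurrent.futures import ThreadPoolExecutor

def run_graph(nodes, edges, sources):
    """Run a DFG once. `nodes` maps name -> function, `edges` maps
    name -> list of input node names, `sources` maps name -> value."""
    results = dict(sources)
    remaining = set(nodes) - set(results)
    with ThreadPoolExecutor() as pool:
        while remaining:
            # All nodes whose dependencies are satisfied run concurrently.
            ready = [n for n in remaining
                     if all(d in results for d in edges[n])]
            if not ready:
                raise ValueError("dependency cycle in graph")
            futures = {n: pool.submit(nodes[n],
                                      *(results[d] for d in edges[n]))
                       for n in ready}
            for n, fut in futures.items():
                results[n] = fut.result()
            remaining -= set(ready)
    return results
```

For the example above, `edges` would map `'diff'` to `['filtered', 'mapped']`; the two tool nodes depend only on the source image, so they are submitted in the same round and run in parallel.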
# Projectile hitting the incline plane horizontally

1. Jun 19, 2016

### Saurav7

1. The problem statement, all variables and given/known data

A projectile is thrown at angle θ with an inclined plane of inclination 45°. Find θ if the projectile strikes the inclined plane horizontally.

2. Relevant equations

Taking the x-axis along the incline and the y-axis perpendicular to the incline:

Vx = u cosθ − g sin(45°) t
Vy = u sinθ − g cos(45°) t

These are the velocities after time t. Vx is the velocity along the plane after time t, and Vy the velocity perpendicular to the plane after time t.

3. The attempt at a solution

At the time of the horizontal collision with the incline, the projectile will make an angle of 45° with the incline, and hence the velocity of the projectile is V = V cos45° i + V sin45° j, which means the x and y components of the velocity are equal (as sin45° = cos45°). Hence I applied the condition Vx = Vy, which gives us:

u cosθ − g sin(45°) t = u sinθ − g cos(45°) t
⇒ u cosθ − u sinθ = g sin(45°) t − g cos(45°) t
⇒ u(cosθ − sinθ) = g t (sin45° − cos45°)
⇒ u(cosθ − sinθ) = 0 (as sin45° = cos45°)
⇒ cosθ = sinθ
⇒ tanθ = 1, which gives θ = 45°.

But the answer is θ = tan⁻¹(2) − 45°. I know how the right solution came, but I am more bothered about what was wrong in my attempt that I got it incorrect.

Last edited: Jun 19, 2016

2. Jun 19, 2016

### tommyxu3

This is right, but mind that what is equal is their "magnitude", so you must put $|u\cos\theta - g\sin\frac{\pi}{4}t |= |u\sin\theta - g\cos\frac{\pi}{4}t|.$ Then the answer is one of the conditions (throwing upwards).

3. Jun 19, 2016

### Saurav7

I can't solve it further. Can you help me a bit further?

4. Jun 19, 2016

### tommyxu3

You will get $u(\cos\theta+\sin\theta)=\sqrt{2}gt$ (of course the other sign choice from your original one), but you cannot solve it with just this. My hint is to observe that the object finally falls on the plane again.

5. Jun 19, 2016

### Saurav7

Thanks a lot, brother. I got u(cosθ + sinθ) = √2 g t, and this happens at t = time of flight, so we get t = T = 2u sinθ/(g cos45°). Putting in the values I got 3 sinθ + cosθ = 0. Then, dividing by cosθ on both sides, we get 3 tanθ + 1 = 0, and finally θ = arctan(−1/3). I don't know if this is the right result.

6. Jun 19, 2016

### tommyxu3

Plugging in the value you should get $u(\cos\theta+\sin\theta)=\sqrt{2}gt=4u\sin\theta,$ so it must be $\cos\theta=3\sin\theta$ instead of what you got. You can get $\arctan \frac{1}{3}=\arctan{2}-\frac{\pi}{4}$ by some trigonometric knowledge.

7. Jun 19, 2016

### Saurav7

Man, you are a life saver to me. I was stuck on this since yesterday night :D. Thanks a lot, a lot. :D Is there any other way to connect with you in case I get stuck again on some other question :P Like can I PM you or something :D Again, thanks a lot. :)

8. Jun 19, 2016

### tommyxu3

You can similarly discuss with everyone here like just now~ or maybe you can send me messages in the conversations directly also haha.
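As a final sanity check on the accepted answer (my own addition, not from the thread), a few lines of Python confirm that a projectile launched at θ = arctan 2 − 45° above a 45° incline is moving horizontally when it lands on the incline:

```python
import math

g, u = 9.8, 10.0                     # arbitrary; only the angle matters
theta = math.atan(2) - math.pi / 4   # the thread's answer, measured from the incline
phi = theta + math.pi / 4            # launch angle from the horizontal

# Impact with the incline y = x occurs when u*sin(phi)*t - g*t**2/2 = u*cos(phi)*t:
t_impact = 2 * u * (math.sin(phi) - math.cos(phi)) / g

vy = u * math.sin(phi) - g * t_impact   # vertical velocity at impact
print(abs(vy) < 1e-12)                  # True: the strike is horizontal
```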
# Revision history

Hi, when this happens you can update everything else with:

    yum update --skip-broken

and you always have the kmod in updates-testing, so you can do:

    yum --enablerepo=rpmfusion-free-updates-testing update kernel

Before a kernel enters updates it enters updates-testing, and rpmfusion-free-updates-testing also builds kmods for the kernels in testing.

In reply:

    rpm -q kmod-VirtualBox
    kmod-VirtualBox-4.3.10-1.fc20.2.x86_64

kmod-VirtualBox is a meta-package whose sole purpose is to require the VirtualBox kernel module(s) for the newest kernel, to make sure you get it together with a new kernel. So if you have kmod-VirtualBox installed, you will need to use --skip-broken, and IMHO you should.
# Taking a Moment… in Calculus

In calculus, I’ve historically asked kids to take the derivative of:

$f(x)=\frac{2x^2+\sqrt{x}}{\sqrt{x}}$

and students will immediately go to the quotient rule. OBVIOUSLY! There’s a numerator and denominator. Duh. So go at it!

Unfortunately, this is VERY UNWISE because it leads to a lot more work. And I was sick of my kids not taking a moment to think: what are my options, and what might be the best option available? Also, kids generally found it hard to deal when we started mixing the derivative rules up! So I came up with a sheet to address this and paired kids to work on it. (I’ve also had kids think they can do some crazy algebra with $g(x)=\frac{x^2+1}{x+1}$. This sheet also helped me talk with kids individually about that.)

For a little context, my kids have only learned the power rule, the product rule, the quotient rule, and that the derivative of $e^x$ is $e^x$. They have not yet been formally exposed to the chain rule.

[.pdf, .doc]

1. This is a really good exercise. I’ve done something like this with my Calc 2 students this semester with series tests. I’d give the students an infinite series, then ask them to determine whether it converges or diverges using the method of their choice. Then I’d ask them to do it again, using a different method. On the exam covering series they were given a choice of 2 out of 8 series to determine convergence/divergence and then asked to choose one of those two and do it differently. Eventually they got to the point where the first thing they’d do when they see a series is think through all the options first, THEN decide which option is best and go with it.

   I’d also add that this is a great kind of question to do with clickers (or little whiteboards). Example: Given f(x) = (x^2 + x)/(sqrt(x)). The BEST way to take the derivative of this function is (a) Take derivative of the top and the derivative of the bottom (b) Simplify the algebra and use the Power Rule (c) Quotient Rule (d) Product Rule (e) None of the above. Gets some interesting discussion going as to what we might mean by “best”.

   1. OMG I love that idea (multiple choice) to start out the class!!! Totally stealing that next year.

2. Great stuff, Sam! I’m at just the same moment in my calculus class where most all of the derivative rules are on the table and the synthesizing needs to start happening. I used your second through fourth pages *today*, untouched, preceded by this warm-up that was inspired by Robert’s comment. I figure that offering those back is the best way for me to say thanks and hooray!

   1. And you know once I introduce the chain rule I will be using your warm up!!! It’s pretty fantastic! Not pretty fantastic — TOTALLY fantastic! Thank you! Hey, do you do anything special/interesting/investigative/intuitive to build up to the chain rule? Or introduce it? SAM

2. Jim Doherty says:

   Love this exercise Sam. I have been having multiple conversations with my Calc class this year about the benefits of cleverness and diligence/persistence. This is the first year I have taught non-AP Calculus and the difference in their cleverness is striking. My honors kids are willing and (mostly) able to go through procedures accurately, but they do not see where they can save energy and think rather than do very well.
   An example from a recent quiz was to find the derivative of (x^5 + 1)/(x + 1). To me this just screams out to divide and simplify first; none of my 23 kids saw that… How do we instill this instinct to analyze before diving in and just DOING the work in front of them?

   1. Well, I know none of my kids would see that either (though my first instinct would also be to divide it out). I don’t know the full answer to your question, but I suppose the answer would include (a) exercises like this worksheet, (b) modeling multiple approaches and discussing the pros and cons, and (c) allowing the messiness of math in the classroom (and finding ways to encourage messiness). But yeah, easier said than done. (I don’t do it.)

3. Jim Doherty says:

   One of my strategies, which works just a little bit, is to make an encouraging scene in class whenever anyone comes up with these clever time-saving strategies, whether it is on an assessment or in class convos. I guess that after spending as much time as we do with limits and looking for ways to rewrite expressions such as the one I offered as an example, I WANT them to carry those instincts over to other problems. I know how difficult this type of transfer of knowledge is, but my hope springs eternal that I can structure our class conversations in such a way as to open this kind of thinking for my students. So often they take the hard way and get hopelessly lost in the maze of Algebra that this approach creates.

4. jg says:

   Not being a math teacher, I’ve always wanted to know… why does anyone teach the quotient rule? I never learned it myself – it was obviously a special case of the product rule, and is more complicated to remember, so I ignored it and never ran into any case where that was an issue. Is there a deeper truth that I’m missing?

   1. The Quotient Rule can always be replaced by the Product Rule, but there is a catch — you also have to do the Chain Rule. So it’s a question of whether it’s better to employ one rule that is sort of complicated (Quotient) or two rules, one of which is not that complicated and the second of which often gives students fits. Kind of gets back to that notion I mentioned about the “best” way of doing something in calculus.

      1. jg says: You can’t really do much in calculus (or physics) without understanding the chain rule, so maybe the extra practice is a plus, then!

5. Jim Doherty says:

   jg, I am kind of with you on this. I tend to avoid it and just use a combination of the chain rule and the product rule. The algebra of the quotient rule is often hideous.

6. Just a quick note about Example #2 on your worksheet. Since the denominator is a very simple linear expression, what about actually dividing the numerator by the denominator, and getting a polynomial (easy) with a simpler “remainder term” (quotient rule, but simpler than doing it on the original)? It may not be anyone’s preferred method, but I think it might be worth mentioning. (It also might be nice to have an example with three options!)

   1. Good point – similar to what Jim said above with (x^5+1)/(x+1). Totes. We teach them polynomial division in the curriculum and then they never really see it in action except for here and there…

7. bigbird4217 says:

   What do you recommend if I want to steal your handouts? Seems I can’t download or cleanly copy your docs from within the posts. I promise to cite…

   1. I have uploaded under the embedded PDF both .doc and .pdf documents. They should work. If they aren’t, I’m flummoxed as to why they aren’t working…
# Dynamical Forms of Dark Energy

## The Quintessence

The cosmological constant represents nothing but the simplest realization of dark energy - the hypothetical substance introduced to explain the accelerated expansion of the Universe. There is a dynamical alternative to the cosmological constant - the scalar fields, formed in the post-inflation epoch. The most popular version is a scalar field $\varphi$ evolving in a properly designed potential $V(\varphi)$. Numerous models of this type differ in the choice of the scalar field Lagrangian.

The simplest model is the so-called quintessence. In ancient and medieval philosophy this term (literally "the fifth essence", after earth, water, air and fire) meant the concentrated extract, the creative force penetrating all the material world. We shall understand quintessence as a scalar field in a potential, minimally coupled to gravity, i.e. feeling only the influence of space-time curvature. Besides that, we restrict ourselves to the canonical form of the kinetic energy. The action for fields of this type takes the form

$S=\int d^4x \sqrt{-g}\; L=\int d^4x \sqrt{-g}\left[\frac12g^{\mu\nu}\frac{\partial\varphi}{\partial x^\mu} \frac{\partial\varphi}{\partial x^\nu}-V(\varphi)\right].$

The equations of motion for the scalar field are obtained as usual, by variation of the action with respect to the field (see Chapter "Inflation").

### Problem 1

Obtain the Friedman equations for the case of a flat Universe filled with quintessence.

### Problem 2

Obtain the general solution of the Friedman equations for a Universe filled with a free scalar field, $V(\varphi)=0$.

### Problem 3

Show that in the case of a Universe filled with non-relativistic matter and quintessence the following relation holds:

$\dot H=-4\pi G(\rho_m+\dot\varphi^2).$

### Problem 4

Show that in the case of a Universe filled with non-relativistic matter and quintessence the Friedman equations

$H^2=\frac{8\pi G}{3}\left[\rho_m+\frac12\dot\varphi^2+V(\varphi)\right],$

$\dot H =-4\pi G(\rho_m+\dot\varphi^2)$

can be transformed to the form

$\frac{8\pi G}{3H_0^2}\left(\frac{d\varphi}{dx}\right)^2=\frac{2}{3H_0^2x}\frac{d\ln H}{dx}-\frac{\Omega_{m0}x}{H^2};$

$\frac{8\pi G}{3H_0^2}V(x)=\frac{H^2}{H_0^2}-\frac{x}{6H_0^2}\frac{d H^2}{dx}-\frac12\Omega_{m0}x^3;$

$x\equiv1+z.$

### Problem 5

Show that the conservation equation for quintessence can be obtained from the Klein-Gordon equation

$\ddot\varphi+3H\dot\varphi+\frac{dV}{d\varphi}=0.$

### Problem 6

Find the explicit form of the Lagrangian describing the dynamics of a Universe filled with a scalar field in a potential $V(\varphi)$. Use it to obtain the equations of motion for the scale factor and the scalar field.

### Problem 7

In a flat Universe filled with a scalar field $\varphi$, obtain a closed equation for $\varphi$ only. (See S. Downes, B. Dutta, K. Sinha, arXiv:1203.6892)

### Problem 8

What is the reason for the requirement that the scalar field's evolution in the quintessence model be slow enough?

### Problem 9

Find the potential and kinetic energies for quintessence with a given state parameter $w$.

### Problem 10

Find the dependence of the state equation parameter $w$ for a scalar field on the quantity

$x=\frac{\dot\varphi^2}{2V(\varphi)}$

and determine the ranges of $x$ corresponding to inflation in the slow-roll regime, the matter-dominated epoch and the rigid state equation ($p\sim\rho$) limit, respectively.
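For reference in the problems above and below, recall the standard expressions that follow directly from this action for a homogeneous field $\varphi(t)$ (stated here for convenience; deriving them is essentially the content of Problems 9 and 10):

$$\rho_\varphi=\frac{\dot\varphi^2}{2}+V(\varphi),\qquad p_\varphi=\frac{\dot\varphi^2}{2}-V(\varphi),\qquad w_\varphi\equiv\frac{p_\varphi}{\rho_\varphi}=\frac{x-1}{x+1},\quad x\equiv\frac{\dot\varphi^2}{2V(\varphi)},$$

so $x\ll1$ gives $w_\varphi\to-1$ (slow roll), $x=1$ gives $w_\varphi=0$, and $x\gg1$ gives $w_\varphi\to1$ (the rigid limit $p=\rho$).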
### Problem 11

Show that if the kinetic energy $K=\dot\varphi^2/2$ of a scalar field is initially much greater than its potential energy $V(\varphi)$, then it will decrease as $a^{-6}$.

### Problem 12

Show that the energy density of a scalar field $\varphi$ behaves as $\rho_\varphi\propto a^{-n}$, $0\le n\le6$.

### Problem 13

Show that the dark energy density with state equation $p=w(a)\rho(a)$ can be presented as a function of the scale factor in the form

$\rho=\rho_0 a^{-3[1+\bar w(a)]},$

where $\bar w(a)$ is the parameter $w$ averaged in the logarithmic scale

$$\bar w(a) \equiv \frac{\int w(a)d\ln a}{\int {d\ln a} }.$$

### Problem 14

Consider the case of a Universe filled with non-relativistic matter and quintessence with the state equation $p=w\rho$, and show that the first Friedman equation can be presented in the form

$H^2(z)=H_0^2\left[\Omega_{m0}(1+z)^3+(1-\Omega_{m0})e^{3\int_0^z\frac{dz'}{1+z'}(1+w(z'))}\right].$

### Problem 15

Show that for the model of the Universe considered in the previous problem the state equation parameter $w(z)$ can be presented in the form

$w(z)=\frac{\frac23(1+z)\frac{d\ln H}{dz}-1}{1-\frac{H_0^2}{H^2}\Omega_{m0}(1+z)^3}.$

### Problem 16

Show that the result of the previous problem can be presented in the form

$w(z)=-1+(1+z)\frac{\frac23E(z)E'(z)-\Omega_{m0}(1+z)^2}{E^2(z)-\Omega_{m0}(1+z)^3},\quad E(z)\equiv\frac{H(z)}{H_0}.$

### Problem 17

Show that the decrease of the scalar field's energy density with increasing scale factor slows down as the scalar field's potential energy $V(\varphi)$ starts to dominate over the kinetic energy density $\dot\varphi^2/2$.

### Problem 18

Express the time derivative $\dot\varphi$ through the quintessence density $\rho_\varphi$ and the state equation parameter $w_\varphi$.

### Problem 19

Estimate the magnitude of the scalar field variation $\Delta\varphi$ during time $\Delta t$.

### Problem 20

Show that in the radiation-dominated or matter-dominated epoch the variation of the scalar field is small, and that the measure of its smallness is given by the relative density of the scalar field.

### Problem 21

Show that in a quintessence $(w>-1)$ dominated Universe the condition $\dot{H}<0$ always holds.

### Problem 22

Consider a simple bouncing solution of the Friedman equations that avoids the singularity. This solution requires positive spatial curvature $k=+1$, negative cosmological constant $\Lambda<0$ and a "matter" source with equation of state $p=w\rho$ with $w$ in the range

$-1<w<-\frac13.$

In the special case $w=-2/3$ the Friedman equations describe a constrained harmonic oscillator (a simple harmonic Universe). Find the corresponding solutions. (Inspired by P. Graham et al., arXiv:1109.0282)

### Problem 23

Derive the equation for the simple harmonic Universe (see the previous problem), using the results of problem #DE04.

### Problem 24

A barotropic liquid is a substance for which the pressure is a single-valued function of the density. Is quintessence generally barotropic?

### Problem 25

Show that a scalar field oscillating near the minimum of its potential is not a barotropic substance.

### Problem 26

For a scalar field $\varphi$ with state equation $p=w\rho$ and relative energy density $\Omega_\varphi$, calculate the derivative

$w'=\frac{dw}{d\ln a}.$

### Problem 27

Calculate the sound speed in the quintessence field $\varphi(t)$ with potential $V(\varphi)$.

### Problem 28

Find the dependence of the quintessence energy density on redshift for the state equation $p_{DE}=w(z)\rho_{DE}$.
### Problem 29

The equation of state $p=w(a)\rho$ for quintessence is often parameterized as $w(a)=w_0 + w_1(1-a)$. Show that in this parametrization the energy density and pressure of the scalar field take the form:

$$\rho(a) \propto a^{-3[1+w_{\it eff}(a)]},\quad p(a) \propto (1+w_{\it eff}(a))\rho(a),$$

where

$$w_{\it eff}(a)=(w_0+w_1)+(1-a)w_1/\ln a.$$

### Problem 30

Find the dependence of the Hubble parameter on redshift in a flat Universe filled with non-relativistic matter with current relative density $\Omega_{m0}$ and dark energy with the state equation $p_{DE}=w(z)\rho_{DE}$.

### Problem 31

Show that in a flat Universe filled with non-relativistic matter and an arbitrary component with the state equation $p=w(z)\rho$ the first Friedman equation can be presented in the form:

$w(z)=-1+\frac13\frac{d\ln(\delta H^2/H_0^2)}{d\ln(1+z)},$

where $\delta H^2 = H^2 - \frac{8\pi G}{3}\rho_m$ describes the contribution to the Universe's expansion rate of all components other than matter.

### Problem 32

Express the time derivative of a scalar field through its derivative with respect to redshift, $d\varphi/dz.$

### Problem 33

Show that the particle horizon does not exist for the case of quintessence, because the corresponding integral diverges (see Chapter 2(3)).

### Problem 34

Show that in a Universe filled with quintessence the number of observed galaxies decreases with time.

### Problem 35

Let $t$ be some time in the distant past, $t\ll t_0$. Show that in a Universe dominated by a substance with state parameter $w>-1$ the current cosmic horizon (see Chapter 3) is

$R_h(t_0)\approx\frac32(1+\langle w\rangle)t_0,$

where $\langle w\rangle$ is the time-averaged value of $w$ from $t$ to the present time:

$\langle w\rangle\equiv\frac{1}{t_0}\int\limits_t^{t_0} w(t)dt.$

### Problem 36

From WMAP$^*$ observations we infer that the age of the Universe is $t_0\approx13.7\cdot10^9$ years and the cosmic horizon equals $R_h(t_0)=H_0^{-1}\approx13.5\cdot10^9$ light-years. Show that these data imply the existence of some substance with equation of state $w<-1/3$ - "dark energy".

$^*$ The Wilkinson Microwave Anisotropy Probe is a spacecraft which measures differences in the temperature of the Big Bang's remnant radiant heat - the cosmic microwave background radiation - across the full sky.

### Problem 37

The age of the Universe today depends upon the equation of state of the dark energy. Show that the more negative the parameter $w$ is, the older the Universe is today.

### Problem 38

Consider a Universe filled with dark energy with a state equation depending on the Hubble parameter and its derivatives,

$p=w\rho+g(H,\dot H, \ddot H,\ldots;t).$

What equation does the Hubble parameter satisfy in this case?

### Problem 39

Show that taking the function $g$ (see the previous problem) in the form

$g(H,\dot H, \ddot H)=-\frac{2}{\kappa^2}\left(\ddot H + \dot H + \omega_0^2 H + \frac32(1+w)H^2-H_0\right),\quad \kappa^2=8\pi G,$

leads to an equation for the Hubble parameter identical to the one for a harmonic oscillator, and find its solution.

### Problem 40

Find the time dependence of the Hubble parameter in the case of the function $g$ (see problem #DE64) in the form

$g(H;t)= -\frac{2\dot f(t)}{\kappa^2f(t)}H,\quad \kappa^2=8\pi G,$

where $f(t)$ is an arbitrary function of time; here take $f(t)=-\ln(H_1+H_0\sin\omega_0t)$ with $H_1>H_0$.

### Problem 41

Show that in an open Universe the scalar field potential $V[\varphi(\eta)]$ depends monotonically on the conformal time $\eta$.
### Problem 42

Reconstruct the dependence of the scalar field potential $V(a)$ on the scale factor, based on given dependencies for the field's energy density $\rho_\varphi(a)$ and state equation parameter $w(a)$.

### Problem 43

Find the quintessence potential providing the power law growth of the scale factor $a\propto t^p$, where the accelerated expansion requires $p>1$.

### Problem 44

Let $a(t)$, $\rho(t)$, $p(t)$ be solutions of the Friedman equations. Show that for the case $k=0$ the function $\psi_n\equiv a^n$ is a solution of the "Schrödinger equation" $\ddot\psi_n=U_n\psi_n$ with potential (see A.V. Yurov, arXiv:0905.1393)

$U_n=n^2\rho-\frac{3n}{2}(\rho+p).$

### Problem 45

Consider a flat FLRW Universe filled with a scalar field $\varphi$. Show that in the case when $\varphi=\varphi(t)$, the Einstein equations with the cosmological term reduce to the "Schrödinger equation"

$\ddot\psi=3(V+\Lambda)\psi$

with $\psi=a^3$. Derive the equation for $\varphi(t)$ (see A.V. Yurov, arXiv:0305019).

### Problem 46

Consider a FLRW space-time filled with non-interacting matter and dark energy components. Assume the following forms for the equation of state parameters of matter and dark energy:

$w_m=\frac{1}{3(x^\alpha+1)},\quad w_{DE}=\frac{\bar{w}x^\alpha}{x^\alpha+1},$

where $x=a/a_*$, with $a_*$ being some reference value of $a$, $\alpha$ is some positive constant and $\bar{w}$ is a negative constant. Analyze the dynamics of the Universe in this model. (See S. Kumar, L. Xu, arXiv:1207.5582)

## Tracker Fields

A special type of scalar fields - the so-called tracker fields - was discovered at the end of the nineties. The term reflects the fact that a wide range of initial values for fields of this type rapidly converges to a common evolutionary track. The initial values of the energy density for such fields may vary by many orders of magnitude without considerable effect on the long-time asymptote. The peculiar property of tracker solutions is the fact that the state equation parameter for such a field is determined by the dominant component of the cosmological background. It should be stressed that, unlike a standard attractor, the tracker solution is not a fixed point (in the sense of a solution corresponding to a fixed point of a system of autonomous differential equations): the ratio of the scalar field energy density to that of the background component (matter or radiation) continuously changes as the quantity $\varphi$ descends along the potential. This is a desirable feature, because we want the energy density of $\varphi$ to ultimately exceed the background density and transfer the Universe into the observed phase of accelerated expansion. Below we consider a number of concrete realizations of tracker fields.

### Problem 47

Show that the initial value of the tracker field should obey the condition $\varphi_0=M_{Pl}$.

### Problem 48

Show that the densities of kinetic and potential energy of the scalar field $\varphi$ in the potential of the form

$V(\varphi)=M^4\exp(-\alpha\varphi M),\quad M\equiv\frac{M_{Pl}^2}{16\pi}$

are proportional to the density of the concomitant component (matter or radiation), and that it therefore realizes a tracker solution.

### Problem 49

Consider a scalar field potential

$V(\varphi)=\frac A n\varphi^{-n},$

where $A$ is a dimensional parameter and $n>2$. Show that the solution $\varphi(t)\propto t^{2/(n+2)}$ is a tracker field under the condition $a(t)\propto t^m$, with $m=1/2$ or $2/3$ (either radiation or non-relativistic matter dominates).
### Problem 50 Show that the scalar field energy density corresponding to the tracker solution in the potential $V(\varphi)=\frac A n\varphi^{-n}$ (see the previous problem #DE73) decreases slower than the energy density of radiation or non-relativistic matter.

### Problem 51 Find the equation of state parameter $w_\varphi\equiv p_\varphi/\rho_\varphi$ for the scalar field of problem #DE73.

### Problem 52 Use the explicit form of the tracker field in the potential of problem #DE73 to verify the value of $w_\varphi$ obtained in the previous problem.

## The K-essence

Let us introduce the quantity $$X\equiv \frac{1}{2}g^{\mu\nu}\frac{\partial \varphi}{\partial x^{\mu}}\frac{\partial \varphi}{\partial x^{\nu}}$$ and consider the action for the scalar field in the form $$S=\int d^4x\sqrt{-g}\,L\left( \varphi ,X \right),$$ where the Lagrangian $L$ is, generally speaking, an arbitrary function of the variables $\varphi$ and $X$. The dark energy model realized through a modification of the kinetic term of the scalar field is called $k$-essence. The traditional action for the scalar field corresponds to $$L\left( \varphi ,X \right)=X-V(\varphi ).$$ In the problems proposed below we restrict ourselves to the subset of Lagrangians of the form $$L\left( \varphi ,X \right)=K(X)-V(\varphi ),$$ where $K(X)$ is a positive definite function of the kinetic energy $X$. In order to describe a homogeneous Universe we should choose $$X=\frac{1}{2}\dot{\varphi}^{2}.$$

### Problem 53 Find the density and pressure of the $k$-essence.

### Problem 54 Construct the equation of state for the $k$-essence.

### Problem 55 Find the sound speed in the $k$-essence.

### Problem 56 The sound speed $c_s$ in any medium must satisfy two fundamental requirements: first, the sound waves must be stable, and second, the speed must be small enough to preserve the causality condition. Therefore $0\le c_s^2\le1.$ Reformulate the latter condition in terms of the scale factor dependence of the equation of state parameter $w(a)$ for the case of the $k$-essence.

### Problem 57 Find the state equation for the simplified $k$-essence model with Lagrangian $L=F(X)$ (the so-called pure kinetic $k$-essence).

### Problem 58 Find the equation of motion for the scalar field in the pure kinetic $k$-essence.

### Problem 59 Show that the scalar field equation of motion for the pure kinetic $k$-essence model gives the tracker solution.
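A compact sketch of the quantities behind Problems 53 and 55 (assuming the standard definitions $\rho=2X\,\partial L/\partial X-L$ and $p=L$, with homogeneous $X=\dot\varphi^2/2$): for $L=K(X)-V(\varphi)$ one finds
$$\rho=2XK_X-K+V,\qquad p=K-V,\qquad c_s^2\equiv\frac{\partial p/\partial X}{\partial\rho/\partial X}=\frac{K_X}{K_X+2XK_{XX}},$$
where $K_X\equiv dK/dX$; in the canonical case $K=X$ this reduces to $c_s^2=1$.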
## Phantom Energy

The full set of available cosmological observational data shows that the state equation parameter $w$ for dark energy lies in a narrow range near the value $w=-1$. In the previous subsections we considered the region $-1\le w\le-1/3$. The lower bound $w=-1$ of the interval corresponds to the cosmological constant, and all the remainder can be covered by scalar fields with canonical Lagrangians. Recall that the upper bound $w=-1/3$ appears due to the necessity to provide the observed accelerated expansion of the Universe. What other values of the parameter $w$ can be used? The question is very hard to answer for an energy component we know so little about. General Relativity restricts the possible values of the energy-momentum tensor by the so-called "energy conditions" (see Chapter 2). One of the simplest among them is the so-called Null Dominant Energy Condition (NDEC) $\rho+p\ge0$. The physical motivation of the latter is to avoid vacuum instability. Applied to the dynamics of the Universe, the NDEC requires that the density of any allowed energy component cannot grow with the expansion of the Universe. The cosmological constant, with $\dot\rho_\Lambda=0$, $\rho_\Lambda=const$, represents the limiting case. Because of our ignorance concerning the nature of dark energy it is reasonable to ask whether this mysterious substance can differ from the already known "good" sources of energy and whether it can violate the NDEC. Taking into account that dark energy must have positive density (necessary to make the Universe flat) and negative pressure (to provide the accelerated expansion of the Universe), the violation of the NDEC must lead to $w<-1$. Such a substance is called phantom energy. The phantom field $\varphi$ minimally coupled to gravity has the following action: $S=\int d^4x \sqrt{-g}L=-\int d^4x \sqrt{-g}\left[\frac12g^{\mu\nu}\frac{\partial\varphi}{\partial x_\mu} \frac{\partial\varphi}{\partial x_\nu}+V(\varphi)\right],$ which differs from the canonical action for the scalar field only by the sign of the kinetic term.

### Problem 60 Show that the action of a scalar field minimally coupled to gravitation $S=\int d^4x\sqrt{-g}\left[\frac12(\nabla\varphi)^2-V(\varphi)\right]$ leads, under the condition $\dot\varphi^2/2<V(\varphi)$, to $w_\varphi<-1$, i.e. the field is phantom.

### Problem 61 Obtain the equation of motion for the phantom scalar field described by the action of the previous problem.

### Problem 62 Find the energy density and pressure of the phantom field.

### Problem 63 Show that the phantom energy density grows with time. Find the dependence $\rho(a)$ for $w=-4/3$.

### Problem 64 Show that the phantom scalar field violates all four energy conditions.

### Problem 65 Show that in a Universe dominated by a phantom scalar field $(w<-1)$ the condition $\dot{H}>0$ always holds.

### Problem 66 As we have seen in Chapter 3, the Friedman equations describing a spatially flat Universe possess a duality, which connects the expanding and contracting Universe by an appropriate transformation of the state equation. Consider a Universe where the weak energy condition $\rho\ge0,\ \rho+p\ge0$ holds, and show that the ideal liquid associated with the dual Universe is a phantom liquid or the cosmological constant.

### Problem 67 Show that the Friedman equations for the Universe filled with dark energy in the form of a cosmological constant and a substance with the state equation $p=w\rho$ can be presented in the form of a nonlinear oscillator (see M.Dabrowski arXiv:0307128) $\ddot X-\frac{D^2}{3}\Lambda X+D(D-1)kX^{1-2/D}=0$ where $X=a^{D(w)},\quad D(w)=\frac32(1+w).$

### Problem 68 Show that the Universe dual to the one filled with a free scalar field is described by the state equation $p=-3\rho$.

### Problem 69 Show that in the phantom component of dark energy the sound speed exceeds the light speed.

### Problem 70 Construct a phantom energy model with negative kinetic term in a potential satisfying the slow-roll conditions $\frac 1 V \frac{dV}{d\varphi}\ll1$ and $\frac 1 V \frac{d^2V}{d\varphi^2}\ll1.$
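A one-line worked example for Problem 63 (assuming only the continuity equation with constant $w$): $\dot\rho=-3H(1+w)\rho$ integrates to $\rho\propto a^{-3(1+w)}$, so for the phantom value $w=-4/3$ one gets $\rho\propto a^{-3(1-4/3)}=a$, i.e. the density grows linearly with the scale factor instead of diluting.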
## Disintegration of Bound Structures

Historically the first criterion for the decay of gravitationally bound systems due to phantom dark energy was proposed by Caldwell, Kamionkowski and Weinberg (CKW) (see arXiv:astro-ph/0302506v1). The authors argue that a satellite orbiting around a heavy attracting body becomes unbound when the total repulsive action of the dark energy inside the orbit exceeds the attraction of the gravity center. The potential energy of gravitational attraction is determined by the mass $M$ of the attracting center, while the analogous quantity for the repulsive potential equals $\rho+3p$ integrated over the volume inside the orbit. This results in the following rough estimate for the disintegration condition $$\label{disintegration} -\frac{4\pi}{3}(\rho+3p)R^3\simeq M.$$

### Problem 71 Show that for $w\ge-1$ a system gravitationally bound at some moment of time (the Milky Way, for example) remains bound forever.

### Problem 72 Show that in the phantom energy dominated Universe any gravitationally bound system will dissociate with time.

### Problem 73 Show that in a Universe filled with non-relativistic matter a hydrogen atom will remain a bound system forever.

### Problem 74 Demonstrate that any gravitationally bound system with mass $M$ and radius (linear scale) $R$, immersed in the phantom background $\left( {w < - 1} \right)$, will decay in the time $t \simeq P\frac{|1+3w|}{|1+w|}\frac29\sqrt{\frac{3}{2\pi}}$ before the Big Rip. Here $P=2\pi\sqrt{\frac{R^3}{GM}}$ is the period on the circular orbit of radius $R$ around the considered system.

### Problem 75 Use the result of the previous problem to determine the time of disintegration for the following systems: galaxy clusters, the Milky Way, the Solar System, the Earth, the hydrogen atom. Consider the case $w=-3/2$.

## Big Rip, Pseudo Rip, Little Rip

The future finite-time singularity is an essential element of phantom cosmology (see S.Nojiri, S.Odintsov, arXiv:hep-th/0505215). One may classify the future singularities in the following way (see S.Nojiri, S.Odintsov and S.Tsujikawa, arXiv:hep-th/0501025): 1. For $t\to t_s$, $a\to\infty$, $\rho\to\infty$, $|p|\to\infty$ ("Big Rip"). The density of phantom dark energy and the scale factor become infinite at some finite time $t_s$. 2. For $t\to t_s$, $a\to a_s$, $\rho\to\rho_s$ or $\rho\to0$, $|p|\to\infty$ ("sudden singularity"). The condition $w<-1$ is necessary for future singularities, but it is not sufficient. If $w$ approaches $-1$ sufficiently rapidly, then it is possible to have a model in which there are no future singularities. Models without future singularities in which $\rho_{DE}$ increases with time will nevertheless eventually lead to the dissolution of bound systems. This process received the name "Little Rip" (see P.Frampton, K.Ludwick and R.Scherrer, arXiv:1106.4996). In the Big Rip the scale factor and energy density diverge at a finite future time; in $\Lambda$CDM, by contrast, there is no such divergence. The Little Rip represents an interpolation between these two limiting cases. 3. For $t\to t_s$, $a\to a_s\ne0$, $\rho\to\infty$, $|p|\to\infty$. 4. For $t\to t_s$, $a\to a_s\ne0$, $\rho\to\rho_s$ (including $\rho_s=0$), while derivatives of $H$ diverge. Here $t_s$, $a_s\ne0$ and $\rho_s$ are constants.

### Problem 76 For a flat Universe composed of matter $(\Omega_m\simeq0.3)$ and phantom energy $(w=-1.5)$ find the time interval left to the Big Rip. An immediate consequence of approaching the Big Rip is the dissociation of bound systems due to the negative pressure inside them.

### Problem 77 Show that all little-rip models can be described by the condition $\ddot f>0$, where $f(t)$ is a nonsingular function such that $a(t)=\exp[f(t)]$.
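A rough estimate for Problem 76 (a sketch following the CKW approximation, assuming the phantom component dominates soon after the present epoch, so that $a\propto(t_{BR}-t)^{2/[3(1+w)]}$):
$$t_{BR}-t_0\approx\frac{2}{3|1+w|}\,H_0^{-1}(1-\Omega_m)^{-1/2},$$
which for $w=-1.5$, $\Omega_m\simeq0.3$ and $H_0^{-1}\approx14\ \mathrm{Gyr}$ gives $t_{BR}-t_0\approx22\ \mathrm{Gyr}$.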
### Problem 78 Consider the approach of the following authors (see S. Nojiri, S.D. Odintsov, and S. Tsujikawa, Phys. Rev. D 71, 063004 (2005); S. Nojiri and S.D. Odintsov, Phys. Rev. D 72, 023003 (2005); H. Stefancic, Phys. Rev. D 71, 084024 (2005)), who expressed the pressure as a function of the density in the form $p=-\rho-f(\rho).$ Show that the condition $f(\rho)>0$ ensures that the density increases with the scale factor.

### Problem 79 Find the dependencies $a(\rho)$ and $t(\rho)$ for the case of a flat Universe filled by a substance with the following state equation $p=-\rho-f(\rho).$

### Problem 80 Solve the previous problem in the case of $$f(\rho)=A\rho^\alpha,\ \alpha=const.$$

### Problem 81 Find the condition for a big-rip singularity in the case $p=-\rho-f(\rho).$

### Problem 82 Show that, taking a power law for $f(\rho)$, namely $f(\rho)=A\rho^\alpha$, a future singularity can be avoided for $\alpha\le1/2$.

### Problem 83 Solve the previous problem using the condition for the absence of future singularities obtained in Problem [#RIPS_1].

### Problem 84 Formulate the condition for the absence of a finite-time future (Big Rip) singularity in terms of the function $\rho(a)$.

The problems below develop an alternative approach to the investigation of singularities in the phantom Universe (see P-H. Chavanis, arXiv:1208.1195).

### Problem 85 Consider the polytropic equation of state $p=\alpha\rho+k\rho^{1+1/n}\equiv-\rho+\rho\left(1+\alpha+k\rho^{1/n}\right)$ under the assumption $-1<\alpha\le1$. The case $\alpha=-1$ is treated separately in Problem \ref{RIPS_7_4}. The additional assumption $1+\alpha+k\rho^{1/n}\le0$ (and the necessary condition $k<0$) guarantees that the density increases with the scale factor. This corresponds to a phantom Universe. Find the explicit dependence $\rho(a)$ and analyze the limits $a\to0$ and $a\to\infty$.

### Problem 86 Consider the previous problem with $\alpha=-1$ and $k<0$. This equation of state was introduced by Nojiri and Odintsov (see Problem \ref{RIPS_7_3}); Chavanis re-derives their results in a more transparent form. In this case the Little Rip dissociates all bound structures, but the strength of the dark energy is not enough to rip apart space-time, as there is no finite-time singularity (P. Frampton, K. Ludwick, and R. Scherrer; see A. Astashenok, S. Nojiri, S. Odintsov, and R. Scherrer, arXiv:1203.1976).

### Problem 87 Show that for any bound system the rip always occurs when either $H$ diverges or $\dot H$ diverges (assuming $\dot H>0$, i.e. the expansion of the Universe is accelerating).

### Problem 88 Solve the previous problem in terms of the function $f(\rho)$.

### Problem 89 Analyze the possible singularities in terms of the characteristics of the scalar field $\varphi$ with the potential $V(\varphi)$.

The Big Rip, Little Rip and Pseudo-Rip all arise from the assumption that the dark energy density $\rho(a)$ is monotonically increasing. If this assumption is broken, one arrives at the so-called "Quasi-Rip" scenario, driven by a type of quintom dark energy. The Quasi-Rip has a unique feature that distinguishes it from the Big Rip, Little Rip and Pseudo-Rip: the Universe has a chance to be rebuilt from the ashes after the terrible rip.

### Problem 90 So-called soft singularities are characterized by a diverging $\ddot a$, whereas both the scale factor $a$ and $\dot a$ are finite. Analyze the features of intersections between soft singularities and geodesics.
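A compact sketch of the mechanics behind Problems 79-82 (assuming a flat Universe, $H=\kappa\sqrt{\rho/3}$, and the continuity equation): for $p=-\rho-f(\rho)$ one has $\dot\rho=-3H(\rho+p)=3Hf(\rho)$, hence
$$3\ln a=\int\frac{d\rho}{f(\rho)},\qquad t=\int\frac{d\rho}{\sqrt{3}\,\kappa\,\rho^{1/2}f(\rho)}.$$
For $f(\rho)=A\rho^\alpha$ the time integral behaves as $\int^\infty\rho^{-(\alpha+1/2)}d\rho$, which converges (the density diverges at a finite time, a Big Rip) for $\alpha>1/2$ and diverges (no finite-time singularity) for $\alpha\le1/2$.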
Consider a "softer" version of the cosmological evolution given by the law $H(t)=\frac{S}{t^\alpha},$ where $S$ is a positive constant and $0<\alpha<1$. Analyze the dynamics of such model at $t\to 0$. ### Problem 92 Reconstruct the potential of the scalar field model, producing the given cosmological evolution $H(t)$. ### Problem 93 Reconstruct the potential of the scalar field model, producing the cosmological evolution $$H(t)=\frac{S}{t^\alpha},\label{RIPS_68}$$ using the technique described in the previous problem. ## The Statefinder In the models including dark energy in different forms it is useful to introduce a pair of cosmological parameters $\{r,s\}$, which is called the statefinder (see V.Sahni, T.Saini, A.Starobinsky, U.Alam astro-ph/0201498): $r\equiv\frac{\dddot a}{aH^3},\ s\equiv\frac{r-1}{3(q-1/2)}.$ These dimensionless parameters are constructed from the scale factor and its derivatives. Parameter $r$ is the next member in the sequence of the kinematic characteristics describing the Universe's expansion after the Hubble parameter $H$ and the deceleration parameter $q$ (see Chapter "Cosmography"). Parameter $s$ is the combination of $q$ and $r$ chosen in such a way that it is independent of the dark energy density. The values of these parameters can be reconstructed with high precision basing on the available cosmological data. After that the statefinder can be successfully used to identify different dark energy models. ### Problem 94 Explain the advantages for the description of the current Universe's dynamics brought by the introduction of the statefinder. ### Problem 95 Express the statefinder $\{r,s\}$ in terms of the total density, pressure and their time derivatives for a spatially flat Universe. ### Problem 96 Show that for a flat Universe filled with a two-component liquid composed of non--relativistic matter (dark matter + baryons) and dark energy with relative density $\Omega _{DE} = \rho _{DE} /\rho _{cr}$ the statefinder takes the form $$r = 1 + {\frac92}\Omega _{DE} w(1 + w) - {\frac32}\Omega _{DE} {\frac{\dot w}{H}};$$ $$s = 1 + w - {\frac13}{\frac{\dot w}{wH}};\quad w \equiv {\frac{p_{DE} } {\rho _{DE} }}.$$ ### Problem 97 Express the statefinder in terms of Hubble parameter $H(z)$ and its derivatives. ### Problem 98 Find the statefinders a) for dark energy in the form of cosmological constant; b) for the case of time--independent state equation parameter $w$; c) for dark energy in the form of quintessence. ### Problem 99 Express the photometric distance $d_L(z)$ through the current values of parameters $q$ and $s$. ## Crossing the Phantom Divide In the quintessence model of dark energy $-1<w<-1/3$. In the phantom model with negative kinetic energy $w<-1$. Recent cosmological data seem to indicate that there occurred the crossing of the phantom divide line in the near past. This means that equation of state parameter $w_{DE}$ crosses the phantom divide line $w_{DE}=-1$. This crossing to the phantom region is possible neither for an ordinary minimally coupled scalar field nor for a phantom field. There are at least three ways to solve this problem. If dark energy behaves as quintessence at early stage, and evolves as phantom at the later stage, a natural suggestion would be to consider a 2-field model (quintom model): a quintessence and a phantom. The next possibility, discussed in the next Chapter, is to consider an interacting model, in which dark energy interacts with dark matter. 
## Crossing the Phantom Divide

In the quintessence model of dark energy $-1<w<-1/3$. In the phantom model with negative kinetic energy $w<-1$. Recent cosmological data seem to indicate that the crossing of the phantom divide line took place in the recent past, i.e. that the equation of state parameter $w_{DE}$ crossed the line $w_{DE}=-1$. Such a crossing into the phantom region is possible neither for an ordinary minimally coupled scalar field nor for a phantom field. There are at least three ways to solve this problem. If dark energy behaves as quintessence at the early stage and evolves as a phantom at the later stage, a natural suggestion is to consider a two-field (quintom) model: a quintessence and a phantom. The next possibility, discussed in the next Chapter, is to consider an interacting model, in which dark energy interacts with dark matter. Yet another possibility is that General Relativity fails at cosmological scales. In this case quintessence or phantom energy can cross the phantom divide line in a modified gravity theory. We investigate this approach in Chapter 12.

### Problem 100 Show that at the point of transition between the quintessence and the phantom phases $\dot H$ vanishes.

### Problem 101 Show that the sound speed of a single perfect barotropic fluid diverges when $w$ crosses the phantom divide line.

### Problem 102 Find a dynamical law for the equation of state parameter $w=p/\rho$ in the barotropic cosmic fluid (see N.Caplar, H.Stefancic, arXiv:1208.0449).

### Problem 103 Using the results of the previous problem, find the functions $w(z)$, $\rho(z)$ and $p(z)$ for the simplest possibility $c_S=const$.

### Problem 104 Carry out the procedure described in problem [#DE117_11] for the case of a minimally coupled scalar field $\varphi$ with potential $V(\varphi)$ in a spatially flat Universe.

### Problem 105 Consider the case of a Universe filled with non-relativistic matter and quintessence and show that the condition to cross the phantom divide line $w=-1$ is equivalent to a sign change in the following expression $\frac{dH^2(z)}{dz}-3\Omega_{m0}H_0^2(1+z)^2.$

### Problem 106 Consider a model with the scale factor of the form $a=a_c\left(\frac{t}{t_s-t}\right)^n,$ where $a_c$ is a constant, $n>0$, and $t_s$ is the time of a Big Rip singularity. Show that on the interval $0<t<t_s$ there is a crossing of the phantom divide line $w=-1$.

### Problem 107 Show that for the model considered in the previous problem the parameter $H(t)$ and the density $\rho(t)$ achieve their minimal values at the phantom divide point. (see K.Bamba, S.Capozziello, S.Nojiri, S.Odintsov, arXiv:1205.3421)

### Problem 108 Find the condition of intersection with the line $w=-1$ for the quintom Lagrangian $L=\frac12g^{\mu\nu}\left(\frac{\partial\varphi}{\partial x^\mu}\frac{\partial\varphi}{\partial x^\nu} - \frac{\partial\psi}{\partial x^\mu}\frac{\partial\psi}{\partial x^\nu} \right)-W(\varphi,\psi).$
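A minimal sketch for Problem 108 (assuming homogeneous fields, so that $\rho=\frac12\dot\varphi^2-\frac12\dot\psi^2+W$ and $p=\frac12\dot\varphi^2-\frac12\dot\psi^2-W$):
$$1+w=\frac{\rho+p}{\rho}=\frac{\dot\varphi^2-\dot\psi^2}{\rho},$$
so the line $w=-1$ is crossed when $\dot\varphi^2-\dot\psi^2$ passes through zero with a change of sign (at $\rho>0$): the quintessence kinetic term dominates on one side of the crossing and the phantom one on the other.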
proofpile-shard-0030-315
{ "provenance": "003.jsonl.gz:316" }
## nthenic_oftime one year ago Use even and odd properties of the trigonometric functions to find the exact value of the expression. csc(-Pi/6) 1. anonymous $\csc \left( -\frac{ \pi }{ 6 } \right)=-\csc \frac{ \pi }{ 6 }=\frac{ -1 }{ \sin \frac{ \pi }{ 6 } }=\frac{ -1 }{ \frac{ 1 }{ 2 } }=-2$ 2. nthenic_oftime thank you :)
proofpile-shard-0030-316
{ "provenance": "003.jsonl.gz:317" }
# Synopsis: What lies beneath A carpet cloak hides a chunky object sitting on a mirror, reflecting light as though only the mirror were there. Carpet cloaks hide an object under a smooth bump that appears flat to the observer—at least over a range of wavelengths of light. In order to distort light so as to hide an object, a carpet cloak must be made out of an anisotropic material. So far, scientists have focused on using metamaterials—artificial structures built out of subwavelength elements—to construct such cloaks, but these designs rely on expensive nano- or microfabrication, have so far been limited to infrared wavelengths or longer, and aren't practical for hiding objects more than a millimeter in size. Now, Baile Zhang and colleagues at the Singapore-MIT Alliance for Research and Technology Center announce in a paper appearing in Physical Review Letters that they have fabricated a carpet cloak that hides macroscopic objects over a broad range of visible wavelengths. They make their carpet cloak out of calcite—a naturally anisotropic material—split into two crystallographic blocks with different orientations of their respective principal refractive indices. They place the cloak over a steel wedge, $2\ \text{mm}$ high and $38\ \text{mm}$ wide, which sits on a mirror, and illuminate it with a beam of green polarized light. As evidence that the cloak hides the wedge, they see the reflection of the beam from only the underlying mirror. Although Zhang et al.'s calcite cloak currently only works with polarized light and for a specific geometry, it is inexpensive, absorbs little light, and could ultimately hide larger objects. – Manolis Antonoyiannakis
proofpile-shard-0030-317
{ "provenance": "003.jsonl.gz:318" }
In the following question, select the option that is related to the third word/series in the same way as the second word is related to the first. $\text{AEZ : FPY : : BGX : ?}$ 1. $\text{GRW}$ 2. $\text{IYY}$ 3. $\text{HTX}$ 4. $\text{HYW}$
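A possible worked solution (treating letters as alphabet positions $A=1,\dots,Z=26$): in $\text{AEZ}\to\text{FPY}$ the letter-wise shifts are $A\to F\ (+5)$, $E\to P\ (+11)$, $Z\to Y\ (-1)$. Applying the same shifts to $\text{BGX}$: $B+5=G$, $G+11=R$, $X-1=W$, which gives $\text{GRW}$, i.e. option 1.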
proofpile-shard-0030-318
{ "provenance": "003.jsonl.gz:319" }
# Bell's theorem

Bell's theorem shows that the predictions of quantum mechanics (QM) are not intuitive, and touches upon fundamental philosophical issues that relate to modern physics. It is the most famous legacy of the late physicist John S. Bell. Bell's theorem is a no-go theorem, stating that no physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics. Einstein was critical of the standard interpretation of quantum mechanics. The EPR paper showed that the standard interpretation of quantum mechanics implies "spooky action-at-a-distance" and therefore is not a complete theory. Einstein wanted to get rid of the "action-at-a-distance" by introducing "local hidden variables." Einstein pursued this goal for the rest of his life, between 1935 and 1955, and even after his death the problem seemed worth the effort of many persons, mainly theorists and philosophers. But finally, Bell's theorem, published in 1964, proved once and for all that the problem could be decided by experiments: it is possible to construct experiments in which it is impossible for any kind of interpretation based on "local hidden variables" to give the same predictions as quantum mechanics, providing a means of testing whether "action-at-a-distance" actually occurs.

## Overview

As in the situation explored in the EPR paradox, Bell considered an experiment in which a source produces pairs of correlated particles. For example, a pair of particles with correlated spins is created; one particle is sent to Alice and the other to Bob. On each trial, each observer independently chooses between various detector settings and then performs an independent measurement on the particle. (Note: although the correlated property used here is the particles' spin, it could alternatively be any correlated "quantum state" that encodes exactly one quantum bit.) When Alice and Bob measure the spin of the particles along the same axis (but in opposite directions), they get identical results 100% of the time. But when Bob measures at orthogonal (right) angles to Alice's measurements, they get identical results only 50% of the time. In terms of mathematics, the two measurements have a correlation of 1, or "perfect" correlation when read the same way; when read at right angles, they have a correlation of 0: no correlation. (A correlation of -1 would indicate getting "opposite" results for each measurement.) So far, the results can be explained by positing local hidden variables — each pair of particles may have been sent out with instructions on how to behave when measured in the two axes (either '+' or '-' for each axis). Clearly, if the source only sends out particles whose instructions are identical for each axis, then when Alice and Bob measure on the same axis, they are bound to get identical results, either (+,+) or (-,-); but (if all four possible pairings of + and - instructions are generated equally) when they measure on perpendicular axes they will see zero correlation. Now, consider that Alice or Bob can rotate their apparatus relative to each other by any amount at any time before measuring the particles, even "after" the particles leave the source. If local hidden variables determine the outcome of such measurements, they must encode at the time of leaving the source a result for every possible eventual direction of measurement, not just for the results in one particular axis. Bob begins this experiment with his apparatus rotated by 45 degrees. We call Alice's axes $a$ and $a'$, and Bob's rotated axes $b$ and $b'$.
Alice and Bob then record the directions they measured the particles in, and the results they got. At the end, they will compare and tally up their results, scoring +1 for each time they got the "same" result and -1 for an "opposite" result - "except" that if Alice measured in $a$ and Bob measured in $b'$, they will score +1 for an "opposite" result and -1 for the "same" result. Using that scoring system, any possible combination of hidden variables would produce an expected average score of at most +0.5. (For example, the most correlated hidden-variable assignments have an average correlation of +0.5, i.e. are 75% identical. The unusual "scoring system" ensures that the maximum average expected correlation is +0.5 for any possible system that relies on local hidden variables.) Bell's Theorem shows that if the particles behave as predicted by quantum mechanics, Alice and Bob can score higher than the classical hidden variable prediction of +0.5 correlation; if the apparatuses are rotated at 45° to each other, quantum mechanics predicts that the expected average score is 0.71. (Quantum prediction in detail: when observations at an angle of $\theta$ are made on two entangled particles, the predicted correlation is $\cos\theta$. The correlation is equal to the length of the projection of the particle's vector onto the measurement vector; by trigonometry, $\cos\theta$. Here $\theta$ is 45°, and $\cos\theta=\frac{\sqrt{2}}{2}$, for all pairs of axes except $(a, b')$ – where the angle is 135° and the correlation is $-\frac{\sqrt{2}}{2}$ – but this last is taken as negative in the agreed scoring system, so the overall score is $\frac{\sqrt{2}}{2}\approx 0.707$. In one explanation, the particles behave as if when Alice or Bob makes a measurement, the other particle usually switches to take that direction instantaneously.) Multiple researchers have performed equivalent experiments using different methods. It appears most of these experiments produce results which agree with the predictions of quantum mechanics [http://plato.stanford.edu/entries/bell-theorem/#3], leading to disproof of local-hidden-variable theories and proof of nonlocality. Still not everyone agrees with these findings [http://arxiv.org/abs/quant-ph/9611037]. There have been two loopholes found in the earlier of these experiments, the [http://plato.stanford.edu/entries/bell-theorem/#4 detection loophole] and the [http://plato.stanford.edu/entries/bell-theorem/#5 communication loophole], with associated experiments designed to close them. After all current experimentation, these experiments seem to uphold prima facie support for quantum mechanics' predictions of nonlocality [http://plato.stanford.edu/entries/bell-theorem/#7].
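A small numerical illustration of the two scores quoted above (a sketch, not part of the original article: it enumerates every deterministic local-hidden-variable strategy, in which each particle pair carries fixed ±1 answers for both of each observer's settings, and compares the best achievable average score with the quantum prediction for the 45° geometry):

```python
import itertools
import numpy as np

# Best average score over all deterministic hidden-variable strategies.
# Each trial uses one of the four setting pairs with equal probability;
# per the scoring rule above, the (a, b') pair counts with opposite sign.
best_classical = max(
    (a * b + ap * b + ap * bp - a * bp) / 4
    for a, ap, b, bp in itertools.product([1, -1], repeat=4)
)
print(best_classical)  # 0.5 -- the local-hidden-variable limit

# Quantum prediction for the same settings: each correlation is cos(45deg),
# except the (a, b') pair at 135 degrees, which the scoring rule negates.
quantum = (3 * np.cos(np.pi / 4) - np.cos(3 * np.pi / 4)) / 4
print(quantum)  # ~0.707
```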
## Importance of the theorem

This theorem has even been called "the most profound in science" [Stapp, 1975]. Bell's seminal 1964 paper was entitled "On the Einstein Podolsky Rosen paradox". [http://www.drchinese.com/David/Bell_Compact.pdf J. S. Bell, "On the Einstein Podolsky Rosen Paradox", Physics 1, 195 (1964)] The Einstein Podolsky Rosen paradox (EPR paradox) proves, on the basis of the assumptions of "locality" (physical effects have a finite propagation speed) and "reality" (physical states exist before they are measured), that particle attributes have definite values independent of the act of observation. Bell showed that local realism leads to a requirement for certain types of phenomena that are not present in quantum mechanics. This requirement is called Bell's inequality. After EPR (Einstein-Podolsky-Rosen), quantum mechanics was left in an unsatisfactory position: either it was incomplete, in the sense that it failed to account for some elements of physical reality, or it violated the principle of finite propagation speed of physical effects. In a modified version of the EPR thought experiment, two observers, now commonly referred to as "Alice" and "Bob", perform independent measurements of spin on a pair of electrons, prepared at a source in a special state called a "spin singlet state". It was equivalent to the conclusion of EPR that once Alice measured spin in one direction (e.g. on the $x$ axis), Bob's measurement in that direction was determined with certainty, with opposite outcome to that of Alice, whereas immediately before Alice's measurement, Bob's outcome was only statistically determined. Thus, either the spin in each direction is an element of physical reality, or the effects travel from Alice to Bob instantly. In QM, predictions were formulated in terms of probabilities — for example, the probability that an electron might be detected in a particular region of space, or the probability that it would have spin up or down. The idea persisted, however, that the electron in fact has a definite position and spin, and that QM's weakness was its inability to predict those values precisely. The possibility remained that some yet unknown, but more powerful theory, such as a "hidden variables theory", might be able to predict those quantities exactly, while at the same time also being in complete agreement with the probabilistic answers given by QM. If a "hidden variables theory" were correct, the hidden variables were not described by QM, and thus QM would be an incomplete theory. The desire for a "local realist theory" was based on two assumptions: 1. Objects have a definite state that determines the values of all other measurable properties, such as position and momentum. 2. Effects of local actions, such as measurements, cannot travel faster than the speed of light (as a result of special relativity); if the observers are sufficiently far apart, a measurement taken by one has no effect on the measurement taken by the other. In the formalization of local realism used by Bell, the predictions of the theory result from the application of classical probability theory to an underlying parameter space. By a simple (but clever) argument based on classical probability, he then showed that correlations between measurements are bounded in a way that is violated by QM. Bell's theorem seemed to put an end to local realist hopes for QM. Per Bell's theorem, either quantum mechanics or local realism is wrong. Experiments were needed to determine which is correct, but it took many years and many improvements in technology to perform them. Bell test experiments to date overwhelmingly show that Bell inequalities are violated. These results provide empirical evidence against local realism and in favor of QM. The no-communication theorem proves that the observers cannot use the inequality violations to communicate information to each other faster than the speed of light. John Bell's paper examines both John von Neumann's 1932 proof of the incompatibility of hidden variables with QM, and Albert Einstein and his colleagues' seminal 1935 paper on the subject.

## Bell inequalities

Bell inequalities concern measurements made by observers on pairs of particles that have interacted and then separated.
According to quantum mechanics they are entangled, while local realism limits the correlation of subsequent measurements of the particles. Different authors subsequently derived inequalities similar to Bell's original inequality, collectively termed "Bell inequalities". All Bell inequalities describe experiments in which the predicted result assuming entanglement differs from that following from local realism. The inequalities assume that each quantum-level object has a well defined state that accounts for all its measurable properties and that distant objects do not exchange information faster than the speed of light. These well defined states are often called "hidden variables", the properties that Einstein posited when he stated his famous objection to quantum mechanics: "God does not play dice." Bell showed that under quantum mechanics, which lacks local hidden variables, the inequalities (the correlation limit) may be violated. Instead, in quantum mechanics the properties of a particle are not definite prior to measurement but may be correlated with those of another particle due to quantum entanglement, allowing their state to be well defined only after a measurement is made on either particle. That restriction agrees with the Heisenberg uncertainty principle, a fundamental and inescapable concept in quantum mechanics. In Bell's words: "Theoretical physicists live in a classical world, looking out into a quantum-mechanical world. The latter we describe only subjectively, in terms of procedures and results in our classical domain. (...) Now nobody knows just where the boundary between the classical and the quantum domain is situated. (...) More plausible to me is that we will find that there is no boundary. The wave functions would prove to be a provisional or incomplete description of the quantum-mechanical part. It is this possibility, of a homogeneous account of the world, which is for me the chief motivation of the study of the so-called "hidden variable" possibility. (...) A second motivation is connected with the statistical character of quantum-mechanical predictions. Once the incompleteness of the wave function description is suspected, it can be conjectured that random statistical fluctuations are determined by the extra "hidden" variables -- "hidden" because at this stage we can only conjecture their existence and certainly cannot control them. (...) A third motivation is in the peculiar character of some quantum-mechanical predictions, which seem almost to cry out for a hidden variable interpretation. This is the famous argument of Einstein, Podolsky and Rosen. (...) We will find, in fact, that no local deterministic hidden-variable theory can reproduce all the experimental predictions of quantum mechanics. This opens the possibility of bringing the question into the experimental domain, by trying to approximate as well as possible the idealized situations in which local hidden variables and quantum mechanics cannot agree." In probability theory, repeated measurements of system properties can be regarded as repeated sampling of random variables. In Bell's experiment, Alice can choose a detector setting to measure either $A(a)$ or $A(a')$ and Bob can choose a detector setting to measure either $B(b)$ or $B(b')$.
Measurements of Alice and Bob may be somehow correlated with each other, but the Bell inequalities say that if the correlation stems from local random variables, there is a limit to the amount of correlation one might expect to see.

### Original Bell's inequality

The original inequality that Bell derived was
$$1+\operatorname{C}(b,c)\geq|\operatorname{C}(a,b)-\operatorname{C}(a,c)|,$$
where $C$ is the "correlation" of the particle pairs and $a$, $b$ and $c$ are settings of the apparatus. This inequality is not used in practice. For one thing, it is true only for genuinely "two-outcome" systems, not for the "three-outcome" ones (with possible outcomes of zero as well as +1 and -1) encountered in real experiments. For another, it applies only to a very restricted set of hidden variable theories, namely those for which the outcomes on both sides of the experiment are always exactly anticorrelated when the analysers are parallel, in agreement with the quantum mechanical prediction. There is a simple limit of Bell's inequality which has the virtue of being completely intuitive. If the results of three different statistical coin-flips A, B, C have the property that (1) A and B are the same (both heads or both tails) 99% of the time, and (2) B and C are the same 99% of the time, then A and C are the same at least 98% of the time. The number of mismatches between A and B (1/100) plus the number of mismatches between B and C (1/100) are together the maximum possible number of mismatches between A and C. In quantum mechanics, by letting A, B, C be the values of the spin of two entangled particles measured relative to some axis at 0 degrees, $\theta$ degrees, and $2\theta$ degrees respectively, the overlap of the wavefunction between the different angles is proportional to $\cos(S\theta)\approx 1-S^2\theta^2/2$. The probability that A and B give the same answer is $1-\epsilon^2$, where $\epsilon$ is proportional to $\theta$. This is also the probability that B and C give the same answer. But A and C are the same $1-(2\epsilon)^2$ of the time. Choosing the angle so that $\epsilon=0.1$, A and B are 99% correlated, B and C are 99% correlated and A and C are only 96% correlated. Imagine that two entangled particles in a spin singlet are shot out to two distant locations, and the spins of both are measured in the direction A. The spins are 100% correlated (actually, anti-correlated, but for this argument that is equivalent). The same is true if both spins are measured in directions B or C. It is safe to conclude that any hidden variables which determine the A, B, and C measurements in the two particles are 100% correlated and can be used interchangeably. If A is measured on one particle and B on the other, the correlation between them is 99%. If B is measured on one and C on the other, the correlation is 99%. This allows us to conclude that the hidden variables determining A and B are 99% correlated and B and C are 99% correlated. But if A is measured on one particle and C on the other, the results are only 96% correlated, which is a contradiction. The intuitive formulation is due to David Mermin, while the small-angle limit is emphasized in Bell's original article.
Holt, "Proposed experiment to test local hidden-variable theories", Physical Review Letters 23, 880-884 (1969)] (the CHSH form) is especially important, as it gives classical limits to the expected correlation for the above experiment conducted by Alice and Bob::$\left(1\right) quad mathbf\left\{C\right\} \left[A\left(a\right), B\left(b\right)\right] + mathbf\left\{C\right\} \left[A\left(a\right), B\left(b\text{'}\right)\right] + mathbf\left\{C\right\} \left[A\left(a\text{'}\right), B\left(b\right)\right] - mathbf\left\{C\right\} \left[A\left(a\text{'}\right), B\left(b\text{'}\right)\right] leq 2,$where C denotes correlation. Correlation of observables "X", "Y" is defined as:$mathbf\left\{C\right\}\left(X,Y\right) = operatorname\left\{E\right\}\left(X Y\right).$This is a non-normalized form of the correlation coefficient considered in statistics (see Quantum correlation). In order to formulate Bell's theorem, we formalize local realism as follows: # There is a probability space $Lambda$ and the observed outcomes by both Alice and Bob result by random sampling of the parameter $lambda in Lambda$. # The values observed by Alice or Bob are functions of the local detector settings and the hidden parameter only. Thus ::*Value observed by Alice with detector setting "a" is $A\left(a,lambda\right)$::*Value observed by Bob with detector setting "b" is $B\left(b,lambda\right)$ Implicit in assumption 1) above, the hidden parameter space $Lambda$ has a probability measure $ho$ and the expectation of a random variable "X" on $Lambda$ with respect to $ho$ is written :$operatorname\left\{E\right\}\left(X\right) = int_Lambda X\left(lambda\right) ho\left(lambda\right) d lambda$ where for accessibility of notation we assume that the probabilitymeasure has a density. Bell's inequality. The CHSH inequality (1) holds under the hidden variables assumptions above. For simplicity, let us first assume the observed values are +1 or −1; we remove this assumption in Remark 1 below. Let $lambda in Lambda$. Then at least one of :$B\left(b, lambda\right) + B\left(b\text{'}, lambda\right), quad B\left(b, lambda\right) - B\left(b\text{'}, lambda\right)$ is 0. Thus :$A\left(a, lambda\right) B\left(b, lambda\right) + A\left(a, lambda\right) B\left(b\text{'}, lambda\right) +A\left(a\text{'}, lambda\right) B\left(b, lambda\right) - A\left(a\text{'}, lambda\right) B\left(b\text{'}, lambda\right) =$ : $= A\left(a, lambda\right) \left(B\left(b, lambda\right) + B\left(b\text{'}, lambda\right)\right)+ A\left(a\text{'}, lambda\right) \left(B\left(b, lambda\right) - B\left(b\text{'}, lambda\right)\right) quad$ :$leq 2.$ and therefore :$mathbf\left\{C\right\}\left(A\left(a\right), B\left(b\right)\right) + mathbf\left\{C\right\}\left(A\left(a\right), B\left(b\text{'}\right)\right) + mathbf\left\{C\right\}\left(A\left(a\text{'}\right), B\left(b\right)\right) - mathbf\left\{C\right\}\left(A\left(a\text{'}\right), B\left(b\text{'}\right)\right) =$ :$= int_Lambda A\left(a, lambda\right) B\left(b, lambda\right) ho\left(lambda\right) d lambda + int_Lambda A\left(a, lambda\right) B\left(b\text{'}, lambda\right) ho\left(lambda\right) d lambda + int_Lambda A\left(a\text{'}, lambda\right) B\left(b, lambda\right) ho\left(lambda\right) d lambda - int_Lambda A\left(a\text{'}, lambda\right) B\left(b\text{'}, lambda\right) ho\left(lambda\right) d lambda =$ : : :$leq 2.$ Remark 1. The correlation inequality (1) still holds if thevariables $A\left(a,lambda\right)$, $B\left(b,lambda\right)$ are allowed totake on any real values between -1, +1. 
Remark 1. The correlation inequality (1) still holds if the variables $A(a,\lambda)$, $B(b,\lambda)$ are allowed to take on any real values between -1 and +1. Indeed, the relevant idea is that each summand in the above average is bounded above by 2. This is easily seen to be true in the more general case:
$$A(a,\lambda)B(b,\lambda)+A(a,\lambda)B(b',\lambda)+A(a',\lambda)B(b,\lambda)-A(a',\lambda)B(b',\lambda)=$$
$$=A(a,\lambda)\bigl(B(b,\lambda)+B(b',\lambda)\bigr)+A(a',\lambda)\bigl(B(b,\lambda)-B(b',\lambda)\bigr)\leq$$
$$\leq\bigl|B(b,\lambda)+B(b',\lambda)\bigr|+\bigl|B(b,\lambda)-B(b',\lambda)\bigr|\leq 2.$$
To justify the upper bound 2 asserted in the last inequality, without loss of generality we can assume that
$$B(b,\lambda)\geq B(b',\lambda)\geq 0.$$
In that case
$$\bigl|B(b,\lambda)+B(b',\lambda)\bigr|+\bigl|B(b,\lambda)-B(b',\lambda)\bigr|=B(b,\lambda)+B(b',\lambda)+B(b,\lambda)-B(b',\lambda)=2B(b,\lambda)\leq 2.$$
Remark 2. Though the important component of the hidden parameter $\lambda$ in Bell's original proof is associated with the source and is shared by Alice and Bob, there may be others that are associated with the separate detectors, these others being independent. This argument was used by Bell in 1971, and again by Clauser and Horne in 1974 [J. F. Clauser and M. A. Horne, "Experimental consequences of objective local theories", Physical Review D, 10, 526-35 (1974)], to justify a generalisation of the theorem forced on them by the real experiments, in which detectors were never 100% efficient. The derivations were given in terms of the "averages" of the outcomes over the local detector variables. The formalisation of local realism was thus effectively changed, replacing A and B by averages and retaining the symbol $\lambda$ but with a slightly different meaning. It was henceforth restricted (in most theoretical work) to mean only those components that were associated with the source. However, with the extension proved in Remark 1, the CHSH inequality still holds even if the instruments themselves contain hidden variables. In that case, averaging over the instrument hidden variables gives new variables
$$\overline{A}(a,\lambda),\qquad \overline{B}(b,\lambda)$$
on $\Lambda$, which still have values in the range [-1, +1], so we can apply the previous result.

## Bell inequalities are violated by quantum mechanical predictions

In the usual quantum mechanical formalism, the observables $X$ and $Y$ are represented as self-adjoint operators on a Hilbert space. To compute the correlation, assume that $X$ and $Y$ are represented by matrices in a finite dimensional space and that $X$ and $Y$ commute; this special case suffices for our purposes below. The von Neumann measurement postulate states: a series of measurements of an observable $X$ on a series of identical systems in state $\phi$ produces a distribution of real values. By the assumption that observables are finite matrices, this distribution is discrete. The probability of observing $\lambda$ is non-zero if and only if $\lambda$ is an eigenvalue of the matrix $X$, and moreover the probability is
$$|\operatorname{E}_X(\lambda)\,\phi|^2,$$
where $\operatorname{E}_X(\lambda)$ is the projector corresponding to the eigenvalue $\lambda$.
The system state immediately after the measurement is
$$|\operatorname{E}_X(\lambda)\,\phi|^{-1}\operatorname{E}_X(\lambda)\,\phi.$$
From this, we can show that the correlation of commuting observables $X$ and $Y$ in a pure state $\psi$ is
$$\langle XY\rangle=\langle XY\psi\mid\psi\rangle.$$
We apply this fact in the context of the EPR paradox. The measurements performed by Alice and Bob are spin measurements on electrons. Alice can choose between two detector settings labelled $a$ and $a'$; these settings correspond to measurement of spin along the $z$ or the $x$ axis. Bob can choose between two detector settings labelled $b$ and $b'$; these correspond to measurement of spin along the $z'$ or $x'$ axis, where the $x'$-$z'$ coordinate system is rotated 45° relative to the $x$-$z$ coordinate system. The spin observables are represented by the 2 × 2 self-adjoint matrices
$$S_x=\begin{pmatrix}0&1\\1&0\end{pmatrix},\qquad S_z=\begin{pmatrix}1&0\\0&-1\end{pmatrix}.$$
These are the Pauli spin matrices, normalized so that the corresponding eigenvalues are +1, -1. As is customary, we denote the eigenvectors of $S_x$ by
$$\left|+x\right\rangle,\qquad\left|-x\right\rangle.$$
Let $\phi$ be the spin singlet state for a pair of electrons discussed in the EPR paradox. This is a specially constructed state described by the following vector in the tensor product:
$$\phi=\frac{1}{\sqrt2}\Bigl(\left|+x\right\rangle\otimes\left|-x\right\rangle-\left|-x\right\rangle\otimes\left|+x\right\rangle\Bigr).$$
Now let us apply the CHSH formalism to the measurements that can be performed by Alice and Bob:
$$A(a)=S_z\otimes I,\qquad A(a')=S_x\otimes I,$$
$$B(b)=-\frac{1}{\sqrt2}\,I\otimes(S_z+S_x),\qquad B(b')=\frac{1}{\sqrt2}\,I\otimes(S_z-S_x).$$
The operators $B(b')$, $B(b)$ correspond to Bob's spin measurements along $x'$ and $z'$. Note that the $A$ operators commute with the $B$ operators, so we can apply our calculation for the correlation. In this case, we can show that the CHSH inequality fails. In fact, a straightforward calculation shows that
$$\langle A(a)B(b)\rangle=\langle A(a')B(b)\rangle=\langle A(a')B(b')\rangle=\frac{1}{\sqrt2},$$
and
$$\langle A(a)B(b')\rangle=-\frac{1}{\sqrt2},$$
so that
$$\langle A(a)B(b)\rangle+\langle A(a')B(b')\rangle+\langle A(a')B(b)\rangle-\langle A(a)B(b')\rangle=\frac{4}{\sqrt2}=2\sqrt2>2.$$
Bell's Theorem: If the quantum mechanical formalism is correct, then the system consisting of a pair of entangled electrons cannot satisfy the principle of local realism. Note that $2\sqrt2$ is indeed the upper bound for quantum mechanics, called Tsirelson's bound. The operators giving this maximal value are always isomorphic to the Pauli matrices.
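A short numerical check of this calculation (a sketch using NumPy, not part of the original article; it builds the operators above in the $z$ basis, which is sufficient because the singlet state has the same form in any basis, and evaluates the four correlations):

```python
import numpy as np

# Pauli spin matrices (eigenvalues +1, -1) and the 2x2 identity
I2 = np.eye(2)
Sx = np.array([[0., 1.], [1., 0.]])
Sz = np.array([[1., 0.], [0., -1.]])

# Spin singlet state, written in the z basis: (|+z,-z> - |-z,+z>)/sqrt(2)
phi = np.array([0., 1., -1., 0.]) / np.sqrt(2)

# Alice's and Bob's observables exactly as in the text
A_a  = np.kron(Sz, I2)                      # A(a)
A_a1 = np.kron(Sx, I2)                      # A(a')
B_b  = -np.kron(I2, Sz + Sx) / np.sqrt(2)   # B(b)
B_b1 =  np.kron(I2, Sz - Sx) / np.sqrt(2)   # B(b')

def corr(X, Y):
    """Correlation <X Y> in the pure state phi (X and Y commute here)."""
    return phi @ X @ Y @ phi

chsh = corr(A_a, B_b) + corr(A_a1, B_b1) + corr(A_a1, B_b) - corr(A_a, B_b1)
print(chsh, 2 * np.sqrt(2))  # both ~2.8284: the classical bound 2 is violated
```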
## Practical experiments testing Bell's theorem

Experimental tests can determine whether the Bell inequalities required by local realism hold up to the empirical evidence. Bell's inequalities are tested by "coincidence counts" from a Bell test experiment, such as an optical one: pairs of particles are emitted as a result of a quantum process, analysed with respect to some key property such as polarisation direction, then detected. The settings (orientations) of the analysers are selected by the experimenter. The results of Bell test experiments to date overwhelmingly violate Bell's inequality. Indeed, a table of Bell test experiments performed prior to 1986 is given in section 4.5 of Redhead, 1987 [M. Redhead, "Incompleteness, Nonlocality and Realism", Clarendon Press (1987)]. Of the thirteen experiments listed, only two reached results contradictory to quantum mechanics; moreover, according to the same source, when the experiments were repeated, "the discrepancies with QM could not be reproduced". Nevertheless, the issue is not conclusively settled; the state of the art is surveyed in Shimony's 2004 Stanford Encyclopedia overview article [Article on "Bell's Theorem" by Abner Shimony in the Stanford Encyclopedia of Philosophy, (2004), http://plato.stanford.edu/entries/bell-theorem]. To explore the 'detection loophole', one must distinguish the classes of homogeneous and inhomogeneous Bell inequalities. The standard assumption in Quantum Optics is that "all photons of given frequency, direction and polarization are identical", so that photodetectors treat all incident photons on an equal basis. Such a "fair sampling" assumption generally goes unacknowledged, yet it effectively limits the range of local theories to those which conceive of the light field as corpuscular. The assumption excludes a large family of local realist theories, in particular, Max Planck's description. We must remember the cautionary words of Albert Einstein [A. Einstein in "Correspondance Einstein-Besso", p.265 (Herman, Paris, 1979)] shortly before he died: "Nowadays every Tom, Dick and Harry ('jeder Kerl' in the German original) thinks he knows what a photon is, but he is mistaken". Objective physical properties for Bell's analysis ("local realist theories") include the wave amplitude of a light signal. Those who maintain the concept of duality, or simply of light being a wave, recognize the possibility or actuality that the emitted atomic light signals have a range of amplitudes and, furthermore, that the amplitudes are modified when the signal passes through analyzing devices such as polarizers and beam splitters. It follows that not all signals have the same detection probability (Marshall and Santos 2002 [http://www.crisisinphysics.co.uk/optrev.pdf]).

## Two classes of Bell inequalities

The "fair sampling" problem was faced openly in the 1970s. In early designs of their 1973 experiment, Freedman and Clauser [S. J. Freedman and J. F. Clauser, "Experimental test of local hidden-variable theories", Phys. Rev. Lett. 28, 938 (1972)] used "fair sampling" in the form of the Clauser-Horne-Shimony-Holt (CHSH) hypothesis. However, shortly afterwards Clauser and Horne made the important distinction between inhomogeneous (IBI) and homogeneous (HBI) Bell inequalities. Testing an IBI requires that we compare certain coincidence rates in two separated detectors with the singles rates of the two detectors. Nobody needed to perform the experiment, because singles rates with all detectors in the 1970s were at least ten times all the coincidence rates. So, taking into account this low detector efficiency, the QM prediction actually satisfied the IBI. To arrive at an experimental design in which the QM prediction violates IBI we require detectors whose efficiency exceeds 82% for singlet states, but which have very low dark rate and short dead and resolving times.
This required efficiency is well above the 30% achievable (Brida et al. 2006 [http://arxiv.org/abs/quant-ph/0612075v1]), so Shimony's optimism in the Stanford Encyclopedia, cited in the preceding section, appears over-stated.

## Practical challenges

Because detectors don't detect a large fraction of all photons, Clauser and Horne recognized that testing Bell's inequality requires some extra assumptions. They introduced the "No Enhancement Hypothesis" (NEH): a light signal, originating in an atomic cascade for example, has a certain probability of activating a detector. Then, if a polarizer is interposed between the cascade and the detector, the detection probability cannot increase. Given this assumption, there is a Bell inequality between the coincidence rates with polarizers and coincidence rates without polarizers. The experiment was performed by Freedman and Clauser, who found that Bell's inequality was violated. So the no-enhancement hypothesis cannot be true in a local hidden variables model. The Freedman-Clauser experiment reveals that local hidden variables imply the new phenomenon of signal enhancement: in the total set of signals from an atomic cascade there is a subset whose detection probability increases as a result of passing through a linear polarizer. This is perhaps not surprising, as it is known that adding noise to data can, in the presence of a threshold, help reveal hidden signals (this property is known as stochastic resonance [http://prola.aps.org/abstract/RMP/v70/i1/p223_1]). One cannot conclude that this is the only local-realist alternative to Quantum Optics, but it does show that the word "loophole" is biased. Moreover, the analysis leads us to recognize that the Bell-inequality experiments, rather than showing a breakdown of realism or locality, are capable of revealing important new phenomena.

## Theoretical challenges

Some advocates of the hidden variables idea believe that experiments have ruled out local hidden variables. They are ready to give up locality, explaining the violation of Bell's inequality by means of a "non-local" hidden variable theory, in which the particles exchange information about their states. This is the basis of the Bohm interpretation of quantum mechanics, which requires that all particles in the universe be able to instantaneously exchange information with all others. A recent experiment ruled out a large class of non-Bohmian "non-local" hidden variable theories [http://www.nature.com/nature/journal/v446/n7138/abs/nature05677.html]. If the hidden variables can communicate with each other faster than light, Bell's inequality can easily be violated. Once one particle is measured, it can communicate the necessary correlations to the other particle. Since in relativity the notion of simultaneity is not absolute, this is unattractive. One idea is to replace instantaneous communication with a process which travels backwards in time along the past light cone. This is the idea behind a transactional interpretation of quantum mechanics, which interprets the statistical emergence of a quantum history as a gradual coming to agreement between histories that go both forward and backward in time [Cramer, John G. "The Transactional Interpretation of Quantum Mechanics", Reviews of Modern Physics 58, 647-688, July 1986].
Recent controversial work by Joy Christian [J. Christian, "Disproof of Bell's Theorem by Clifford Algebra Valued Local Variables" (2007), http://arxiv.org/abs/quant-ph/0703179] claims that a deterministic, arguably local, and realistic theory can violate Bell's inequalities if the observables are chosen to be non-commuting numbers rather than the commuting numbers Bell had assumed. Christian claims that in this way the statistical predictions of quantum mechanics can be exactly reproduced. The controversy around his work concerns his noncommutative averaging procedure, in which the averages of products of variables at distant sites depend on the order in which they appear in an averaging integral. To many, this looks like nonlocal correlations, although Christian defines locality so that this type of thing is allowed [J. Christian, "Disproof of Bell's Theorem: Further Consolidations" (2007), http://arxiv.org/abs/0707.1333] [J. Christian, "Can Bell's Prescription for Physical Reality Be Considered Complete?" (2008), http://arxiv.org/abs/0806.3078].

In his work, Christian builds up a classical-mechanics view of the Bell experiment that respects the rotational entanglement of physical reality. This property is included in the QM view by construction, since it manifests itself clearly in the spin of particles, but it is not usually taken into account in the classical realm. Upon building this classical view, Christian suggests that it is, in essence, this property of reality that produces the increased values in Bell-type experiments, so that a local, realistic theory can be constructed. Moreover, Christian suggests that an entirely macroscopic experiment, consisting of thousands of metal spheres, could recreate the results of the usual experiments.

The quantum mechanical wavefunction can also provide a local realistic description, if the wavefunction values are interpreted as the fundamental quantities that describe reality. Such an approach is called a many-worlds interpretation of quantum mechanics. In this controversial view, two distant observers both split into superpositions when measuring a spin. The Bell inequality violations are no longer counterintuitive, because it is not clear which copy of observer B a given copy of observer A will meet when they compare notes. If reality includes all the different outcomes, locality in physical space (not outcome space) places no restrictions on how the split observers can meet up. This implies that there is a subtle assumption in the argument that realism is incompatible with quantum mechanics and locality. The assumption, in its weakest form, is called counterfactual definiteness: if the results of an experiment are always observed to be definite, there is a quantity which determines what the outcome would have been even if you had not done the experiment. Many-worlds interpretations are not only counterfactually indefinite, they are factually indefinite: the results of all experiments, even ones that have been performed, are not uniquely determined.

Final remarks

The phenomenon of quantum entanglement that is behind the violation of Bell's inequality is just one element of quantum physics which cannot be represented by any classical picture of physics; other non-classical elements are complementarity and wavefunction collapse. The problem of interpretation of quantum mechanics is intended to provide a satisfactory picture of these non-classical elements of quantum physics.
The EPR paper "pinpointed" the unusual properties of the "entangled states", e.g. the above-mentioned singlet state, which is the foundation for present-day applications of quantum physics, such as quantum cryptography. This strange non-locality was originally supposed to be a reductio ad absurdum, because the standard interpretation could easily do away with action-at-a-distance by simply assigning to each particle definite spin-states. Bell's theorem showed that the "entangledness" predictions of quantum mechanics have a degree of non-locality that cannot be explained away by any local theory. In well-defined "Bell experiments" (see the paragraph on "test experiments") one can now falsify either quantum mechanics or Einstein's quasi-classical assumptions: many experiments of this kind have now been performed, and the experimental results support quantum mechanics, though some believe that detectors give a biased sample of photons, so that until nearly every photon pair generated is observed there will be loopholes.

What makes Bell's theorem uniquely powerful, and has marked it as one of the most important advances in science, is that it does not depend on the details of any particular physical theory: it relies only on the general properties of quantum mechanics. No physical theory that assumes a deterministic variable inside each particle determining the outcome can account for the experimental results, given only that this variable cannot acausally change other variables far away.

See also
* Bell test experiments
* CHSH Bell test
* Clauser and Horne's 1974 Bell test
* Counterfactual definiteness
* Leggett's inequality
* Local hidden variable theory
* Mott problem
* Quantum entanglement
* Quantum mechanical Bell test prediction
* Measurement in quantum mechanics
* Renninger negative-result experiment
* GHZ State

Further reading

The following are intended for general audiences.
* Amir D. Aczel, "Entanglement: The greatest mystery in physics" (Four Walls Eight Windows, New York, 2001)
* A. Afriat and F. Selleri, "The Einstein, Podolsky and Rosen Paradox" (Plenum Press, New York and London, 1999)
* J. Baggott, "The Meaning of Quantum Theory" (Oxford University Press, 1992)
* N. David Mermin, "Is the moon there when nobody looks? Reality and the quantum theory", in "Physics Today", April 1985, pp. 38-47
* Brian Greene, "The Fabric of the Cosmos" (Vintage, 2004, ISBN 0-375-72720-5)
* Nick Herbert, "Quantum Reality: Beyond the New Physics" (Anchor, 1987, ISBN 0-385-23569-0)
* D. Wick, "The infamous boundary: seven decades of controversy in quantum physics" (Birkhauser, Boston, 1995)
* R. Anton Wilson, "Prometheus Rising" (New Falcon Publications, 1997, ISBN 1-56184-056-4)
* Gary Zukav, "The Dancing Wu Li Masters" (Perennial Classics, 2001, ISBN 0-06-095968-1)

Notes

References
* A. Aspect et al., "Experimental Tests of Realistic Local Theories via Bell's Theorem", Phys. Rev. Lett. 47, 460 (1981)
* A. Aspect et al., "Experimental Realization of Einstein-Podolsky-Rosen-Bohm Gedankenexperiment: A New Violation of Bell's Inequalities", Phys. Rev. Lett. 49, 91 (1982)
* A. Aspect et al., "Experimental Test of Bell's Inequalities Using Time-Varying Analyzers", Phys. Rev. Lett. 49, 1804 (1982)
* A. Aspect and P. Grangier, "About resonant scattering and other hypothetical effects in the Orsay atomic-cascade experiment tests of Bell inequalities: a discussion and some new experimental data", Lettere al Nuovo Cimento 43, 345 (1985)
* J. S. Bell, "On the problem of hidden variables in quantum mechanics", Rev. Mod. Phys. 38, 447 (1966)
Bell, "Introduction to the hidden variable question", Proceedings of the International School of Physics 'Enrico Fermi', Course IL, Foundations of Quantum Mechanics (1971) 171-81 * J. S. Bell, "Bertlmann’s socks and the nature of reality", Journal de Physique, Colloque C2, suppl. au numero 3, Tome 42 (1981) pp C2 41-61 * J. S. Bell, "Speakable and Unspeakable in Quantum Mechanics" (Cambridge University Press 1987) [A collection of Bell's papers, including all of the above.] * J. F. Clauser and A. Shimony, "Bell's theorem: experimental tests and implications", Reports on Progress in Physics 41, 1881 (1978) * J. F. Clauser and M. A. Horne, Phys. Rev D 10, 526-535 (1974) * E. S. Fry, T. Walther and S. Li, "Proposal for a loophole-free test of the Bell inequalities", Phys. Rev. A 52, 4381 (1995) * E. S. Fry, and T. Walther, "Atom based tests of the Bell Inequalities - the legacy of John Bell continues", pp 103-117 of "Quantum [Un] speakables", R.A. Bertlmann and A. Zeilinger (eds.) (Springer, Berlin-Heidelberg-New York, 2002) * R. B. Griffiths, "Consistent Quantum Theory"', Cambridge University Press (2002). * L. Hardy, "Nonlocality for 2 particles without inequalities for almost all entangled states". Physical Review Letters 71 (11) 1665-1668 (1993) * M. A. Nielsen and I. L. Chuang, "Quantum Computation and Quantum Information", Cambridge University Press (2000) * P. Pearle, "Hidden-Variable Example Based upon Data Rejection", Physical Review D 2, 1418-25 (1970) * A. Peres, "Quantum Theory: Concepts and Methods", Kluwer, Dordrecht, 1993. * P. Pluch, "Theory of Quantum Probability", PhD Thesis, University of Klagenfurt, 2006. * B. C. van Frassen, "Quantum Mechanics", Clarendon Press, 1991. * M.A. Rowe, D. Kielpinski, V. Meyer, C.A. Sackett, W.M. Itano, C. Monroe, and D.J. Wineland, "Experimental violation of Bell's inequalities with efficient detection",(Nature, 409, 791-794, 2001). * S. Sulcs, "The Nature of Light and Twentieth Century Experimental Physics", Foundations of Science 8, 365–391 (2003) * S. Gröblacher et al., "An experimental test of non-local realism",(Nature, 446, 871-875, 2007). * [http://www.ncsu.edu/felder-public/kenny/papers/bell.html An explanation of Bell's Theorem] , based on N. D. Mermin's article, "Bringing Home the Atomic World: Quantum Mysteries for Anybody," Am. J. of Phys. 49 (10), 940 (October 1981) * [http://www.ipod.org.uk/reality/reality_entangled.asp Quantum Entanglement] Includes a simple explanation of Bell's Inequality. * [http://xstructure.inr.ac.ru/x-bin/theme3.py?level=2&index1=369244 Bell's theorem on arxiv.org] Wikimedia Foundation. 2010. ### Look at other dictionaries: • Bell's theorem — Bell s Interconnectedness theorem, proved by the physicist John Bell in 1964, asserts that no local model of reality can do justice to the facts of quantum behaviour. A local model of reality is one in which all causal connections propagate by… …   Philosophy dictionary • bell's theorem — noun Usage: usually capitalized B Etymology: after John Stewart Bell died 1990 Irish physicist : a theorem in quantum physics: two particles that have interacted will continue to influence each other instantaneously following separation …   Useful english dictionary • Bell-Ungleichungen — Die Bellsche Ungleichung ist eine Schranke an Mittelwerte von Messwerten, die 1964 von John Bell angegeben wurde. 
proofpile-shard-0030-319
{ "provenance": "003.jsonl.gz:320" }
# Cylinder container

The cylindrical container with a diameter of 1.8 m contains 2,000 liters of water. How high does the water reach?

Correct result: h = 0.786 m

#### Solution:

$D=1.8 \ \text{m} \ \\ V=2000 \ \text{l} = 2000/1000 \ \text{m}^3 = 2 \ \text{m}^3 \ \\ \ \\ r=D/2=1.8/2=0.9 \ \text{m} \ \\ S=\pi r^2=3.1416 \cdot 0.9^2 \doteq 2.5447 \ \text{m}^2 \ \\ \ \\ V=S h \ \\ \ \\ h=V/S=2/2.5447 \doteq 0.786 \ \text{m}$

## Next similar math problems:

• Liters in cylinder: Determine the height of the level of 24 liters of water in a cylindrical container with a bottom diameter of 36 cm.
• Half-filled: A cylindrical pot with a diameter of 24 cm is half-filled with water. How many centimeters will the level rise if we add a liter of water to it?
• A cylindrical tank: A cylindrical tank can hold 44 cubic meters of water. If the radius of the tank is 3.5 meters, how high is the tank?
• Liquid: How much liquid does the rotary cylinder tank hold, with a base diameter of 1.2 m and a height of 1.5 m?
• Diameter of a cylinder: I need to calculate the cylinder volume with a height of 50 cm and a diameter of 30 cm.
• The pot: The pot is 1/3 filled with water. The bottom of the pot has an area of 329 cm². How many centimeters does the water level in the pot rise after adding 1.2 liters of water?
• Height of the cylinder: The cylinder volume is 150 dm³ and the base diameter is 100 cm. What is the height of the cylinder?
• Circular pool: The 3.6-meter pool has a depth of 90 cm. How many liters of water are in the pool?
• Bottle: A company wants to produce a bottle whose capacity is 1.25 liters. Find the dimensions of a cylinder that will be required to produce this 1.25 litres if the height of the cylinder must be 5 times the radius.
• Cylindrical tank 2: If a cylindrical tank with a volume of 12320 cm³ and a base of 28 cm is used to store water, how many liters of water can it hold?
• Juice box: The juice box has a volume of 200 ml. Its base is an isosceles triangle with sides a = 4.5 cm and a height of 3.4 cm. How tall is the box?
• The shop: The shop has 3 hectoliters of water. How many liter bottles is it?
• A swimming: A swimming pool holds 30000 l of water. How many gallons does it hold? 1 gallon = 4.55 l
• Hotel: The hotel has p floors, and each floor has i rooms, of which a third are single and the others are double. Represent the number of beds in the hotel.
• Expression: Solve for a specified variable: P = a + 4b + 3c, for a
• Simplify: Simplify the following problem and express as a decimal: 5.68 - [5 - (2.69 + 5.65 - 3.89) / 0.5]
• Third dimension: Calculate the third dimension of the cuboid: a) V = 224 m³, a = 7 m, b = 4 m; b) V = 216 dm³, a = 9 dm, c = 4 dm
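The same computation as a minimal Python sketch:

```python
from math import pi

D = 1.8            # container diameter in metres
V = 2000 / 1000    # 2000 litres expressed in cubic metres

r = D / 2
h = V / (pi * r ** 2)   # height = volume / base area
print(round(h, 3))      # 0.786 (metres)
```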
proofpile-shard-0030-320
{ "provenance": "003.jsonl.gz:321" }
We have listed many reasons here why you should never trust Boots. Here are the previous ones. This post is about a "functional food", something a bit more serious than homeopathy, though I'll return to that standing joke in the follow-up, because of Boots' latest shocking admission.

Alternative medicine advocates love to blame Big Pharma for every criticism of magic medicine. In contrast, people like me, Ben Goldacre and a host of others have often pointed out that the differences seem to get ever smaller between the huge alternative industry (about $60 billion per year) and the even huger regular pharmaceutical industry (around $600 billion per year). Boots are as good an example as any. While representing themselves as ethical pharmacists, they seem to have no compunction at all in highly deceptive advertising of medicines and supplements which are utterly useless rip-offs.

The easiest way to make money is to sell something that is alleged to cure a common but ill-defined problem that has a lot of spontaneous variability. Like stress, for example. The Times carried a piece, "Is Boots's new Lactium pill the solution to stress?". Needless to say, the question wasn't answered. It was more like an infomercial than serious journalism. Here is what Boots say.

"What does it do? This product contains Lactium, a unique ingredient which is proven to help with the stresses of every day life, helping you through a stressful day. Also contains B vitamins, magnesium and vitamin C, which help to support a healthy immune system and energy levels. Why is it different? This one a day supplement contains the patented ingredient Lactium. All Boots vitamins and suppliers are checked to ensure they meet our high quality and safety standards."

So what is this "unique ingredient", Lactium? It is produced by digestion of cow's milk with trypsin. It was patented in 1995 by the French company Ingredia. It is now distributed in the USA and Canada by Pharmachem, which describes itself as "a leader in the nutraceutical industry." Drink a glass of milk and your digestive system will make it for you. Free. Boots charge you £4.99 for only seven capsules.

What's the evidence?

The search doesn't start well. A search of the medical literature with PubMed for "lactium" produces no results at all. A search for "casein hydrolysate" gives quite a lot, but "casein hydrolysate AND stress" gives only seven, of which only one looks at effects in man: Messaoudi M, Lefranc-Millot C, Desor D, Demagny B, Bourdon L, Eur J Nutr (2005). There is a list of nineteen "studies" on the Pharmachem web site. That is where Boots sent me when I asked about evidence, so let's take a look. Of the nineteen studies, most are just advertising slide shows or unpublished stuff. Two appear to be duplicated. There are only two proper published papers worth looking at, and one of these is in rats, not man. The human paper first.

Paper 1. Effects of a Bovine Alpha S1-Casein Tryptic Hydrolysate (CTH) on Sleep Disorder in Japanese General Population, Zara de Saint-Hilaire, Michaël Messaoudi, Didier Desor and Toshinori Kobayashi [reprint here]. The authors come from France, Switzerland and Japan. This paper was published in The Open Sleep Journal, 2009, 2, 26-32, one of 200 or so open access journals published by Bentham Science Publishers. It has to be one of the worst clinical trials that I've encountered.
It was conducted on 32 subjects, healthy Japanese men and women aged 25-40 who had reported sleeping disorders. It was double blind and placebo controlled, so, apart from the fact that only 12 of the 32 subjects were in the control group, what went wrong?

The results were assessed as subjective sleep quality using the Japanese version of the Pittsburgh Sleep Quality Index (PSQI-J). This gave a total score and seven component scores: sleep quality, sleep latency, sleep duration, habitual sleep efficiency, sleep disturbances, use of sleeping medication, and daytime dysfunction.

In the results section we read, for the total PSQI score: "As shown in Table 2, the Mann-Whitney U-test did not show significant differences between CTH [casein tryptic hydrolysate] and Placebo groups in PSQI-J total scores at D0 (U=85; NS), D14 (U=86.5; NS), D28 (U=98.5; NS) and D35 (U=99.5; NS)."

Then we read exactly similar statements for the seven component scores. For example, for sleep quality: "As shown in Table 3, the Mann-Whitney U-test did not show significant differences between the sleep quality scores of CTH and Placebo groups at D0 (U=110.5; NS), D14 (U=108.5; NS), D28 (U=110; NS) and D35 (U=108.5; NS)."

The discussion states: "The comparisons between the two groups with the test of Mann-Whitney did not show significant differences, probably because of the control product's placebo effect. Despite everything, the paired comparisons with the test of Wilcoxon show interesting effects of CTH on sleep disorders of the treated subjects."

Aha, so those pesky controls are to blame! But despite this negative result the abstract of the paper says: "CTH significantly improves the PSQI total score of the treated subjects. It particularly improves the sleep quality after two weeks of treatment, decreases the sleep latency and the daytime dysfunction after four weeks of treatment. Given the antistress properties of CTH, it seems possible to relate the detected improvement of sleep aspects to a reduction of stress following its' chronic administration."

So there seems to be a direct contradiction between the actual results and the announced outcome of the trial. How could this happen? The way that the results are presented makes it hard to tell. As far as I can tell, the answer is that, having failed to find evidence of real differences between CTH and placebo, the authors gave up on the placebo control and looked simply at the change from the day 0 baseline values within the CTH group and, separately, within the placebo group. Some of these differences did pass statistical significance, but if you analyse the data that way there is no point in having a control group at all (the toy simulation below illustrates the trap).

How on earth did such a poor paper get published in a peer-reviewed journal? One answer is that there are now so many peer-reviewed journals that just about any paper, however poor, can get published in some journal that describes itself as 'peer-reviewed'. At the lower end of the status hierarchy, the system is simply broken.

Bentham Science Publishers are the publishers of The Open Sleep Journal (pity they saw fit to hijack the name of UCL's spiritual founder, Jeremy Bentham). They publish 92 online and print journals, 200 plus open access journals, and related print/online book series. This publisher has a less than perfect reputation. There can be no scientist of any age or reputation who hasn't had dozens of emails begging them to become editors of one or other of their journals, or to write something for them.
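Coming back to the trial's statistics for a moment: the following toy simulation, with entirely hypothetical numbers (not the trial's data), shows why within-group Wilcoxon tests cannot rescue a null between-group result. Both "arms" improve spontaneously by the same amount, yet the paired tests within each arm come out "significant" while the between-group comparison does not:

```python
import numpy as np
from scipy.stats import mannwhitneyu, wilcoxon

rng = np.random.default_rng(1)

# Hypothetical numbers: 20 'treated' and 12 'placebo' subjects whose
# scores improve spontaneously by ~2 points in BOTH arms (no real effect).
pre_t = rng.normal(10, 2, 20)
pre_p = rng.normal(10, 2, 12)
post_t = pre_t - 2 + rng.normal(0, 1, 20)
post_p = pre_p - 2 + rng.normal(0, 1, 12)

# Between-group comparison at follow-up: no significant difference.
print(mannwhitneyu(post_t, post_p))

# Within-group paired tests: both arms 'improve significantly',
# which says nothing at all about the treatment.
print(wilcoxon(pre_t, post_t))
print(wilcoxon(pre_p, post_p))
```

Both within-group p-values will typically be tiny here, purely because of the shared spontaneous improvement; only the between-group comparison addresses the treatment.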
They have been described as a "pyramid scheme" for open access. It seems that every Tom, Dick and Harry has been asked. They have been described under the heading "Black sheep among Open Access Journals and Publishers". More background can be found at Open Access News. Most telling of all, a spoof paper was sent to a Bentham journal, The Open Information Science Journal. There is a good account of the episode in the New Scientist, under the title "CRAP paper accepted by journal". It was the initiative of a graduate student at Cornell University. After getting emails from Bentham, he said "It really painted a picture of vanity publishing". The spoof paper was computer-generated rubbish, but it was accepted anyway, without comment. Not only did it appear that it was never reviewed, but the editors even failed to notice that the authors said the paper came from the "Center for Research in Applied Phrenology", or CRAP. The publication fee was $800, to be sent to a PO Box in the United Arab Emirates. Having made the point, the authors withdrew the paper.

Paper 5 in the list of nineteen studies is also worth a look. It's about rats, not humans, but it is in a respectable journal: The FASEB Journal Express Article doi:10.1096/fj.00-0685fje (published online June 8, 2001) [reprint here]. Characterization of α-casozepine, a tryptic peptide from bovine αs1-casein with benzodiazepine-like activity, Laurent Miclo et al. This paper provides the basis for the claim that digested milk has an action like the benzodiazepine class of drugs, which includes diazepam (Valium). The milk hydrolysate, lactium, was tested in rats and found to have some activity in tests that are alleged to measure effects on anxiety (I haven't looked closely at the data, since the claims relate to humans). The milk protein, bovine αS1-casein, contains 214 amino acids. One of the many products of its digestion is a 10-amino-acid fragment (residues 91-100) known as α-casozepine, and this is the only product that was found to have an affinity for the γ-amino-butyric acid (GABA) type A receptors, which is where benzodiazepines are thought to act. There are a few snags with this idea.

• The α-casozepine peptide had a 10,000-fold lower affinity for the benzodiazepine site of the GABA-A receptor than diazepam, whereas allegedly the peptide was 10-fold more potent than diazepam in one of the rat tests.
• There is no statement anywhere of how much of the α-casozepine peptide is present in the stuff sold by Boots, or whether it can be absorbed.
• And if digested milk did act like diazepam, it should clearly be called a drug, not a food.

Here is what I make of it. Does it relieve stress? The evidence that it works any better than drinking a glass of milk is negligible. The advertising is grossly misleading and the price is extortionate.

Corruption of science

There is a more interesting aspect than that, though. The case of lactium isn't quite like the regular sort of alternative medicine scam. It isn't inherently absurd, like homeopathy. The science isn't the sort of ridiculous pseudo-scientific rambling of magic medicine advocates who pretend it is all quantum theory. The papers cited here are real papers, using real instruments and published in real journals. What is interesting about that is that they show very clearly the corruption of real science that occurs at its fringes. This is science in the service of the dairy industry and in the service of the vast supplements industry.
These are people who want to sell you a supplement for everything. Medical claims are made for supplements, yet loopholes in the law are exploited to maintain that they are foods, not drugs. The law and the companies that exploit it are deeply dishonest. That's bad enough, but the real tragedy is when science itself is corrupted in the service of sales.

Big Pharma and the alternative industry

Nowhere is the close alliance between Big Pharma and the alternative medicine industry more obvious than in the supplement and nutraceutical markets. Often the same companies run both. Their aim is to sell you things that you don't need, for conditions that you may well not have, and to lighten your wallet in the process. Don't believe for a moment that the dark-suited executives give a bugger about your health. You are a market to be exploited. If you doubt that, look from time to time at one of the nutraceutical industry web sites, like nutraingredients.com. They even have a bit to say about lactium. They are particularly amusing at the moment because the European Food Safety Authority (EFSA) has had the temerity to demand that when health claims are made for foods, there is actually some small element of truth in the claims. The level of righteous indignation caused in the young food industry executives by the thought that they might have to tell the truth is everywhere to be seen. For example, try "Life in a European health claims wasteland". Or, more relevant to Lactium, "Opportunity remains in dairy bioactives despite departures". Here's a quotation from that one.

"Tage Affertsholt, managing partner at 3A Business Consulting, told NutraIngredients.com that the feedback from industry is that the very restrictive approach to health claims adopted by the European Food Safety Authority (EFSA) will hamper growth potential."

"Affertsholt said: "Some companies are giving up and leaving the game to concentrate on more traditional dairy ingredients.""

Science and government policy

It may not have escaped your notice that the sort of low grade, corrupted, fringe science described here is precisely the sort that is being encouraged by government policies. You are expected to get lots of publications, so never mind the details, just churn 'em out. The hundreds of new journals that have been created will allow you to get as many peer-reviewed publications as you want without too much fuss, and you can very easily put an editorship of one of them on your CV when you fill in that bit about indicators of esteem. The box tickers in HR will never know that it's a Mickey Mouse journal.

Follow-up

Boots own up to selling crap

Although this post was nothing to do with joke subjects like homeopathy, it isn't possible to write about Boots without mentioning the performance of their professional standards director, Paul Bennett, when he appeared before the Parliamentary Select Committee for Science and Technology. This committee was holding an "evidence check" session on homeopathy (it's nothing short of surreal that this should be happening in 2009, eh?). The video can be seen here, and an uncorrected transcript is available. It is quite fun in places. You can also read the written evidence that was submitted. Even the Daily Mail didn't miss this one. Fiona Macrae wrote "Boots boss admits they sell homeopathic remedies 'because they're popular, not because they work'":

"It could go down as a Boot in Mouth moment.
Yesterday, the company that boasts shelf upon shelf of arnica, St John's wort, flower remedies and calendula cream admitted that homeopathy doesn't necessarily work. But it does sell. Which, according to Paul Bennett, the man from Boots, is why the pharmacy chain stocks such products in the first place. Mr Bennett, professional standards director for Boots, told a committee of MPs that there was no medical evidence that homeopathic pills and potions work. 'There is certainly a consumer demand for these products,' he said. 'I have no evidence to suggest they are efficacious. It is about consumer choice for us and a large number of our customers believe they are efficacious.' His declaration recalls Gerald Ratner's infamous admission in 1991 that one of the gifts sold by his chain of jewellers was 'total crap'."

The Times noticed too, with "Boots labels homeopathy as effective despite lack of evidence". Now you know that you can't trust Boots. You heard it from the mouth of their professional standards director. A commentary on the meeting by a clinical scientist summed up Bennett's contribution thus: "Paul Bennett from Boots had to admit that there was no evidence, but regaled the committee with the mealy-mouthed flannel about customer choice that we have come to expect from his amoral employer." Well said. The third session of the Scitech evidence check can be seen here, and the uncorrected transcript is here. It is, in a grim way, pure comedy gold. More of that later.

We have often had cause to criticise Boots Alliance, the biggest retail pharmacist in the UK, because of its deeply unethical approach to junk medicine. Click here to read the shameful litany. The problem of Boots was raised recently also by Edzard Ernst at the Hay Literary Festival. He said "The population at large trusts Boots more than any other pharmacy, but when you look behind the smokescreen, when it comes to alternative medicines, that trust is not justified." Ernst accused Boots of breaching ethical guidelines drawn up by the Royal Pharmaceutical Society of Great Britain, by failing to tell customers that its homeopathic medicines contain no active ingredients and are ineffective in clinical trials. Another chain, Lloyds Pharmacy, are just as bad. Many smaller pharmacies are no more honest when it comes to selling medicines that are known to be ineffective.

Pharmacists are fond of referring to themselves as "professionals" who are regulated by a professional body, the Royal Pharmaceutical Society of Great Britain (RPSGB). It's natural to ask where their regulatory body stands on the question of junk medicine. So I asked them, and this is what I found.

17 April, 2008

I am writing an article about the role of pharmacists in giving advice about (a) alternative medicines and (b) nutritional supplements. I can find no clear statements about these topics on the RPSGB web site. Please can you give me a statement on the position of the Royal Pharmaceutical Society on these two topics. In particular, have you offered guidance to pharmacists about how to deal with the conflict of interest that arises when they can make money by selling something that they know to have no good evidence for efficacy? This question has had some publicity recently in connection with Boots' promotion of CoQ10 to give you "energy", and only yesterday, when the bad effects of some nutritional supplements were in the news.

Here are some extracts from the first reply that I got from the RPSGB's Legal and Ethical Advisory Service (emphasis is mine).
28 April 2008

Pharmacists must comply with the Code of Ethics and its supporting documents. Principle 5 of the Code of Ethics requires pharmacists to develop their professional knowledge and competence, whilst Principle 6 requires pharmacists to be honest and trustworthy. The Code states:

5. DEVELOP YOUR PROFESSIONAL KNOWLEDGE AND COMPETENCE

At all stages of your professional working life you must ensure that your knowledge, skills and performance are of a high quality, up to date and relevant to your field of practice. You must:
5.1 Maintain and improve the quality of your work by keeping your knowledge and skills up to date, evidence-based and relevant to your role and responsibilities.
5.2 Apply your knowledge and skills appropriately to your professional responsibilities.
5.3 Recognise the limits of your professional competence; practise only in those areas in which you are competent to do so and refer to others where necessary.
5.4 Undertake and maintain up-to-date evidence of continuing professional development relevant to your field of practice.

6. BE HONEST AND TRUSTWORTHY

Patients, colleagues and the public at large place their trust in you as a pharmacy professional. You must behave in a way that justifies this trust and maintains the reputation of your profession. You must:
6.1 Uphold public trust and confidence in your profession by acting with honesty and integrity.
6.2 Ensure you do not abuse your professional position or exploit the vulnerability or lack of knowledge of others.
6.3 Avoid conflicts of interest and declare any personal or professional interests to those who may be affected. Do not ask for or accept gifts, inducements, hospitality or referrals that may affect, or be perceived to affect, your professional judgement.
6.4 Be accurate and impartial when teaching others and when providing or publishing information to ensure that you do not mislead others or make claims that cannot be justified.

And, on over-the-counter prescribing:

In addition, the "Professional Standards and Guidance for the Sale and Supply of Medicines" document which supports the Code of Ethics states:

"2. SUPPLY OF OVER THE COUNTER (OTC) MEDICINES STANDARDS
When purchasing medicines from pharmacies patients expect to be provided with high quality, relevant information in a manner they can easily understand. You must ensure that:
2.1 procedures for sales of OTC medicines enable intervention and professional advice to be given whenever this can assist the safe and effective use of medicines. Pharmacy medicines must not be accessible to the public by self-selection."

Evidence-based? Accurate and impartial? High quality information? Effective use? These words don't seem to accord with Boots' mendacious advertisements for CoQ10 (which were condemned by the ASA). Nor do they accord with the appalling advice that I got from a Boots pharmacist about Vitamin B for vitality, or the unspeakable nonsense of the Boots (mis)education web site.

Then we get to the nub. This is what I was told by the RPSGB about alternative medicine (the emphasis is mine).

8. COMPLEMENTARY THERAPIES AND MEDICINES STANDARDS

You must ensure that you are competent in any area in which you offer advice on treatment or medicines. If you sell or supply homoeopathic or herbal medicines, or other complementary therapies, you must:
8.1 assist patients in making informed decisions by providing them with necessary and relevant information.
8.2 ensure any stock is obtained from a reputable source.
8.3 recommend a remedy only where you can be satisfied of its safety and quality, taking into account the Medicines and Healthcare products Regulatory Agency registration schemes for homoeopathic and herbal remedies."

Therefore pharmacists are required to keep their knowledge and skills up to date and provide accurate and impartial information, to ensure that they do not mislead others or make claims that cannot be justified. It does seem very odd that "accurate and impartial information" about homeopathic pills does not include mentioning that they contain no trace of the ingredient on the label, and have been shown in clinical trials to be ineffective. These rather important bits of information are missing both from advertisements and from (in my experience) the advice given by pharmacists in the shop.

If you look carefully, though, the wording is a bit sneaky. Referring to over-the-counter medicines, the code refers to the "safe and effective use of medicines", but when it comes to alternative medicines, all mention of 'effectiveness' has mysteriously vanished. So I wrote again to get clarification.

29 April, 2008

Thanks for that information. I'd appreciate clarification of two matters in what you sent. (1) Apropos of complementary and alternative medicine, the code says "8.3 recommend a remedy only where you can be satisfied of its safety and quality". I notice that this paragraph mentions safety and quality but does not mention efficacy. Does this mean that it is considered ethical to recommend a medicine when there is no evidence of its efficacy? This gets to the heart of my question and I'd appreciate a clear answer.

Apparently it does. This enquiry was followed by a long silence. Despite several reminders by email and by telephone, nothing happened until I eventually got a phone call over a month later (May 3) from David Pruce, Director of Practice & Quality Improvement, Royal Pharmaceutical Society of Great Britain. The question may be simple, but the RPSGB evidently found it hard, or more likely embarrassing, to answer. When I asked Pruce why para 8.3 does not mention effectiveness, his reply, after some circumlocution, was as follows.

Pruce: "You must assist patients in making informed decisions by providing necessary and relevant information . . . we would apply this to any medicine; the pharmacist needs to help the patient assess the risks and benefits."
DC: "And would that include saying it doesn't work better than placebo?"
Pruce: "If there is good evidence to show that it [...] may, but it depends on what the evidence is, what the level of evidence is, and the pharmacist's assessment of the evidence."
DC: "What's your assessment of the evidence?"
Pruce: "I don't think my personal assessment is relevant. I wouldn't want to be drawn on my personal assessment." "If a pharmacist is selling homeopathic medicines they have to assist the patient in making informed decisions." "I don't think we specifically talk about the efficacy of any other medicine." [DC: not true, see para 2.1, above] "We would expect pharmacists to be making sure that what they are providing to a patient is safe and efficacious."
DC: "So why doesn't it mention 'efficacious' in para 8.3?"
Pruce: "What we are trying to do with the Code of Ethics is not go down to the nth degree of detail" . . . "there are large areas of medicine where there is an absence of data."
DC: "Yes, actually homeopathy isn't one of them. It used to be."
“uh, that’s again a debatable point” DC I don’t think it’s debatable at all, if you’ve read the literature Pruce. “well many people would debate that point” “This [homeopathy] is a controversial area where opinions are divided on it” DC “Not informed opinions” Pruce “Well . . . there are also a large number of people that do believe in it. We haven’t come out with a categorical statement either way.” I came away from this deeply unsatisfactory conversation with a strong impression that the RPSGB’s Director of Practice & Quality Improvement was either not familiar with the evidence, or had been told not to say anything about it, in the absence of any official statement about alternative medicine. I do hope that the RPSGB does not really believe that “there are also a large number of people that do believe in it” constitutes any sort of evidence. It is high time that the RPSGB followed its own code of ethics and required, as it does for over-the-counter sales, that accurate advice should be given about “the safe and effective use of medicines”. “The scientist on the High Street” The RPS publishes a series of factsheets for their “Scientist in the High Street” campaign. One of these “factsheets” concerns homeopathy, [download pdf from the RPSGB]. Perhaps we can get an answer there? Well not much. For the most part the “factsheet” just mouths the vacuous gobbledygook of homeopaths. It does recover a bit towards the end, when it says “The methodologically “best” trials showed no effect greater than that of placebo”. But there is no hint that this means pharmacists should not be selling homeopathic pills to sick people.. That is perhaps not surprising, because the Science Committee of the RPSGB copped out of their responsibility by getting the factsheet written by a Glasgow veterinary homeopath, Steven Kayne. You can judge his critical attitude by a paper (Isbell & Kayne, 1997) which asks whether the idea that shaking a solution increases its potency. The paper is a masterpiece of prevarication, it quotes only homeopaths and fails to come to the obvious conclusion. And it is the same Steven Kayne who wrote in Health and Homeopathy (2001) “Homeopathy is not very good for treating bacterial infections directly, apart from cystitis that often responds to a number of medicines, including Berberis or Cantharis”. So there is a bacterial infection that can be cured by pills that contain no medicine? Is this dangerous nonsense what the RPSGB really believes? While waiting for the train to Cardiff on April 16th (to give a seminar at the Welsh School of Pharmacy), I amused myself by dropping into the Boots store on Paddington station. DC I’ve seen your advertisements for CoQ10. Can you tell me more? Will they really make me more energetic? Boots: Yes they will, but you may have to take them for several weeks. DC. Several weeks? Boots: yes the effect develops only slowly Peers at the label and reads it out to me DC I see. Can you tell me whether there have been any trials that show it works? Boots. I don’t know. I’d have to ask. But there must be or they wouldn’t be allowed to sell it. DC. Actually there are no trials, you know Boots. Really? I didn’t think that was allowed. But people have told me that they feel better after taking it. DC You are a pharmacist? Boots. Yes Sadly, this abysmal performance is only too typical in my experience, Try it yourself. 
The malaria question

After it was revealed that pharmacists were recommending, or tolerating recommendations of, homeopathic treatment of malaria, the RPSGB did, at last, speak out. It was this episode that caused Quackometer to write his now famous piece on 'The gentle art of homeopathic killing' (it shot to fame when the Society of Homeopaths tried to take legal action to ban it). Recommending pills that contain no medicine for the prevention or treatment of malaria is dangerous. If it is not criminal, it ought to be [watch the Neals Yard video]. The RPSGB says it is investigating the role of pharmacists in the Newsnight sting (see the follow-up here). That was in July 2006, but they are still unwilling to say if any action will be taken. Anyone want to bet that it will be swept under the carpet?

The statement issued by the RPSGB, 5 months after the malaria sting, is just about the only example that I can find of them speaking out against dangerous and fraudulent homeopathic practices. Even in this case, it is pretty mild and restricted narrowly to malaria prevention.

The RPSGB and the Quacktioner Royal

The RPSGB submitted a response to the 'consultation' held by the Prince's Foundation for Integrated Health, about their "Complementary Healthcare: a guide for patients".

Response by the Royal Pharmaceutical Society of Great Britain, Dr John Clements, Science Secretary: "We believe that more emphasis should be given to the need for members of the public who are purchasing products (as opposed to services) to ask for advice about the product. Pharmacists are trained as experts on medicines and the public, when making purchases in pharmacies, would expect to seek advice from pharmacists."

So plenty of puffery for the role of pharmacists. But there is not a word of criticism about the many barmy treatments that are included in the "Guide for Patients". Not just homeopathy and herbalism, but also craniosacral therapy, laying on of hands, chiropractic, Reiki, Shiatsu: every form of barminess under the sun drew no comment from the RPS. I can't see how a response like this is consistent with the RPS's own code of ethics.

A recent president of the RPSGB was a homeopath

Christine Glover provides perhaps the most dramatic reason of all for thinking that, despite all the fine words, the RPSGB cares little for evidence and truth. The NHS Blogdoctor published "Letter from an angry pharmacist". Mrs Glover was president of the RPSGB from 1999 to 2001, vice-president in 1997-98, and a member of the RPSGB Council until May 2005. She is not just a member, but a Fellow. (Oddly, her own web site says President from 1998-2001.) So it is relevant to ask how the RPSGB's own ex-president obeys their code of ethics. Here are some examples of how Ms Glover helps to assist the safe and effective use of medicines. Much of her own web site seems to have vanished (I wonder why), so I'll have to quote the "Letter from an angry pharmacist", as revealed by NHS Blogdoctor:

"What has Christine got to offer?

"We offer a wide range of Homeopathic remedies (over 3000 different remedies and potencies) as well as Bach flower remedies, Vitamins, Supplements, some herbal products and Essential Oils."

Jetlag Tablets: highly recommended in 'Wanderlust' travel magazine. Suitable for all ages.

Wind Remedy: useful for wind, particularly in babies. It can be supplied in powder form for very small babies. Granules or as liquid potency.
Udder Care: 100 ml, £80.00. One capful in sprayer filled with water. Two jets to be squirted on inner vulva twice daily for up to 4 days until clots reduced. Discard remainder. Same dose for high cell-counting cows detected.

Udder Care? Oh! I forgot to say: "Glover's Integrated Healthcare" does cows as well as people. Dr Crippen would not suggest to a woman with sore breasts that she sprayed something on her inner vulva. But women are women and cows are cows, and Dr Crippen is not an expert on bovine anatomy and physiology. But, were he a farmer, he would need some persuasion to spend £80.00 on 100 ml of a liquid to squirt on a cow's vulva. Sorry, inner vulva."

Nothing shows more clearly that the RPSGB will tolerate almost any quackery than the fact that they think Glover is an appropriate person to be president. Every item in the quotation above seems to me to be in flagrant breach of the RPSGB's Code of Ethics. As with the Society of Homeopaths, the code seems to be there merely for show, at least where advice about junk medicine is concerned.

A greater role for pharmacists?

This problem has become more important now that the government proposes to give pharmacists a greater role in prescribing. Needless to say, the RPSGB is gloating about their proposed new role. Other people are much less sure it is anything but a money-saving gimmick and crypto-privatisation. I have known pharmacists who have a detailed knowledge of the actions of drugs, and I have met many more who haven't. The main objection, though, is that pharmacists have a direct financial interest in their prescribing. Conflicts of interest are already rife in medicine, and we can't afford them.

Conclusion

The Royal Pharmaceutical Society is desperately evasive about a matter that is central to its very existence: giving good advice to patients about which medicines work and which don't. Pharmacists should be in the front line of public education about medicines, as the 'scientist on the High Street'. Some of them are, but their professional organisation is letting them down badly. Until such time as the RPSGB decides to take notice of evidence, and clears up some of the things described here, it is hard to see how they can earn the respect of pharmacists, or of anyone else.

Follow-up

Stavros Isaiadis' blog, Burning Mind, has done a good piece on "More on Quack Medicine in High Street Shops". The Chemist and Druggist reports that the RPSGB is worried about the marketing of placebo pills ('obecalp': geddit?). It does seem very odd that the RPSGB should condemn honest placebos, but be so very tolerant of dishonest placebos. You couldn't make it up.

A complaint to the RPSGB is rejected

Just to see what happened, I made a complaint to the RPSGB about breaches of their own Code of Ethics at Boots in Hexham and in Evesham. Both of them supported Homeopathy Awareness Week. These events had been publicised in those particularly unpleasant local 'newspapers' that carry paid advertising disguised as editorial material; in this case it was the Evesham Journal and the Hexham Courant. Guess what? The RPSGB replied thus:

"Your complaint has been reviewed by Mrs Jill Williams and Mr David Slater, who are both Regional Lead Inspectors.
Having carried out a review they have concluded that support of homeopathic awareness week does not constitute a breach of the Society's Code of Ethics or Professional Standards."

In case you have forgotten, the Professional Standards say: "2.1 procedures for sales of OTC medicines enable intervention and professional advice to be given whenever this can assist the safe and effective use of medicines."

The RPSGB has some very quaint ideas on how to interpret their own code of ethics.
proofpile-shard-0030-321
{ "provenance": "003.jsonl.gz:322" }
# Prove that $V^n$ and $\mathcal{L}(\mathbf{F}^n,V)$ are isomorphic vector spaces

For $n$ a positive integer, define $V^n$ by $V^n=\underbrace{V\times...\times V}_{n \ times}$. Prove that $V^n$ and $\mathcal{L}(\mathbf{F}^n,V)$ are isomorphic vector spaces.

I would like to know if my proof holds and to get some feedback, please. ($\mathbf{F}$ denotes a field here.)

Let $(v_1,...,v_n)$ be a basis of $V$. So each element in $V$ can be expressed as $\lambda_1 v_1+...+\lambda_n v_n$ for $\lambda_1,...,\lambda_n \in \mathbf{F}$. Let $\xi:\mathbf{F}^n\to V$, $\xi(\lambda_1,...,\lambda_n)=\lambda_1 v_1+...+\lambda_n v_n$ and define $\psi: V^n\to \mathcal{L}(\mathbf{F}^n,V)$ as $\psi (\lambda_1 v_1+...+\lambda_n v_n,...,\lambda_1 v_1+...+\lambda_n v_n)=\xi(\lambda_1,...,\lambda_n)$. Clearly $\psi$ is a linear map (it is easy to check). We show now that $\psi$ is injective. $\psi(\lambda_1 v_1+...+\lambda_n v_n,...,\lambda_1 v_1+...+\lambda_n v_n)=\xi(\lambda_1,...,\lambda_n)=\lambda_1v_1+...+\lambda_nv_n=0 \iff \lambda_1=...=\lambda_n=0$, because $(v_1,...,v_n)$ is linearly independent in $V$. So $\lambda_1 v_1+...+\lambda_n v_n,...,\lambda_1 v_1+...+\lambda_n v_n=0$ and we conclude that $\psi$ is injective. Moreover, the dimension of $V^n$ is equal to the dimension of $\mathcal{L}(\mathbf{F}^n,V)$. Thus, by the fundamental theorem of linear maps we conclude that $\psi$ is surjective. Therefore, $\psi$ is an isomorphism.

• What is $F$ here? Apr 11 '21 at 18:34
• @mathcounterexamples.net just a field. Sorry, I made some mistakes in my proof; I'm correcting it right now. Apr 11 '21 at 18:34
• $F$ is the field over which the vector space $V$ is defined. Apr 11 '21 at 18:39
• I'm a bit picky. Considering your proof, you're also making the hypothesis that $V$ is of finite dimension. Apr 11 '21 at 18:41
• You have several issues to fix. First, use different variables for $n$ and the dimension of $V$. As you use the same to denote two things, you're creating confusion. Second, your $\xi$ depends on $(v_1, \dots, v_n)$. You should reflect that in your notation. Apr 11 '21 at 18:53

In fact the result is true whatever the dimension of $V$ is.

$$\begin{array}{l|rcl} \Phi : & V^n & \longrightarrow & \mathcal L(F^n,V)\\ & (v_1,\dots,v_n) & \longmapsto & (\lambda_1, \dots, \lambda_n) \mapsto \lambda_1v_1+ \dots + \lambda_n v_n\end{array}$$

$\Phi$ is linear; it is injective, since its kernel consists of the zero vector alone, and it is surjective.

• Oh, alright. Thank you. But where do $\lambda_1,...,\lambda_n$ come from if you don't consider a dimension of $V$? And why would $(\lambda_1,...,\lambda_n)\to \lambda_1 v_1+...+\lambda_n v_n$ be well defined if we don't consider $V$'s dimension? Apr 11 '21 at 18:59
• $\lambda_1, \dots, \lambda_n$ are just variables from $F$. The image of an element of $V^n$ under $\Phi$ is a map. Apr 12 '21 at 5:55
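To spell out why the $\Phi$ in the accepted answer is bijective (a standard argument; here $e_1,\dots,e_n$ denotes the standard basis of $\mathbf{F}^n$), one can exhibit an explicit inverse:

$$\Psi : \mathcal{L}(\mathbf{F}^n, V) \to V^n, \qquad \Psi(f) = \bigl(f(e_1), \dots, f(e_n)\bigr).$$

Indeed, $\Psi(\Phi(v_1,\dots,v_n)) = (v_1,\dots,v_n)$, since $\Phi(v_1,\dots,v_n)$ sends $e_i \mapsto v_i$; and $\Phi(\Psi(f)) = f$, because both sides are linear maps on $\mathbf{F}^n$ that agree on the basis $e_1,\dots,e_n$. Hence $\Phi$ is an isomorphism, with no finite-dimensionality assumption on $V$.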
proofpile-shard-0030-322
{ "provenance": "003.jsonl.gz:323" }
# Which of the following has the greatest concentration of hydrogen ions? How is this determined?

Cola (pH 3)

#### Explanation:

pH is a shorthand numerical value that represents the concentration of hydrogen ions in solution. The scale 0-14 represents the usual range of possible concentrations of protons (H^+) in an aqueous system. We can take H^+ to mean the same as H_3O^+ (a water molecule carries the H^+).

"pH" is the negative log of the hydrogen ion concentration, meaning that:

"pH" = -log[H_3O^+] = -log[H^+], and therefore 10^(-"pH") = [H^+]

10^(-3) = 1.0 × 10^(-3) mol/L

Let's simplify:

pH = 3 → [H^+] = 1 × 10^(-3) mol/L
pH = 5 → [H^+] = 1 × 10^(-5) mol/L
pH = 7 → [H^+] = 1 × 10^(-7) mol/L
pH = 9 → [H^+] = 1 × 10^(-9) mol/L
pH = 11 → [H^+] = 1 × 10^(-11) mol/L

This is counter-intuitive without the background information (i.e. the negative log), but as you increase pH, you decrease the concentration of H^+, and as you decrease the pH, you increase the concentration of H^+.
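A quick numerical check of the pH-to-concentration conversion, as a minimal Python sketch:

```python
import math

def h_concentration(pH):
    """[H+] in mol/L from pH."""
    return 10.0 ** (-pH)

def ph(h_conc):
    """pH from [H+] in mol/L."""
    return -math.log10(h_conc)

print(h_concentration(3))   # 0.001 mol/L: cola, the highest [H+] here
print(ph(1e-7))             # 7.0: neutral water
```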
proofpile-shard-0030-323
{ "provenance": "003.jsonl.gz:324" }
# If Z is the compressibility factor, the van der Waals equation at low pressure can be written as:

$(a)\;Z=1+\large\frac{RT}{Pb} \qquad(b)\;Z=1-\large\frac{a}{VRT}\qquad(c)\;Z=1-\large\frac{Pb}{RT}\qquad(d)\;Z=1+\large\frac{Pb}{RT}$
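A sketch of the standard low-pressure reduction, which picks out option (b). At low pressure the molar volume $V$ is large, so the terms containing the co-volume $b$ may be dropped:

$$\left(P+\frac{a}{V^2}\right)(V-b)=RT \quad\Longrightarrow\quad PV+\frac{a}{V}\approx RT \quad\Longrightarrow\quad Z=\frac{PV}{RT}=1-\frac{a}{VRT}.$$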
proofpile-shard-0030-324
{ "provenance": "003.jsonl.gz:325" }
## Thursday, 18 April 2013

Necessary and Sufficient Conditions for the General Second Degree Equation $$Ax^2+2Hxy+By^2+2Gx+2Fy+C=0$$ to Represent a Pair of Straight Lines:

$$\Delta =\begin{vmatrix} A & H &G\\ H& B & F\\ G& F& C \end{vmatrix} =0$$

Given that $$\Delta = 0$$:

Case 1: if $$H^2 > AB$$, they are a pair of intersecting straight lines.

Case 2: if $$H^2 = AB$$, they are a pair of parallel straight lines.

Case 3: if $$H^2 < AB$$, the equation represents a single point in the plane (a pair of imaginary lines meeting at a real point).
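A quick computational check of these conditions, as a small Python sketch (the helper name and example are mine, not from the original post); it uses the coefficient convention of the equation above:

```python
def classify_conic(A, H, B, G, F, C):
    """Classify A x^2 + 2H xy + B y^2 + 2G x + 2F y + C = 0 as a line pair."""
    # determinant of the symmetric coefficient matrix [[A,H,G],[H,B,F],[G,F,C]]
    delta = A * (B * C - F * F) - H * (H * C - F * G) + G * (H * F - B * G)
    if delta != 0:
        return "not a pair of straight lines"
    d = H * H - A * B
    if d > 0:
        return "pair of intersecting lines"
    if d == 0:
        return "pair of parallel lines"
    return "a single real point (pair of imaginary lines)"

# (x - y)(x + y) = x^2 - y^2 = 0: A=1, H=0, B=-1, G=F=C=0
print(classify_conic(1, 0, -1, 0, 0, 0))   # pair of intersecting lines
```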
proofpile-shard-0030-325
{ "provenance": "003.jsonl.gz:326" }
## Circle of grass

There is a circle of grass with radius R. We want to let a sheep eat the grass from that circle by attaching the sheep's leash to the edge of the circle. What must be the length of the leash for the sheep to eat exactly half of the grass?

This question is taken from physicsforum.

Solution: Let the length of the leash be l. Then $\sqrt{2} R > l > R.$ The exact value requires solving a transcendental equation for the area of the lens-shaped overlap of the two circles; numerically, l ≈ 1.1587 R, as the sketch below shows.
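A numerical solution sketch, assuming SciPy is available: the grazed region is the intersection of the grass circle (radius R) and the leash circle (radius l) whose center lies on the boundary, so the centers are a distance d = R apart. We solve for the l that makes that lens area equal to half the grass area:

```python
from math import acos, sqrt, pi
from scipy.optimize import brentq

R = 1.0  # grass-circle radius; the answer scales linearly with R

def grazed_area(l):
    # standard circle-circle intersection area, centers d = R apart
    d = R
    a1 = l**2 * acos((d**2 + l**2 - R**2) / (2 * d * l))
    a2 = R**2 * acos((d**2 + R**2 - l**2) / (2 * d * R))
    a3 = 0.5 * sqrt((-d + l + R) * (d + l - R) * (d - l + R) * (d + l + R))
    return a1 + a2 - a3

# the bounds R < l < sqrt(2) R from the solution bracket the root
l = brentq(lambda x: grazed_area(x) - pi * R**2 / 2, R, sqrt(2) * R)
print(l)   # ~1.1587 (times R)
```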
proofpile-shard-0030-326
{ "provenance": "003.jsonl.gz:327" }
Here I've got a sum that I have to reorder into a quadratic equation so I can find its two roots, but I'm struggling to understand how. Once it is in quadratic form, I can easily find the roots; the problem for me is getting it into that form.

18/x^4 + 1/x^2 = 4

Now what I did was put the x^4 on the other side, so it's:

x^4 - x^-2 - 18 = 0

Is this right? According to the mark scheme, I had to add the two fractions together, but I don't see why that is the right option. Any help? :-)

2. Re: Rearranging to quadratic equation

Originally Posted by yorkey
[...]

Not quite. What you need to do is get rid of the fractions. To do this, multiply both sides by the largest-order denominator, in this case \displaystyle \begin{align*} x^4 \end{align*}. This will give

\displaystyle \begin{align*} \frac{18}{x^4} + \frac{1}{x^2} &= 4 \\ x^4 \left( \frac{18}{x^4} + \frac{1}{x^2} \right) &= 4x^4 \\ 18 + x^2 &= 4x^4 \\ 0 &= 4x^4 - x^2 - 18 \\ 0 &= 4X^2 - X - 18 \textrm{ if we let } X = x^2 \end{align*}

Now solve for \displaystyle \begin{align*} X \end{align*}, and use this to solve for \displaystyle \begin{align*} x\end{align*}.

3. Re: Rearranging to quadratic equation

It's strange, I don't know this rule. Sorry to keep you, but could you just lay out the whole "multiply both sides by the largest-order denominator" idea in a more general way so I can remember it? So: if I have 2 denominators with variables of different exponents, I multiply the entire LHS and the entire RHS by this value, and then simplify. Is that it?

4. Re: Rearranging to quadratic equation

Well, the reason is that if you try to add the fractions, you need a common denominator. Then, once they're added, to simplify so that you can solve the equation, you need to multiply both sides by the denominator. Try it.
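A quick check of the roots with SymPy (a sketch, not part of the original thread):

```python
import sympy as sp

x = sp.symbols('x')
solutions = sp.solve(sp.Eq(18 / x**4 + 1 / x**2, 4), x)
print(solutions)
# With X = x^2 we get 4X^2 - X - 18 = 0, so X = 9/4 or X = -2:
# real roots x = -3/2 and 3/2, plus the complex pair x = ±i*sqrt(2)
```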
## C - Bowls and Dishes

We have $N$ dishes numbered $1, 2, \dots, N$ and $M$ conditions numbered $1, 2, \dots, M$. Condition $i$ is satisfied when both Dish $A_i$ and Dish $B_i$ have (one or more) balls on them. There are $K$ people numbered $1, 2, \dots, K$. Person $i$ will put a ball on Dish $C_i$ or Dish $D_i$. At most how many conditions will be satisfied?

Constraints
• All values in input are integers.
• $2 ≤ N ≤ 100$
• $1 ≤ M ≤ 100$
• $1 ≤ A_i < B_i ≤ N$
• $1 ≤ K ≤ 16$
• $1 ≤ C_i < D_i ≤ N$

Sample Input 1 Sample Output 1

## D - Staircase Sequences

How many arithmetic progressions consisting of integers with a common difference of $1$ have a sum of $N$?

Constraints
• $1 ≤ N ≤ 10^{12}$
• $N$ is an integer.

Sample Input 1
12
Sample Output 1
4

We have four such progressions:
• $[12]$
• $[3, 4, 5]$
• $[-2, -1, 0, 1, 2, 3, 4, 5]$
• $[-11, -10, -9, \dots, 10, 11, 12]$

## E - Magical Ornament

There are $N$ kinds of magical gems, numbered $1, 2, \ldots, N$, distributed in the AtCoder Kingdom. Takahashi is trying to make an ornament by arranging gems in a row. For some pairs of gems, we can put the two gems next to each other; for other pairs, we cannot. We have $M$ pairs for which the two gems can be adjacent: (Gem $A_1$, Gem $B_1$), (Gem $A_2$, Gem $B_2$), $\ldots$, (Gem $A_M$, Gem $B_M$). For the other pairs, the two gems cannot be adjacent. (Order does not matter in these pairs.) Determine whether it is possible to form a sequence of gems that has one or more gems of each of the kinds $C_1, C_2, \dots, C_K$. If the answer is yes, find the minimum number of gems needed to form such a sequence.

Constraints
• All values in input are integers.
• $1 ≤ N ≤ 10^5$
• $0 ≤ M ≤ 10^5$
• $1 ≤ A_i < B_i ≤ N$
• If $i ≠ j$, then $(A_i, B_i) ≠ (A_j, B_j)$.
• $1 ≤ K ≤ 17$
• $1 ≤ C_1 < C_2 < \dots < C_K ≤ N$

Sample Input 1 Sample Output 1

### Solution 1

BFS + bitmask DP. Since $K$ is small, we can turn the given input into an undirected graph and compute the shortest paths between the key gems, running one plain BFS from each key gem, for a time complexity of $O(K(N+M))$. This yields $dist[i][j]$, the shortest path between the $i$-th and the $j$-th key gem. Then a bitmask DP over subsets of key gems finds the answer (a Python sketch of this approach is given after Problem F below):

• Base case: $dp[1 << i][i] = 1$: only the single key gem $i$ is chosen, so the sequence has length 1.
• Transition: $dp[mask | (1 << j)][j] = \min(dp[mask][i] + dist[i][j])$: enumerate every subset of chosen key gems and every key gem $i$ the sequence can end at, and relax towards the minimum.
• Answer: $\min_i(dp[(1 << K) - 1][i])$: all key gems are chosen and the sequence ends at some key gem; take the minimum over the endings.

## F - Shift and Inversions

Given is a sequence $A = [a_0, a_1, a_2, \dots, a_{N-1}]$ that is a permutation of $0, 1, 2, \dots, N - 1$. For each $k = 0, 1, 2, \dots, N - 1$, find the inversion number of the sequence $B = [b_0, b_1, b_2, \dots, b_{N-1}]$ defined as $b_i = a_{(i+k) \bmod N}$.

Constraints
• All values in input are integers.
• $2 ≤ N ≤ 3 × 10^5$
• $a_0, a_1, a_2, \dots, a_{N−1}$ is a permutation of $0, 1, 2, \dots, N−1$.

Sample Input 1
4
0 1 2 3
Sample Output 1
0
3
4
3

We have $A = [0, 1, 2, 3]$.
• For $k = 0$, the inversion number of $B = [0, 1, 2, 3]$ is $0$.
• For $k = 1$, the inversion number of $B = [1, 2, 3, 0]$ is $3$.
• For $k = 2$, the inversion number of $B = [2, 3, 0, 1]$ is $4$.
• For $k = 3$, the inversion number of $B = [3, 0, 1, 2]$ is $3$.
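A compact Python sketch of Solution 1 for Problem E (the function name and argument conventions are illustrative, not from the problem statement):

```python
from collections import deque

def min_ornament_length(N, edges, keys):
    """Minimum number of gems in a sequence containing every key gem,
    or -1 if impossible: BFS distances between key gems + bitmask DP."""
    K = len(keys)
    adj = [[] for _ in range(N + 1)]
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)

    INF = float('inf')
    # dist[i][j]: shortest path between the i-th and j-th key gems,
    # via one BFS per key gem -- O(K * (N + M)).
    dist = [[INF] * K for _ in range(K)]
    for i, src in enumerate(keys):
        d = [INF] * (N + 1)
        d[src] = 0
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if d[v] == INF:
                    d[v] = d[u] + 1
                    q.append(v)
        for j, dst in enumerate(keys):
            dist[i][j] = d[dst]

    # dp[mask][i]: shortest sequence covering the key gems in `mask`,
    # currently ending at the i-th key gem.
    dp = [[INF] * K for _ in range(1 << K)]
    for i in range(K):
        dp[1 << i][i] = 1
    for mask in range(1 << K):
        for i in range(K):
            if dp[mask][i] == INF:
                continue
            for j in range(K):
                if mask >> j & 1 or dist[i][j] == INF:
                    continue
                cand = dp[mask][i] + dist[i][j]
                if cand < dp[mask | (1 << j)][j]:
                    dp[mask | (1 << j)][j] = cand
    best = min(dp[(1 << K) - 1])
    return -1 if best == INF else best
```

The DP performs $O(2^K K^2)$ relaxations, which is fine for $K ≤ 17$.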
Margin Call Price when Short Selling Calculator

A Margin Call occurs when the value of an investor's margin account falls below the broker's required amount. An investor's margin account contains securities bought with borrowed money (typically a combination of the investor's own money and money borrowed from the investor's broker). A margin call refers specifically to a broker's demand that an investor deposit additional money or securities into the account so that it is brought up to the minimum value, known as the maintenance margin. For a short sale, the margin call is triggered when the stock price rises to:

$Margin\ Call\ Price\ =\ {Stock\ Price}\times(\frac{1+(\frac{Initial\ Margin\%}{100})}{1+(\frac{Maintenance\ Margin\%}{100})})$
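A quick sketch of the formula in Python (function and variable names are illustrative):

```python
def margin_call_price(stock_price, initial_margin_pct, maintenance_margin_pct):
    """Price at which a short position triggers a margin call,
    following the formula above (margins given in percent)."""
    return (stock_price * (1 + initial_margin_pct / 100)
            / (1 + maintenance_margin_pct / 100))

# Example: stock shorted at $100 with 50% initial and 30% maintenance margin
print(margin_call_price(100, 50, 30))  # ~115.38
```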
All numbers with specific difference

Basic Accuracy: 33.33% Submissions: 30 Points: 1

Given a positive number N and a number D, find the count of positive numbers smaller than or equal to N such that the difference between the number and the sum of its digits is greater than or equal to the given value D.

Example 1:
Input: N = 13, D = 2
Output: 4
Explanation: There are 4 numbers satisfying the conditions. These are 10, 11, 12 and 13.

Example 2:
Input: N = 14, D = 3
Output: 5
Explanation: There are 5 numbers satisfying the conditions. These are 10, 11, 12, 13 and 14.

Your Task:
You don't need to read input or print anything. Your task is to complete the function getCount() which takes 2 integers N and D as input and returns the answer.

Expected Time Complexity: O(log(N))
Expected Auxiliary Space: O(1)

Constraints: 1 <= N <= 10^16
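Since n - digitsum(n) never decreases as n grows, the smallest valid number can be found with a binary search, matching the O(log N) target. A hedged sketch (assuming the signature from the statement):

```python
def getCount(N, D):
    """Count 1 <= n <= N with n - digitsum(n) >= D."""
    def ok(n):
        # n - digitsum(n) is non-decreasing in n, so ok() is monotone.
        return n - sum(int(c) for c in str(n)) >= D

    lo, hi = 1, N + 1  # hi = N + 1 encodes "no valid n <= N"
    while lo < hi:
        mid = (lo + hi) // 2
        if ok(mid):
            hi = mid
        else:
            lo = mid + 1
    return N - lo + 1 if lo <= N else 0

print(getCount(13, 2))  # 4, matching Example 1
```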
6.2.1. ModularAnalysis

This module defines wrapper functions around the analysis modules.

Adds the InclusiveDstarReconstruction module to the given path. This module creates a D* particle list by estimating the D* four momenta from slow pions, specified by a given cut. The D* energy is approximated as E(D*) = m(D*)/(m(D*) - m(D)) * E(pi). The absolute value of the D* momentum is calculated using the D* PDG mass and the direction is collinear to the slow pion direction. The charge of the given pion list has to be consistent with the D* charge.

Parameters
• decayString – Decay string, must be of form D* -> pi
• slowPionCut – Cut applied to the input pion list to identify slow pions
• DstarCut – Cut applied to the output D* list
• path – the module is added to this path

Adds photon Data/MC detection efficiency ratio weights to the specified particle list.

Parameters
• inputListNames (list(str)) – input particle list names
• tableName – taken from database with appropriate name
• path (basf2.Path) – module is added to this path

Loads the ROE object of a particle and creates a ROE mask with a specific name. It applies selection criteria for tracks and eclClusters which will be used by variables in ROEVariables.cc.

• append a ROE mask with all tracks in ROE coming from the IP region

appendROEMask('B+:sig', 'IPtracks', '[dr < 2] and [abs(dz) < 5]', path=mypath)

• append a ROE mask with only ECL-based particles that pass as good photon candidates

goodPhotons = 'inCDCAcceptance and clusterErrorTiming < 1e6 and [clusterE1E9 > 0.4 or E > 0.075]'

Parameters
• list_name – name of the input ParticleList
• trackSelection – decay string for the track-based particles in ROE
• eclClusterSelection – decay string for the ECL-based particles in ROE
• klmClusterSelection – decay string for the KLM-based particles in ROE
• path – modules are added to this path

Loads the ROE object of a particle and creates a ROE mask with a specific name. It applies selection criteria for track-, ECL- and KLM-based particles which will be used by ROE variables. The multiple ROE masks with their own selection criteria are specified via a list of tuples (mask_name, trackParticleSelection, eclParticleSelection, klmParticleSelection) or (mask_name, trackSelection, eclClusterSelection) in the case with fractions.

• Example for two tuples, one with and one without fractions

ipTracks = ('IPtracks', '[dr < 2] and [abs(dz) < 5]', '', '')
goodPhotons = 'inCDCAcceptance and [clusterErrorTiming < 1e6] and [clusterE1E9 > 0.4 or E > 0.075]'
goodROEGamma = ('ROESel', '[dr < 2] and [abs(dz) < 5]', goodPhotons, '')
goodROEKLM = ('IPtracks', '[dr < 2] and [abs(dz) < 5]', '', 'nKLMClusterTrackMatches == 0')

Parameters
• list_name – name of the input ParticleList
• path – modules are added to this path

modularAnalysis.applyChargedPidMVA(particleLists, path, trainingMode, chargeIndependent=False, binaryHypoPDGCodes=(0, 0))[source]

Use an MVA to perform particle identification for charged stable particles, using the ChargedPidMVA module. The module decorates Particle objects in the input ParticleList(s) with variables containing the appropriate MVA score, which can be used to select candidates by placing a cut on it.

Note The MVA algorithm used is a gradient boosted decision tree (TMVA 4.3.0, ROOT 6.20/04).

The module can perform either 'binary' PID between input S, B particle mass hypotheses according to the following scheme:
• e (11) vs. pi (211)
• mu (13) vs. pi (211)
• pi (211) vs. K (321)
• K (321) vs. pi (211)
, or 'global' PID, namely "one-vs-others" separation. The latter exploits an MVA algorithm trained in multi-class mode, and it's the default behaviour. Currently, the multi-class training separates the following standard charged hypotheses:
• e (11), mu (13), pi (211), K (321)

Warning In order to run the ChargedPidMVA and ensure the most up-to-date MVA training weights are applied, it is necessary to append the latest analysis global tag (GT) to the steering script.

Parameters
• particleLists (list(str)) – list of names of ParticleList objects for charged stable particles. The charge-conjugate ParticleLists will also be processed automatically.
• path (basf2.Path) – the module is added to this path.
• trainingMode (Belle2.ChargedPidMVAWeights.ChargedPidMVATrainingMode) – enum identifier of the training mode. Needed to pick up the correct payload from the DB. Available choices:
• c_Classification=0
• c_Multiclass=1
• c_ECL_Classification=2
• c_ECL_Multiclass=3
• c_PSD_Classification=4
• c_PSD_Multiclass=5
• c_ECL_PSD_Classification=6
• c_ECL_PSD_Multiclass=7
• chargeIndependent (bool, optional) – use a BDT trained on a sample of inclusively charged particles.
• binaryHypoPDGCodes (tuple(int, int), optional) – the pdgIds of the signal, background mass hypothesis. Required only for binary PID mode.

modularAnalysis.applyCuts(list_name, cut, path)[source]

Removes particle candidates from list_name that do not pass cut (given selection criteria).

Example: require energetic pions safely inside the CDC

applyCuts("pi+:mypions", "[E > 2] and thetaInCDCAcceptance", path=mypath)

Warning You must use square braces [ and ] for conditional statements.

Parameters
• list_name (str) – input ParticleList name
• cut (str) – Candidates that do not pass these selection criteria are removed from the ParticleList
• path (basf2.Path) – modules are added to this path

modularAnalysis.applyEventCuts(cut, path)[source]

Removes events that do not pass the cut (given selection criteria).

Example: select continuum events (in MC only) with more than 5 tracks

applyEventCuts("[nTracks > 5] and [isContinuumEvent]", path=mypath)

Warning You must use square braces [ and ] for conditional statements.

Parameters
• cut (str) – Events that do not pass these selection criteria are skipped
• path (basf2.Path) – modules are added to this path

modularAnalysis.applyRandomCandidateSelection(particleList, path=None)[source]

If there are multiple candidates in the provided particleList, all but one of them are removed randomly. This is done on an event-by-event basis.

Parameters
• particleList – ParticleList for which the random candidate selection should be applied
• path – module is added to this path

Creates for each Particle in the given ParticleList a ContinuumSuppression dataobject and makes a basf2 relation between them.

Parameters
• list_name – name of the input ParticleList
• path – modules are added to this path

modularAnalysis.buildEventKinematics(inputListNames=None, default_cleanup=True, custom_cuts=None, chargedPIDPriors=None, fillWithMostLikely=False, path=None)[source]

Calculates the global kinematics of the event (visible energy, missing momentum, missing mass…) using the ParticleLists provided. If no ParticleList is provided, default ParticleLists are used (all tracks and all ECL hits without an associated track). The visible energy and missing values are stored in an EventKinematics dataobject.

Parameters
• inputListNames – list of ParticleLists used to calculate the global event kinematics.
If the list is empty, the default ParticleLists pi+:evtkin and gamma:evtkin are filled.
• fillWithMostLikely – if True, the module uses the most likely particle mass hypothesis for charged particles according to the PID likelihood and the option inputListNames will be ignored.
• chargedPIDPriors – The prior PID fractions, that are used to regulate amount of certain charged particle species, should be a list of six floats if not None. The order of particle types is the following: [e-, mu-, pi-, K-, p+, d+]
• default_cleanup – if True and either inputListNames empty or fillWithMostLikely True, default clean up cuts are applied
• custom_cuts – tuple of selection cut strings of form (trackCuts, photonCuts), default is None, which results in standard predefined selection cuts
• path – modules are added to this path

modularAnalysis.buildEventKinematicsFromMC(inputListNames=None, selectionCut='', path=None)[source]

Calculates the global kinematics of the event (visible energy, missing momentum, missing mass…) using generated particles. If no ParticleList is provided, default generated ParticleLists are used.

Parameters
• inputListNames – list of ParticleLists used to calculate the global event kinematics. If the list is empty, default ParticleLists are filled.
• selectionCut – optional selection cuts
• path – Path to append the eventKinematics module to.

modularAnalysis.buildEventShape(inputListNames=None, default_cleanup=True, custom_cuts=None, allMoments=False, cleoCones=True, collisionAxis=True, foxWolfram=True, harmonicMoments=True, jets=True, sphericity=True, thrust=True, checkForDuplicates=False, path=None)[source]

Calculates the event-level shape quantities (thrust, sphericity, Fox-Wolfram moments…) using the particles in the lists provided by the user. If no particle list is provided, the function will internally create a list of good tracks and a list of good photons with (optionally) minimal quality cuts. The results of the calculation are then stored into the EventShapeContainer dataobject, and are accessible using the variables of the EventShape group. The user can switch the calculation of certain quantities on or off to save computing time. By default the calculation of the high-order moments (5-8) is turned off. Switching off an option will make the corresponding variables not available.

Warning The user can provide as many particle lists as needed, using also combined particles, but the function will always assume that the lists are independent. If the lists provided by the user contain several times the same track (either with different mass hypothesis, or once as an independent particle and once as daughter of a combined particle) the results won't be reliable. A basic check for duplicates is available setting the checkForDuplicate flags, but is usually quite time consuming.

Parameters
• inputListNames – List of ParticleLists used to calculate the event shape variables. If the list is empty the default particleLists pi+:evtshape and gamma:evtshape are filled.
• default_cleanup – If True, applies standard cuts on pt and cosTheta when defining the internal lists. This option is ignored if the particleLists are provided by the user.
• custom_cuts – tuple of selection cut strings of form (trackCuts, photonCuts), default is None, which results in standard predefined selection cuts
• path – Path to append the eventShape modules to.
• thrust – Enables the calculation of thrust-related quantities (CLEO cones, Harmonic moments, jets).
• collisionAxis – Enables the calculation of the quantities related to the collision axis.
• foxWolfram – Enables the calculation of the Fox-Wolfram moments.
• harmonicMoments – Enables the calculation of the Harmonic moments with respect to both the thrust axis and, if collisionAxis = True, the collision axis.
• allMoments – If True, calculates also the FW and harmonic moments from order 5 to 8 instead of the low-order ones only.
• cleoCones – Enables the calculation of the CLEO cones with respect to both the thrust axis and, if collisionAxis = True, the collision axis.
• jets – Enables the calculation of the hemisphere momenta and masses. Requires thrust = True.
• sphericity – Enables the calculation of the sphericity-related quantities.
• checkForDuplicates – Perform a check for duplicate particles before adding them. This option is quite time consuming; instead of using it, consider sanitizing the lists you are passing to the function.

Creates for each Particle in the given ParticleList a RestOfEvent dataobject.

Parameters
• target_list_name – name of the input ParticleList
• path – modules are added to this path

modularAnalysis.buildRestOfEvent(target_list_name, inputParticlelists=None, fillWithMostLikely=True, chargedPIDPriors=None, path=None)[source]

Creates for each Particle in the given ParticleList a RestOfEvent dataobject and makes a basf2 relation between them. User can provide additional particle lists with a different particle hypothesis like ['K+:good, e+:good'], etc.

Parameters
• target_list_name – name of the input ParticleList
• inputParticlelists – list of user-defined input particle list names, which serve as source of particles to build the ROE, the FSP particles from target_list_name are automatically excluded from the ROE object
• fillWithMostLikely – By default the module uses the most likely particle mass hypothesis for charged particles based on the PID likelihood. Turn this behavior off if you want to configure your own input particle lists.
• chargedPIDPriors – The prior PID fractions, that are used to regulate the amount of certain charged particle species, should be a list of six floats if not None. The order of particle types is the following: [e-, mu-, pi-, K-, p+, d+]
• path – modules are added to this path

modularAnalysis.buildRestOfEventFromMC(target_list_name, inputParticlelists=None, path=None)[source]

Creates for each Particle in the given ParticleList a RestOfEvent dataobject.

Parameters
• target_list_name – name of the input ParticleList
• inputParticlelists – list of input particle list names, which serve as a source of particles to build ROE, the FSP particles from target_list_name are excluded from ROE object
• path – modules are added to this path

modularAnalysis.calculateDistance(list_name, decay_string, mode='vertextrack', path=None)[source]

Calculates the distance between two vertices, the distance of closest approach between a vertex and a track, or the distance of closest approach between a vertex and a btube. For tracks, this calculation ignores the track curvature; this is negligible for small distances. The user should use extraInfo(CalculatedDistance) to get it.
A full example steering file is at analysis/tests/test_DistanceCalculator.py

Example

from modularAnalysis import calculateDistance
calculateDistance('list_name', 'decay_string', "mode", path=user_path)

Parameters
• list_name – name of the input ParticleList
• decay_string – selects the particles between which the distance of closest approach will be calculated
• mode – Specifies how the distance is calculated:
vertextrack: calculate the distance of closest approach between a track and a vertex, taking the first candidate as vertex (default)
trackvertex: calculate the distance of closest approach between a track and a vertex, taking the first candidate as track
2tracks: calculates the distance of closest approach between two tracks
2vertices: calculates the distance between two vertices
vertexbtube: calculates the distance of closest approach between a vertex and btube
trackbtube: calculates the distance of closest approach between a track and btube
• path – modules are added to this path

modularAnalysis.calculateTrackIsolation(list_name, path, *detectors, use2DRhoPhiDist=False, alias=None)[source]

Given a list of charged stable particles, compute variables that quantify "isolation" of the associated tracks. Currently, a proxy for isolation is defined as the 3D distance (or optionally, a 2D distance projecting on r-phi) of each particle's track to its closest neighbour at a given detector entry surface.

Parameters
• list_name (str) – name of the input ParticleList. It must be a list of charged stable particles as defined in Const::chargedStableSet. The charge-conjugate ParticleList will also be processed automatically.
• path (basf2.Path) – the module is added to this path.
• use2DRhoPhiDist (Optional[bool]) – if true, will calculate the pair-wise track distance as the chord length on the (rho, phi) projection. By default, a 3D distance is calculated.
• alias (Optional[str]) – An alias to the extraInfo variable computed by the TrackIsoCalculator module. Please note, for each input detector a variable is calculated, and the detector's name is appended to the alias to distinguish them.
• *detectors – detectors at whose entry surface track isolation variables will be calculated. Choose among: "CDC", "PID", "ECL", "KLM" (NB: 'PID' indicates TOP+ARICH entry surface.)

modularAnalysis.combineAllParticles(inputParticleLists, outputList, cut='', writeOut=False, path=None)[source]

Creates a new Particle as the combination of all Particles from all provided inputParticleLists. However, each particle is used only once (even if duplicates are provided) and the combination has to pass the specified selection criteria to be saved in the newly created (mother) ParticleList.

Parameters
• inputParticleLists – List of input particle lists which are combined to the new Particle
• outputList – Name of the particle combination created with this module
• cut – created (mother) Particle is added to the mother ParticleList if it passes these given cuts (in VariableManager style) and is rejected otherwise
• writeOut – whether RootOutput module should save the created ParticleList
• path – module is added to this path

modularAnalysis.copyList(outputListName, inputListName, writeOut=False, path=None)[source]

Copy all Particle indices from input ParticleList to the output ParticleList. Note that the Particles themselves are not copied. The original and copied ParticleLists will point to the same Particles.
Parameters
• outputListName – copied ParticleList
• inputListName – original ParticleList to be copied
• writeOut – whether RootOutput module should save the created ParticleList
• path – modules are added to this path

modularAnalysis.copyLists(outputListName, inputListNames, writeOut=False, path=None)[source]

Copy all Particle indices from all input ParticleLists to the single output ParticleList. Note that the Particles themselves are not copied. The original and copied ParticleLists will point to the same Particles. Duplicates are removed based on the first-come, first-served principle. Therefore, the order of the input ParticleLists matters. If you want to select the best duplicate based on another criterion, have a look at the function mergeListsWithBestDuplicate.

Note Two particles that differ only by the order of their daughters are considered duplicates and one of them will be removed.

Parameters
• outputListName – copied ParticleList
• inputListNames – list of original ParticleLists to be copied
• writeOut – whether RootOutput module should save the created ParticleList
• path – modules are added to this path

modularAnalysis.copyParticles(outputListName, inputListName, writeOut=False, path=None)[source]

Create copies of Particles given in the input ParticleList and add them to the output ParticleList. The existing relations of the original Particle (or its (grand-)^n-daughters) are copied as well. Note that only the relation is copied and that the related object is not. Copied particles are therefore related to the same object as the original ones.

Parameters
• outputListName – new ParticleList filled with copied Particles
• inputListName – input ParticleList with original Particles
• writeOut – whether RootOutput module should save the created ParticleList
• path – modules are added to this path

modularAnalysis.correctBrems(outputList, inputList, gammaList, maximumAcceptance=3.0, multiplePhotons=False, usePhotonOnlyOnce=True, writeOut=False, path=None)[source]

For each particle in the given inputList, copies it to the outputList and adds the 4-vector of the photon(s) in the gammaList which has(have) a weighted named relation to the particle's track, set by the ECLTrackBremFinder module during reconstruction.

Warning This can only work if the mdst file contains the Bremsstrahlung named relation. Official MC samples up to and including MC12 and proc9 do not contain this. Newer production campaigns (from proc10 and MC13) do. However, studies by the tau WG revealed that the cuts applied by the ECLTrackBremFinder module are too tight. These will be loosened but this will only have effect with proc13 and MC15. If your analysis is very sensitive to the Bremsstrahlung corrections, it is advised to use correctBremsBelle.

Information: A detailed description of how the weights are set can be found directly at the documentation of the BremsFinder module. Please note that a new particle is always generated, with the old particle and - if found - one or more photons as daughters. The inputList should contain particles with associated tracks. Otherwise, the module will exit with an error. The gammaList should contain photons. Otherwise, the module will exit with an error.

Parameters
• outputList – The output particle list name containing the corrected particles
• inputList – The initial particle list name containing the particles to correct. It should already exist.
• gammaList – The photon list containing possibly bremsstrahlung photons; it should already exist.
• maximumAcceptance – Maximum value of the relation weight. Should be a number between [0, 3)
• multiplePhotons – Whether to use only one photon (the one with the smallest acceptance) or as many as possible
• usePhotonOnlyOnce – If true, each brems candidate is used to correct only the track with the smallest relation weight
• writeOut – Whether RootOutput module should save the created outputList
• path – The module is added to this path

modularAnalysis.correctBremsBelle(outputListName, inputListName, gammaListName, multiplePhotons=True, angleThreshold=0.05, writeOut=False, path=None)[source]

Run the Belle-like brems finding on the inputListName of charged particles. Adds all photons in gammaListName that are within angleThreshold to a copy of the charged particle.

Tip Studies by the tau WG show that using a rather wide opening angle (up to 0.2 rad) and rather low energetic photons results in good correction. However, this should only serve as a starting point for your own studies because the optimal criteria are likely mode-dependent

Parameters
• outputListName (str) – The output charged particle list containing the corrected charged particles
• inputListName (str) – The initial charged particle list containing the charged particles to correct.
• gammaListName (str) – The gamma list containing possibly radiative gammas; it should already exist.
• multiplePhotons (bool) – How many photons should be added to the charged particle? nearest one -> False, add all the photons within the cone -> True
• angleThreshold (float) – The maximum angle in radians between the charged particle and the (radiative) gamma to be accepted.
• writeOut (bool) – whether RootOutput module should save the created ParticleList
• path (basf2.Path) – modules are added to this path

modularAnalysis.correctEnergyBias(inputListNames, tableName, path=None)[source]

Scales the energy of the particles according to the scaling factor. If the particle list contains composite particles, the energies of the daughters are scaled. Subsequently, the energy of the mother particle is updated as well.

Parameters
• inputListNames (list(str)) – input particle list names
• tableName – stored in localdb and created using ParticleWeightingLookUpCreator
• path (basf2.Path) – module is added to this path

modularAnalysis.cutAndCopyList(outputListName, inputListName, cut, writeOut=False, path=None)[source]

Copy candidates from inputListName to outputListName if they pass cut (given selection criteria).

Note Note the Particles themselves are not copied. The original and copied ParticleLists will point to the same Particles.

Example: require energetic pions safely inside the CDC

cutAndCopyList("pi+:energeticPions", "pi+:loose", "[E > 2] and thetaInCDCAcceptance", path=mypath)

Warning You must use square braces [ and ] for conditional statements.

Parameters
• outputListName (str) – the new ParticleList name
• inputListName (str) – input ParticleList name
• cut (str) – Candidates that do not pass these selection criteria are removed from the ParticleList
• writeOut (bool) – whether RootOutput module should save the created ParticleList
• path (basf2.Path) – modules are added to this path

modularAnalysis.cutAndCopyLists(outputListName, inputListNames, cut, writeOut=False, path=None)[source]

Copy candidates from all lists in inputListNames to outputListName if they pass cut (given selection criteria).

Note Note that the Particles themselves are not copied. The original and copied ParticleLists will point to the same Particles.
Example: require energetic pions safely inside the CDC

cutAndCopyLists("pi+:energeticPions", ["pi+:good", "pi+:loose"], "[E > 2] and thetaInCDCAcceptance", path=mypath)

Warning You must use square braces [ and ] for conditional statements.

Parameters
• outputListName (str) – the new ParticleList name
• inputListName (list(str)) – list of input ParticleList names
• cut (str) – Candidates that do not pass these selection criteria are removed from the ParticleList
• writeOut (bool) – whether RootOutput module should save the created ParticleList
• path (basf2.Path) – modules are added to this path

This function is used to apply particle list specific cuts on one or more ROE masks (track or eclCluster). With this function one can DISCARD the tracks/eclclusters used in particles from the provided particle list. This function should be executed only in the for_each roe path for the current ROE object. To avoid unnecessary computation, the input particle list should only contain particles from ROE (use cut 'isInRestOfEvent == 1'). To update the ECLCluster masks, the input particle list should be a photon particle list (e.g. 'gamma:someLabel'). To update the Track masks, the input particle list should be a charged pion particle list (e.g. 'pi+:someLabel'). Updating a non-existing mask will create a new one.

• discard tracks that were used in the provided particle list

discardFromROEMasks('pi+:badTracks', 'mask', '', path=mypath)

• discard clusters that were used in the provided particle list and pass a cut, apply to several masks

discardFromROEMasks('gamma:badClusters', ['mask1', 'mask2'], 'E < 0.1', path=mypath)

Parameters
• list_name – name of the input ParticleList
• cut_string – decay string with which the mask will be updated
• path – modules are added to this path

modularAnalysis.fillConvertedPhotonsList(decayString, cut, writeOut=False, path=None)[source]

Creates a photon Particle object for each e+e- combination in the V0 StoreArray.

Note You must specify the daughter ordering.

fillConvertedPhotonsList('gamma:converted -> e+ e-', '', path=mypath)

Parameters
• decayString (str) – Must be gamma to an e+e- pair. You must specify the daughter ordering. Will also determine the name of the particleList.
• cut (str) – Particles need to pass these selection criteria to be added to the ParticleList
• writeOut (bool) – whether RootOutput module should save the created ParticleList
• path (basf2.Path) – modules are added to this path

Creates Particles of the desired type from the corresponding mdst dataobjects, loads them to the StoreArray<Particle> and fills the ParticleList. If you are unsure what selection you want, you might like to see the Standard Particles functions. The type of the particles to be loaded is specified via the decayString module parameter. The type of the mdst dataobject that is used as an input is determined from the type of the particle. The following types of the particles can be loaded:

• charged final state particles (input mdst type = Tracks)
• e+, mu+, pi+, K+, p, deuteron (and charge conjugated particles)
• neutral final state particles
• "gamma" (input mdst type = ECLCluster)
• "K_S0", "Lambda0" (input mdst type = V0)
• "K_L0" (input mdst type = KLMCluster or ECLCluster)

Note For "K_S0" and "Lambda0" you must specify the daughter ordering. For example, to load V0s as $$\Lambda^0\to p^+\pi^-$$ decays from V0s:

fillParticleList('Lambda0 -> p+ pi-', '0.9 < M < 1.3', path=mypath)

Tip Gammas can also be loaded from KLMClusters by explicitly setting the parameter loadPhotonsFromKLM to True.
However, this should only be done in selected use-cases and the effect should be studied carefully.

Tip For "K_L0" it is now possible to load from ECLClusters; to revert to the old (Belle) behavior, you can require 'isFromKLM > 0'.

fillParticleList('K_L0', 'isFromKLM > 0', path=mypath)

Parameters
• decayString (str) – Type of Particle and determines the name of the ParticleList. If the input MDST type is V0 the whole decay chain needs to be specified, so that the user decides and controls the daughters' order (e.g. K_S0 -> pi+ pi-)
• cut (str) – Particles need to pass these selection criteria to be added to the ParticleList
• writeOut (bool) – whether RootOutput module should save the created ParticleList
• path (basf2.Path) – modules are added to this path
• enforceFitHypothesis (bool) – If true, Particles will be created only for the tracks which have been fitted using a mass hypothesis of the exact type passed to fillParticleLists(). If enforceFitHypothesis is False (the default) the next closest fit hypothesis in terms of mass difference will be used if the fit using exact particle type is not available.
• loadPhotonsFromKLM (bool) – If true, photon candidates will be created from KLMClusters as well.
• loadPhotonBeamBackgroundMVA (bool) – If true, photon candidates will be assigned a beam background probability.

modularAnalysis.fillParticleListFromMC(decayString, cut, addDaughters=False, skipNonPrimaryDaughters=False, writeOut=False, path=None)[source]

Creates a Particle object for each MCParticle of the desired type found in the StoreArray<MCParticle>, loads them to the StoreArray<Particle> and fills the ParticleList. The type of the particles to be loaded is specified via the decayString module parameter.

Parameters
• decayString – specifies type of Particles and determines the name of the ParticleList
• cut – Particles need to pass these selection criteria to be added to the ParticleList
• addDaughters – adds the bottom part of the decay chain of the particle to the datastore and sets mother-daughter relations
• skipNonPrimaryDaughters – if true, skip non primary daughters, useful to study final state daughter particles
• writeOut – whether RootOutput module should save the created ParticleList
• path – modules are added to this path

modularAnalysis.fillParticleListFromROE(decayString, cut, maskName='', sourceParticleListName='', useMissing=False, writeOut=False, path=None)[source]

Creates a Particle object for each ROE of the desired type found in the StoreArray<RestOfEvent>, loads them to the StoreArray<Particle> and fills the ParticleList. If useMissing is True, then the missing momentum is used instead of ROE. The type of the particles to be loaded is specified via the decayString module parameter.

Parameters
• decayString – specifies type of Particles and determines the name of the ParticleList.
Source ROEs can be taken as a daughter list, for example: 'B0:tagFromROE -> B0:signal'
• cut – Particles need to pass these selection criteria to be added to the ParticleList
• sourceParticleListName – Use related ROEs to this particle list as a source
• useMissing – Use missing momentum instead of ROE momentum
• writeOut – whether RootOutput module should save the created ParticleList
• path – modules are added to this path

modularAnalysis.fillParticleListWithTrackHypothesis(decayString, cut, hypothesis, writeOut=False, enforceFitHypothesis=False, path=None)[source]

As fillParticleList, but if used for a charged FSP, loads the particle with the requested hypothesis if available

Parameters
• decayString – specifies type of Particles and determines the name of the ParticleList
• cut – Particles need to pass these selection criteria to be added to the ParticleList
• hypothesis – the PDG code of the desired track hypothesis
• writeOut – whether RootOutput module should save the created ParticleList
• enforceFitHypothesis – If true, Particles will be created only for the tracks which have been fitted using a mass hypothesis of the exact type passed to fillParticleLists(). If enforceFitHypothesis is False (the default) the next closest fit hypothesis in terms of mass difference will be used if the fit using exact particle type is not available.
• path – modules are added to this path

Creates Particles of the desired types from the corresponding mdst dataobjects, loads them to the StoreArray<Particle> and fills the ParticleLists. The multiple ParticleLists with their own selection criteria are specified via list tuples (decayString, cut), for example

kaons = ('K+:mykaons', 'kaonID>0.1')
pions = ('pi+:mypions','pionID>0.1')
fillParticleLists([kaons, pions], path=mypath)

If you are unsure what selection you want, you might like to see the Standard Particles functions. The type of the particles to be loaded is specified via the decayString module parameter. The type of the mdst dataobject that is used as an input is determined from the type of the particle. The following types of the particles can be loaded:

• charged final state particles (input mdst type = Tracks)
• e+, mu+, pi+, K+, p, deuteron (and charge conjugated particles)
• neutral final state particles
• "gamma" (input mdst type = ECLCluster)
• "K_S0", "Lambda0" (input mdst type = V0)
• "K_L0" (input mdst type = KLMCluster or ECLCluster)

Note For "K_S0" and "Lambda0" you must specify the daughter ordering. For example, to load V0s as $$\Lambda^0\to p^+\pi^-$$ decays from V0s:

v0lambdas = ('Lambda0 -> p+ pi-', '0.9 < M < 1.3')
fillParticleLists([kaons, pions, v0lambdas], path=mypath)

Tip Gammas can also be loaded from KLMClusters by explicitly setting the parameter loadPhotonsFromKLM to True. However, this should only be done in selected use-cases and the effect should be studied carefully.

Tip For "K_L0" it is now possible to load from ECLClusters; to revert to the old (Belle) behavior, you can require 'isFromKLM > 0'.

klongs = ('K_L0', 'isFromKLM > 0')
fillParticleLists([kaons, pions, klongs], path=mypath)

Parameters
• decayStringsWithCuts (list) – A list of python tuples of (decayString, cut). The decay string determines the type of Particle and the name of the ParticleList. If the input MDST type is V0 the whole decay chain needs to be specified, so that the user decides and controls the daughters' order (e.g. K_S0 -> pi+ pi-) The cut is the selection criteria to be added to the ParticleList. It can be an empty string.
• writeOut (bool) – whether RootOutput module should save the created ParticleList
• path (basf2.Path) – modules are added to this path
• enforceFitHypothesis (bool) – If true, Particles will be created only for the tracks which have been fitted using a mass hypothesis of the exact type passed to fillParticleLists(). If enforceFitHypothesis is False (the default) the next closest fit hypothesis in terms of mass difference will be used if the fit using exact particle type is not available.
• loadPhotonsFromKLM (bool) – If true, photon candidates will be created from KLMClusters as well.
• loadPhotonBeamBackgroundMVA (bool) – If true, photon candidates will be assigned a beam background probability.

Creates a Particle object for each MCParticle of the desired type found in the StoreArray<MCParticle>, loads them to the StoreArray<Particle> and fills the ParticleLists. The types of the particles to be loaded are specified via the (decayString, cut) tuples given in a list. For example:

kaons = ('K+:gen', '')
pions = ('pi+:gen', 'pionID>0.1')
fillParticleListsFromMC([kaons, pions], path=mypath)

Parameters
• decayString – specifies type of Particles and determines the name of the ParticleList
• cut – Particles need to pass these selection criteria to be added to the ParticleList
• addDaughters – adds the bottom part of the decay chain of the particle to the datastore and sets mother-daughter relations
• skipNonPrimaryDaughters – if true, skip non primary daughters, useful to study final state daughter particles
• writeOut – whether RootOutput module should save the created ParticleList
• path – modules are added to this path

modularAnalysis.fillSignalSideParticleList(outputListName, decayString, path)[source]

This function should only be used in the ROE path, that is a path that is executed for each ROE object in the DataStore.

Example: fillSignalSideParticleList('gamma:sig', 'B0 -> K*0 ^gamma', roe_path)

The function will create a ParticleList with name 'gamma:sig' which will be filled with the existing photon Particle, being the second daughter of the B0 candidate to which the ROE object has to be related.

Parameters
• outputListName – name of the created ParticleList
• decayString – specify Particle to be added to the ParticleList

modularAnalysis.findMCDecay(list_name, decay, writeOut=False, path=None)[source]

Warning This function is not fully tested and maintained. Please consider using reconstructMCDecay() instead.

Finds and creates a ParticleList for all MCParticle decays matching a given DecayString. The decay string is required to describe correctly what you want. In the case of inclusive decays, you can use Grammar for custom MCMatching

Parameters
• list_name – The output particle list name
• decay – The decay string which you want
• writeOut – Whether RootOutput module should save the created outputList
• path – modules are added to this path

modularAnalysis.getAnalysisGlobaltag(timeout=180) → str[source]

Returns a string containing the name of the latest and recommended analysis globaltag.

Parameters
timeout – Seconds to wait for b2conditionsdb-recommend

modularAnalysis.getBeamBackgroundProbabilityMVA(particleList, path=None)[source]

Assigns a probability to each ECL cluster as being background like (0) or signal like (1)

Parameters
• particleList – The input ParticleList, must be a photon list
• path – modules are added to this path

modularAnalysis.getNbarIDMVA(particleList, path=None)[source]

This function can give a score to predict if it is an anti-n0. It is not used to predict n0.
Currently, this can be used only for ECL clusters. The output will be stored in extraInfo(nbarID); -1 means the MVA is invalid.

Parameters
• particleList – The input ParticleList
• path – modules are added to this path

modularAnalysis.inclusiveBtagReconstruction(upsilon_list_name, bsig_list_name, btag_list_name, input_lists_names, path)[source]

Reconstructs Btag from particles in given ParticleLists which do not share any final state particles (mdstSource) with Bsig.

Parameters
• upsilon_list_name – Name of the ParticleList to be filled with 'Upsilon(4S) -> B:sig anti-B:tag'
• bsig_list_name – Name of the Bsig ParticleList
• btag_list_name – Name of the Btag ParticleList
• input_lists_names – List of names of the ParticleLists from which Btag is reconstructed

modularAnalysis.inputMdst(environmentType, filename, path, skipNEvents=0, entrySequence=None, *, parentLevel=0, **kwargs)[source]

Loads the specified ROOT (DST/mDST/muDST) file with the RootInput module. The correct environment (e.g. magnetic field settings) is determined from the specified environment type. For the possible values please see inputMdstList()

Parameters
• environmentType (str) – type of the environment to be loaded
• filename (str) – the name of the file to be loaded
• path (basf2.Path) – modules are added to this path
• skipNEvents (int) – N events of the input file are skipped
• entrySequence (str) – The number sequences (e.g. 23:42,101) defining the entries which are processed.
• parentLevel (int) – Number of generations of parent files (files used as input when creating a file) to be read

modularAnalysis.inputMdstList(environmentType, filelist, path, skipNEvents=0, entrySequences=None, *, parentLevel=0, useB2BIIDBCache=True)[source]

Loads the specified ROOT (DST/mDST/muDST) files with the RootInput module. The correct environment (e.g. magnetic field settings) is determined from the specified environment type. The currently available environments are:

• 'MC5': for analysis of Belle II MC samples produced with releases prior to build-2016-05-01. This environment sets the constant magnetic field (B = 1.5 T)
• 'MC6': for analysis of Belle II MC samples produced with build-2016-05-01 or newer but prior to release-00-08-00
• 'MC7': for analysis of Belle II MC samples produced with build-2016-05-01 or newer but prior to release-00-08-00
• 'MC8', for analysis of Belle II MC samples produced with release-00-08-00 or newer but prior to release-02-00-00
• 'MC9', for analysis of Belle II MC samples produced with release-00-08-00 or newer but prior to release-02-00-00
• 'MC10', for analysis of Belle II MC samples produced with release-00-08-00 or newer but prior to release-02-00-00
• 'default': for analysis of Belle II MC samples produced with releases with release-02-00-00 or newer. This environment sets the default magnetic field (see geometry settings)
• 'Belle': for analysis of converted (or during conversion of) Belle MC/DATA samples
• 'None': for analysis of generator level information or during simulation/reconstruction of previously generated events

Note that there is no difference between MC6 and MC7. Both are given for the sake of completeness. The same is true for MC8, MC9 and MC10.

Parameters
• environmentType (str) – type of the environment to be loaded
• filelist (list(str)) – the filename list of files to be loaded
• path (basf2.Path) – modules are added to this path
• skipNEvents (int) – N events of the input files are skipped
• entrySequences (list(str)) – The number sequences (e.g. 23:42,101) defining the entries which are processed for each inputFileName.
• parentLevel (int) – Number of generations of parent files (files used as input when creating a file) to be read
• useB2BIIDBCache (bool) – Loading of local KEKCC database (only to be deactivated in very special cases)

This function is used to apply particle list specific cuts on one or more ROE masks (track or eclCluster). With this function one can KEEP the tracks/eclclusters used in particles from the provided particle list. This function should be executed only in the for_each roe path for the current ROE object. To avoid unnecessary computation, the input particle list should only contain particles from ROE (use cut 'isInRestOfEvent == 1'). To update the ECLCluster masks, the input particle list should be a photon particle list (e.g. 'gamma:someLabel'). To update the Track masks, the input particle list should be a charged pion particle list (e.g. 'pi+:someLabel'). Updating a non-existing mask will create a new one.

• keep only those tracks that were used in the provided particle list

keepInROEMasks('pi+:goodTracks', 'mask', '', path=mypath)

• keep only those clusters that were used in the provided particle list and pass a cut, apply to several masks

keepInROEMasks('gamma:goodClusters', ['mask1', 'mask2'], 'E > 0.1', path=mypath)

Parameters
• list_name – name of the input ParticleList
• cut_string – decay string with which the mask will be updated
• path – modules are added to this path

modularAnalysis.labelTauPairMC(printDecayInfo=False, path=None, TauolaBelle=False, mapping_minus=None, mapping_plus=None)[source]

Searches for tau leptons in the MC information of the event. If it confirms a generated tau pair decay, it labels the generated decays of the positive and negative leptons using the IDs of the KKMC tau decay table.

Parameters
• printDecayInfo – If true, prints ID and prong of each tau lepton in the event.
• path – module is added to this path
• TauolaBelle – if False, TauDecayMarker is set. If True, TauDecayMode is set.
• mapping_minus – if None, the map is the default one, else the path for the map is given by the user for tau-
• mapping_plus – if None, the map is the default one, else the path for the map is given by the user for tau+

Loads the Gearbox module to the path.

Warning Should be used in a job with cosmic event generation only. Needed for scripts which only generate cosmic events in order to load the geometry.

Parameters
• path – modules are added to this path
• silence_warning – stops a verbose warning message if you know you want to use this function

modularAnalysis.looseMCTruth(list_name, path)[source]

Performs loose MC matching for all particles in the specified ParticleList. The difference between the loose and normal MC matching algorithms is that the loose algorithm will find the common mother of the majority of daughter particles while the normal algorithm finds the common mother of all daughters.
The results of the loose MC matching algorithm are stored in the following extraInfo items:
• looseMCMotherPDG: PDG code of most common mother
• looseMCMotherIndex: 1-based StoreArray<MCParticle> index of most common mother
• looseMCWrongDaughterN: number of daughters that don't originate from the most common mother
• looseMCWrongDaughterPDG: PDG code of the daughter that doesn't originate from the most common mother (only if looseMCWrongDaughterN = 1)
• looseMCWrongDaughterBiB: 1 if the wrong daughter is Beam Induced Background Particle

Parameters
• list_name – name of the input ParticleList
• path – modules are added to this path

modularAnalysis.markDuplicate(particleList, prioritiseV0, path)[source]

Calls DuplicateVertexMarker to find duplicate particles in a list and flag the ones that should be kept.

Parameters
• particleList – input particle list
• prioritiseV0 – if true, give V0s a higher priority

modularAnalysis.matchMCTruth(list_name, path)[source]

Performs MC matching (sets relation Particle->MCParticle) for all particles (and their (grand)^N-daughter particles) in the specified ParticleList.

Parameters
• list_name – name of the input ParticleList
• path – modules are added to this path

modularAnalysis.mergeListsWithBestDuplicate(outputListName, inputListNames, variable, preferLowest=True, writeOut=False, path=None)[source]

Merge input ParticleLists into one output ParticleList. Only the best among duplicates is kept. The lowest or highest value (configurable via preferLowest) of the provided variable determines which duplicate is the best.

Parameters
• outputListName – name of merged ParticleList
• inputListNames – list of original ParticleLists to be merged
• variable – variable to determine best duplicate
• preferLowest – whether lowest or highest value of variable should be preferred
• writeOut – whether RootOutput module should save the created ParticleList
• path – modules are added to this path

Gives the pi0/eta probability for a hard photon. In the default weight files a value of 1.4 GeV is set as the lower limit for the hard photon energy in the CMS frame. The current default weight files are optimised using MC9. The input variables are as below. Aliases are set to some variables during training.

• M: invariant mass of the pi0/eta candidate
• lowE: soft photon energy in lab frame
• cTheta: soft photon ECL cluster's polar angle
• Zmva: soft photon output of MVA using Zernike moments of the cluster
• minC2Hdist: soft photon distance from eclCluster to nearest point on nearest Helix at the ECL cylindrical radius

Note Please don't use the following ParticleList names elsewhere: gamma:HARDPHOTON, pi0:PI0VETO, eta:ETAVETO, gamma:PI0SOFT + str(PI0ETAVETO_COUNTER), gamma:ETASOFT + str(PI0ETAVETO_COUNTER) Please don't use lowE, cTheta, Zmva, minC2Hdist as aliases elsewhere.

Parameters
• particleList – The input ParticleList
• decayString – specify Particle to be added to the ParticleList
• workingDirectory – The weight file directory
• pi0vetoname – extraInfo name of pi0 probability
• etavetoname – extraInfo name of eta probability
• selection – Selection criteria that Particle needs to meet in order for for_each ROE path to continue
• path – modules are added to this path

This function is used to apply particle list specific cuts on one or more ROE masks for Tracks. It is possible to optimize the ROE selection by treating tracks from V0's separately, meaning, taking the V0's 4-momentum into account instead of the 4-momenta of the tracks. A cut can be applied so that only specific V0's pass it.
The input particle list should be a V0 particle list: K_S0 ('K_S0:someLabel', ''), Lambda ('Lambda:someLabel', '') or converted photons ('gamma:someLabel'). Updating a non-existing mask will create a new one.

• treat tracks from K_S0 inside mass window separately, replace track momenta with K_S0 momentum

optimizeROEWithV0('K_S0:opt', 'mask', '0.450 < M < 0.550', path=mypath)

Parameters
• list_name – name of the input ParticleList
• cut_string – decay string with which the mask will be updated
• path – modules are added to this path

modularAnalysis.outputIndex(filename, path, includeArrays=None, keepParents=False, mc=True)[source]

Write out all particle lists as an index file to be reprocessed using the parentLevel flag. Additional branches necessary for the file to be read are automatically included. Additional Store Arrays and Relations to be stored can be specified via the includeArrays list argument.

Parameters
• filename (str) – the name of the output index file
• path (str) – modules are added to this path
• includeArrays (list(str)) – datastore arrays/objects to write to the output file in addition to particle lists and related information
• keepParents (bool) – whether the parents of the input event will be saved as the parents of the same event in the output index file. Useful if you are only adding more information to another index file
• mc (bool) – whether the input data is MC or not

modularAnalysis.outputMdst(filename, path)[source]

Saves mDST (mini-Data Summary Tables) to the output root file.

Warning This function is kept for backward-compatibility. Better to use mdst.add_mdst_output directly.

Save uDST (user-defined Data Summary Tables) = MDST + Particles + ParticleLists. The charge-conjugate lists of those given in particleLists are also stored. Additional Store Arrays and Relations to be stored can be specified via the includeArrays list argument.

Note This does not reduce the amount of Particle objects saved, see udst.add_skimmed_udst_output for a function that does.

modularAnalysis.printDataStore(eventNumber=-1, path=None)[source]

Prints the contents of DataStore in the first event (or a specific event number or all events). Will list all objects and arrays (including size). See also the command line tool b2file-size.

Parameters
• eventNumber (int) – Print the datastore only for this event. The default (-1) prints only the first event, 0 means print for all events (can produce large output)
• path (basf2.Path) – the PrintCollections module is added to this path

Warning This will print a lot of output if you print it for all events and process many events.

modularAnalysis.printList(list_name, full, path)[source]

Prints the size and executes Particle->print() (if full=True) method for all Particles in given ParticleList. For debugging purposes.

Parameters
• list_name – input ParticleList name
• full – execute Particle->print() method for all Particles
• path – modules are added to this path

modularAnalysis.printMCParticles(onlyPrimaries=False, maxLevel=-1, path=None, *, showProperties=False, showMomenta=False, showVertices=False, showStatus=False)[source]

Prints all MCParticles or just primary MCParticles up to the specified level. -1 means no limit.
By default this will print a tree of just the particle names and their pdg codes in the event, for example [INFO] Content of MCParticle list ╰── Upsilon(4S) (300553) ├── B+ (521) │ ├── anti-D_0*0 (-10421) │ │ ├── D- (-411) │ │ │ ├── K*- (-323) │ │ │ │ ├── anti-K0 (-311) │ │ │ │ │ ╰── K_S0 (310) │ │ │ │ │ ├── pi+ (211) │ │ │ │ │ │ ╰╶╶ p+ (2212) │ │ │ │ │ ╰── pi- (-211) │ │ │ │ │ ├╶╶ e- (11) │ │ │ │ │ ├╶╶ n0 (2112) │ │ │ │ │ ├╶╶ n0 (2112) │ │ │ │ │ ╰╶╶ n0 (2112) │ │ │ │ ╰── pi- (-211) │ │ │ │ ├╶╶ anti-nu_mu (-14) │ │ │ │ ╰╶╶ mu- (13) │ │ │ │ ├╶╶ nu_mu (14) │ │ │ │ ├╶╶ anti-nu_e (-12) │ │ │ │ ╰╶╶ e- (11) │ │ │ ╰── K_S0 (310) │ │ │ ├── pi0 (111) │ │ │ │ ├── gamma (22) │ │ │ │ ╰── gamma (22) │ │ │ ╰── pi0 (111) │ │ │ ├── gamma (22) │ │ │ ╰── gamma (22) │ │ ╰── pi+ (211) │ ├── mu+ (-13) │ │ ├╶╶ anti-nu_mu (-14) │ │ ├╶╶ nu_e (12) │ │ ╰╶╶ e+ (-11) │ ├── nu_mu (14) │ ╰── gamma (22) ... There’s a distinction between primary and secondary particles. Primary particles are the ones created by the physics generator while secondary particles are ones generated by the simulation of the detector interaction. Secondaries are indicated with a dashed line leading to the particle name and if the output is to the terminal they will be printed in red. If onlyPrimaries is True they will not be included in the tree. On demand, extra information on all the particles can be displayed by enabling any of the showProperties, showMomenta, showVertices and showStatus flags. Enabling all of them will look like this: ... ╰── pi- (-211) │ p=(0.257, -0.335, 0.0238) |p|=0.423 │ production vertex=(0.113, -0.0531, 0.0156), time=0.00589 │ status flags=PrimaryParticle, StableInGenerator, StoppedInDetector │ list index=48 │ ╰╶╶ n0 (2112) p=(-0.000238, -0.0127, 0.0116) |p|=0.0172 production vertex=(144, 21.9, -1.29), time=39 status flags=StoppedInDetector list index=66 The first line of extra information is enabled by showProperties, the second line by showMomenta, the third line by showVertices and the last two lines by showStatus. Note that all values are given in Belle II standard units, that is GeV, centimeter and nanoseconds. The depth of the tree can be limited with the maxLevel argument: If it’s bigger than zero it will limit the tree to the given number of generations. A visual indicator will be added after each particle which would have additional daughters that are skipped due to this limit. An example event with maxLevel=3 is given below. In this case only the tau neutrino and the pion don’t have additional daughters. [INFO] Content of MCParticle list ╰── Upsilon(4S) (300553) ├── B+ (521) │ ├── anti-D*0 (-423) → … │ ├── tau+ (-15) → … │ ╰── nu_tau (16) ╰── B- (-521) ├── D*0 (423) → … ├── K*- (-323) → … ├── K*+ (323) → … ╰── pi- (-211) Parameters • onlyPrimaries (bool) – If True show only primary particles, that is particles coming from the generator and not created by the simulation. • maxLevel (int) – If 0 or less print the whole tree, otherwise stop after n generations • showProperties (bool) – If True show mass, energy and charge of the particles • showMomenta (bool) – if True show the momenta of the particles • showVertices (bool) – if True show production vertex and production time of all particles • showStatus (bool) – if True show some status information on the particles. For secondary particles this includes creation process. 
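A minimal usage sketch for printMCParticles (the input file name is illustrative):

```python
import basf2
from modularAnalysis import inputMdst, printMCParticles

mypath = basf2.Path()
inputMdst('default', 'my_sample.root', path=mypath)  # illustrative file name
# print the generator-level particles only, three generations deep
printMCParticles(onlyPrimaries=True, maxLevel=3, path=mypath)
basf2.process(mypath)
```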
modularAnalysis.printPrimaryMCParticles(path, **kwargs)[source]
Prints all primary MCParticles, that is particles from the physics generator and not particles created by the simulation.

This is equivalent to printMCParticles(onlyPrimaries=True, path=path) and additional keyword arguments are just forwarded to that function.

modularAnalysis.printROEInfo(mask_names=None, full_print=False, unpackComposites=True, path=None)[source]
This function prints out the information for the current ROE, so it should only be used in the for_each path. It prints out basic ROE object info. If mask names are provided, specific information for those masks will be printed out. It is also possible to print out all particles in a given mask if 'full_print' is set to True.

Parameters
• mask_names – array of ROE mask names for which additional information is printed
• unpackComposites – if true, replace composite particles by their daughters
• full_print – print out particles in mask
• path – modules are added to this path

modularAnalysis.printVariableValues(list_name, var_names, path)[source]
Prints out the values of the specified variables for all Particles included in the given ParticleList. For debugging purposes.

Parameters
• list_name – input ParticleList name
• var_names – vector of variable names to be printed
• path – modules are added to this path

modularAnalysis.rankByHighest(particleList, variable, numBest=0, outputVariable='', allowMultiRank=False, cut='', path=None)[source]
Ranks particles in the input list by the given variable (highest to lowest), and stores an integer rank for each Particle in an extraInfo field named ${variable}_rank starting at 1 (best). The list is also sorted from best to worst candidate (each charge, e.g. B+/B-, separately). This can be used to perform a best candidate selection by cutting on the corresponding rank value, or by specifying a non-zero value for 'numBest'.

Tip
Extra-info fields can be accessed by the extraInfo metavariable. These variable names can become clunky, so it's probably a good idea to set an alias. For example if you rank your B candidates by momentum,

    rankByHighest("B0:myCandidates", "p", path=mypath)
    vm.addAlias("momentumRank", "extraInfo(p_rank)")

Parameters
• particleList – The input ParticleList
• variable – Variable to order Particles by.
• numBest – If not zero, only the numBest Particles in particleList with rank <= numBest are kept.
• outputVariable – Name for the variable that will be created which contains the rank. Default is '${variable}_rank'.
• allowMultiRank – If true, candidates with the same value will get the same rank.
• cut – Only candidates passing the cut will be ranked. The others will have rank -1.
• path – modules are added to this path

modularAnalysis.rankByLowest(particleList, variable, numBest=0, outputVariable='', allowMultiRank=False, cut='', path=None)[source]
Ranks particles in the input list by the given variable (lowest to highest), and stores an integer rank for each Particle in an extraInfo field named ${variable}_rank starting at 1 (best). The list is also sorted from best to worst candidate (each charge, e.g. B+/B-, separately). This can be used to perform a best candidate selection by cutting on the corresponding rank value, or by specifying a non-zero value for 'numBest'.

Tip
Extra-info fields can be accessed by the extraInfo metavariable. These variable names can become clunky, so it's probably a good idea to set an alias. For example if you rank your B candidates by dM,

    rankByLowest("B0:myCandidates", "dM", path=mypath)

Parameters
• particleList – The input ParticleList
• variable – Variable to order Particles by.
• numBest – If not zero, only the numBest Particles in particleList with rank <= numBest are kept.
• outputVariable – Name for the variable that will be created which contains the rank. Default is '${variable}_rank'.
• allowMultiRank – If true, candidates with the same value will get the same rank.
• cut – Only candidates passing the cut will be ranked. The others will have rank -1.
• path – modules are added to this path

modularAnalysis.reconstructDecay(decayString, cut, dmID=0, writeOut=False, path=None, candidate_limit=None, ignoreIfTooManyCandidates=True, chargeConjugation=True, allowChargeViolation=False)[source]
Creates new Particles by making combinations of existing Particles - it reconstructs unstable particles via their specified decay mode, e.g. in form of a DecayString: D0 -> K- pi+ or B+ -> anti-D0 pi+, … All possible combinations are created (particles are used only once per candidate) and combinations that pass the specified selection criteria are saved to a newly created (mother) ParticleList. By default the charge conjugated decay is reconstructed as well (meaning that the charge conjugated mother list is created as well) but this can be deactivated.

One can use an @-sign to mark a particle as unspecified for inclusive analyses, e.g. in a DecayString: '@Xsd -> K+ pi-'.

Warning
The input ParticleLists are typically ordered according to the upstream reconstruction algorithm. Therefore, if you combine two or more identical particles in the decay chain you should not expect to see the same distribution for the daughter kinematics as they may be sorted by geometry, momentum etc. For example, in the decay D0 -> pi0 pi0 the momentum distributions of the two pi0 s are not identical. This can be solved by manually randomising the lists before combining.

Parameters
• decayString – DecayString specifying what kind of decay should be reconstructed (from the DecayString the mother and daughter ParticleLists are determined)
• cut – created (mother) Particles are added to the mother ParticleList if they pass the given cuts (in VariableManager style) and rejected otherwise
• dmID – user specified decay mode identifier
• writeOut – whether the RootOutput module should save the created ParticleList
• path – modules are added to this path
• candidate_limit – Maximum amount of candidates to be reconstructed. If the number of candidates is exceeded a Warning will be printed. By default, all these candidates will be removed and the event will be ignored. This behaviour can be changed by the 'ignoreIfTooManyCandidates' flag. If no value is given the amount is limited to a sensible default. A value <= 0 will disable this limit and can cause huge memory amounts so be careful.
• ignoreIfTooManyCandidates – whether the event should be ignored or not if the number of reconstructed candidates reaches the limit. If the event is ignored, no candidates are reconstructed; otherwise, the number of candidates given by candidate_limit is reconstructed.
• chargeConjugation – boolean to decide whether the charge conjugated mode should be reconstructed as well (on by default)
• allowChargeViolation – whether the decay string needs to conserve the electric charge

modularAnalysis.reconstructMCDecay(decayString, cut, dmID=0, writeOut=False, path=None, chargeConjugation=True)[source]
Finds and creates a ParticleList from the given decay string. A ParticleList of daughters with sub-decay is created.

Only signal particles, i.e. those for which isSignal is equal to 1, are stored. One can use the decay string grammar to change the behavior of isSignal. One can find detailed information in DecayString.
Tip
If one uses the same sub-decay twice, the same particles are registered to a ParticleList. For example, K_S0:pi0pi0 =direct=> [pi0:gg =direct=> gamma:MC gamma:MC] [pi0:gg =direct=> gamma:MC gamma:MC]. One can skip the second sub-decay: K_S0:pi0pi0 =direct=> [pi0:gg =direct=> gamma:MC gamma:MC] pi0:gg.

Parameters
• decayString – DecayString specifying what kind of decay should be reconstructed (from the DecayString the mother and daughter ParticleLists are determined)
• cut – created (mother) Particles are added to the mother ParticleList if they pass the given cuts (in VariableManager style) and rejected otherwise. isSignal==1 is always required by default.
• dmID – user specified decay mode identifier
• writeOut – whether the RootOutput module should save the created ParticleList
• path – modules are added to this path
• chargeConjugation – boolean to decide whether the charge conjugated mode should be reconstructed as well (on by default)

modularAnalysis.reconstructMissingKlongDecayExpert(decayString, cut, dmID=0, writeOut=False, path=None, recoList='_reco')[source]
Creates a list of K_L0's with their momentum determined from kinematic constraints of B -> K_L0 + something else.

Parameters
• decayString – DecayString specifying what kind of decay should be reconstructed (from the DecayString the mother and daughter ParticleLists are determined)
• cut – Particles are added to the K_L0 ParticleList if they pass the given cuts (in VariableManager style) and rejected otherwise
• dmID – user specified decay mode identifier
• writeOut – whether the RootOutput module should save the created ParticleList
• path – modules are added to this path
• recoList – suffix appended to the original K_L0 ParticleList that identifies the newly created K_L0 list

modularAnalysis.reconstructRecoil(decayString, cut, dmID=0, writeOut=False, path=None, candidate_limit=None, allowChargeViolation=False)[source]
Creates new Particles that recoil against the input particles.

For example the decay string M -> D1 D2 D3 will:
• create a mother Particle M for each unique combination of D1, D2, D3 Particles
• the Particles D1, D2, D3 will be appended as daughters to M
• the 4-momentum of the mother Particle M is given by p(M) = p(HER) + p(LER) - Sum_i p(Di)

Parameters
• decayString – DecayString specifying what kind of decay should be reconstructed (from the DecayString the mother and daughter ParticleLists are determined)
• cut – created (mother) Particles are added to the mother ParticleList if they pass the given cuts (in VariableManager style) and rejected otherwise
• dmID – user specified decay mode identifier
• writeOut – whether the RootOutput module should save the created ParticleList
• path – modules are added to this path
• candidate_limit – Maximum amount of candidates to be reconstructed. If the number of candidates is exceeded no candidate will be reconstructed for that event and a Warning will be printed. If no value is given the amount is limited to a sensible default. A value <= 0 will disable this limit and can cause huge memory amounts so be careful.
• allowChargeViolation – whether the decay string needs to conserve the electric charge

modularAnalysis.reconstructRecoilDaughter(decayString, cut, dmID=0, writeOut=False, path=None, candidate_limit=None, allowChargeViolation=False)[source]
Creates new Particles that are daughters of the particle reconstructed in the recoil (always assumed to be the first daughter).
For example the decay string M -> D1 D2 D3 will:
• create a mother Particle M for each unique combination of D1, D2, D3 Particles
• the Particles D1, D2, D3 will be appended as daughters to M
• the 4-momentum of the mother Particle M is given by p(M) = p(D1) - Sum_i p(Di), where i > 1

Parameters
• decayString – DecayString specifying what kind of decay should be reconstructed (from the DecayString the mother and daughter ParticleLists are determined)
• cut – created (mother) Particles are added to the mother ParticleList if they pass the given cuts (in VariableManager style) and rejected otherwise
• dmID – user specified decay mode identifier
• writeOut – whether the RootOutput module should save the created ParticleList
• path – modules are added to this path
• candidate_limit – Maximum amount of candidates to be reconstructed. If the number of candidates is exceeded no candidate will be reconstructed for that event and a Warning will be printed. If no value is given the amount is limited to a sensible default. A value <= 0 will disable this limit and can cause huge memory amounts so be careful.
• allowChargeViolation – whether the decay string needs to conserve the electric charge, taking into account that the first daughter is actually the mother

modularAnalysis.removeExtraInfo(particleLists=None, removeEventExtraInfo=False, path=None)[source]
Removes the ExtraInfo of the given particleLists. If specified (removeEventExtraInfo = True) the EventExtraInfo is also removed.

modularAnalysis.removeParticlesNotInLists(lists_to_keep, path)[source]
Removes all Particles that are not in a given list of ParticleLists (or daughters of those). All relations from/to Particles, daughter indices, and other ParticleLists are fixed.

Parameters
• lists_to_keep – Keep the Particles and their daughters in these ParticleLists.
• path – modules are added to this path

modularAnalysis.removeTracksForTrackingEfficiencyCalculation(inputListNames, fraction, path=None)[source]
Randomly remove tracks from the provided particle lists to estimate the tracking efficiency. Takes care of the duplicates, if any.

Parameters
• inputListNames (list(str)) – input particle list names
• fraction (float) – fraction of particles to be removed randomly
• path (basf2.Path) – module is added to this path

modularAnalysis.replaceMass(replacerName, particleLists=None, pdgCode=22, path=None)[source]
Replaces the mass of the particles inside the given particleLists with the invariant mass of the particle corresponding to the given pdgCode.

Parameters
• particleLists – new ParticleList filled with copied Particles
• pdgCode – PDG code for mass reference
• path – modules are added to this path

modularAnalysis.scaleError(outputListName, inputListName, scaleFactors=[1.17, 1.12, 1.16, 1.15, 1.13], d0Resolution=[0.00122, 0.00141], z0Resolution=[0.00134, 0.00153], path=None)[source]
This module creates a new charged particle list. The helix errors of the new particles are scaled by constant factors. These scale factors are defined for each helix parameter (d0, phi0, omega, z0, tanlambda). The impact parameter resolution can be defined in a pseudo-momentum dependent form, which limits the d0 and z0 errors so that they do not shrink below the resolution. This module is supposed to be used for low-momentum (0-3 GeV/c) tracks in BBbar events. Details will be documented in a Belle II note by the Belle II Japan ICPV group.
Parameters
• inputListName – Name of the input charged particle list to be scaled
• outputListName – Name of the output charged particle list with scaled errors
• scaleFactors – List of five constants to be multiplied to each of the helix errors
• d0Resolution – List of two parameters, (a [cm], b [cm/(GeV/c)]), defining the d0 resolution as sqrt{ a**2 + (b / (p*beta*sinTheta**1.5))**2 }
• z0Resolution – List of two parameters, (a [cm], b [cm/(GeV/c)]), defining the z0 resolution as sqrt{ a**2 + (b / (p*beta*sinTheta**2.5))**2 }

modularAnalysis.scaleTrackMomenta(inputListNames, scale, path=None)[source]
Scale the momenta of the particles according to the scaling factor scale. If the particle list contains composite particles, the momenta of the track-based daughters are scaled. Subsequently, the momentum of the mother particle is updated as well.

Parameters
• inputListNames (list(str)) – input particle list names
• scale (float) – scaling factor (1.0 – no scaling)
• path (basf2.Path) – module is added to this path

modularAnalysis.selectDaughters(particle_list_name, decay_string, path)[source]
Redefine the daughters of a particle: select them from the decay string.

Parameters
• particle_list_name – input particle list
• decay_string – for selecting the daughters to be preserved
• path – modules are added to this path

modularAnalysis.setAnalysisConfigParams(configParametersAndValues, path)[source]
Sets analysis configuration parameters. These are:

• 'tupleStyle': 'Default' (default) or 'Laconic' – defines the style of the branch name in the ntuple
• 'mcMatchingVersion': specifies which version of the MC matching algorithm is going to be used:
  – 'MC5' – analysis of Belle II MC5
  – 'Belle' – analysis of Belle MC
  – 'BelleII' (default) – all other cases

Parameters
• configParametersAndValues – dictionary of parameters and their values of the form {param1: value, param2: value, …}
• path – modules are added to this path

modularAnalysis.setupEventInfo(noEvents, path)[source]
Prepare to generate events. This function sets up the EventInfoSetter. You should call this before adding a generator from generators. The experiment and run numbers are set to 0 (run independent generic MC in phase 3). https://confluence.desy.de/display/BI/Experiment+numbering

Parameters
• noEvents (int) – number of events to be generated
• path (basf2.Path) – modules are added to this path

modularAnalysis.signalRegion(particleList, cut, path=None, name='isSignalRegion', blind_data=True)[source]
Define and blind a signal region. Per default, the defined signal region is cut out if run on data. This function will provide a new variable 'isSignalRegion' as default, which is either 0 or 1 depending on the cut provided.

Example

    ma.reconstructDecay("B+:sig -> D+ pi0", "Mbc>5.2", path=path)
    ma.signalRegion("B+:sig", "Mbc>5.27 and abs(deltaE)<0.2", blind_data=True, path=path)
    ma.variablesToNtuple("B+:sig", ["isSignalRegion"], path=path)

Parameters
• particleList (str) – The input ParticleList
• cut (str) – Cut string describing the signal region
• path (basf2.Path) – modules are added to this path
• name (str) – Name of the signal region in the variable manager
• blind_data (bool) – Automatically exclude the signal region from data

modularAnalysis.signalSideParticleFilter(particleList, selection, roe_path, deadEndPath)[source]
Checks if the current ROE object in the for_each roe path (argument roe_path) is related to the particle from the input ParticleList. Additional selection criteria can be applied. If the ROE is not related to any of the Particles from the ParticleList, or the Particle doesn't meet the selection criteria, the execution of deadEndPath is started.
This path, as the name suggests, should be empty and its purpose is to end the execution of the for_each roe path for the current ROE object.

Parameters
• particleList – The input ParticleList
• selection – Selection criteria that the Particle needs to meet in order for the for_each ROE path to continue
• roe_path – the for_each roe path in which this filter is executed
• deadEndPath – empty path that ends the execution of the for_each roe path for the current ROE object

modularAnalysis.signalSideParticleListsFilter(particleLists, selection, roe_path, deadEndPath)[source]
Checks if the current ROE object in the for_each roe path (argument roe_path) is related to a particle from the input ParticleLists. Additional selection criteria can be applied. If the ROE is not related to any of the Particles from the ParticleLists, or the Particle doesn't meet the selection criteria, the execution of deadEndPath is started. This path, as the name suggests, should be empty and its purpose is to end the execution of the for_each roe path for the current ROE object.

Parameters
• particleLists – The input ParticleLists
• selection – Selection criteria that the Particle needs to meet in order for the for_each ROE path to continue
• roe_path – the for_each roe path in which this filter is executed
• deadEndPath – empty path that ends the execution of the for_each roe path for the current ROE object

modularAnalysis.summaryOfLists(particleLists, outputFile=None, path=None)[source]
Prints out Particle statistics at the end of the job: number of events with at least one candidate, average number of candidates per event, etc. If an output file name is provided, the statistics are also dumped into a json file with that name.

Parameters
• particleLists – list of input ParticleLists
• outputFile – output file name (not created by default)
• path – modules are added to this path

modularAnalysis.tagCurlTracks(particleLists, mcTruth=False, responseCut=0.324, selectorType='cut', ptCut=0.6, train=False, path=None)[source]

Warning
The cut selector is not calibrated with Belle II data and should not be used without extensive study.

Identifies curl tracks and tags them with extraInfo(isCurl=1) for later removal. For Belle data with a B2BII analysis the available cut-based selection is described in BN1079.

The module loops over all particles in a given list that meet the preselection ptCut and assigns them to bundles based on the response of the chosen selector and the required minimum response set by the responseCut. Once all particles are assigned they are ranked by 25dr^2+dz^2. All but the lowest are tagged with extraInfo(isCurl=1) to allow for later removal by cutting the list or removing these from the ROE as applicable.

Parameters
• particleLists – list of particle lists to check for curls.
• mcTruth – bool flag to additionally assign particles with extraInfo(isTruthCurl) and extraInfo(truthBundleSize). To calculate these, particles are assigned to bundles by their genParticleIndex, then ranked and tagged as normal.
• responseCut – float, minimum classifier response that considers two tracks to come from the same particle. Note the 'cut' selector is binary 0/1.
• selectorType – string, name of the selector to use. The available options are 'cut' and 'mva'. It is strongly recommended to use the 'mva' selection. The 'cut' selection is based on BN1079 and is only calibrated for Belle data.
• ptCut – pre-selection cut on transverse momentum.
• train – flag to set training mode if the selector has a training mode (mva).
• path – module is added to this path.

modularAnalysis.updateROEMask(list_name, mask_name, trackSelection, eclClusterSelection='', klmClusterSelection='', path=None)[source]
Update an existing ROE mask by applying additional selection cuts for tracks and/or clusters. See function appendROEMask!
Parameters
• list_name – name of the input ParticleList
• mask_name – name of the ROE mask to be updated
• trackSelection – decay string for the track-based particles in the ROE
• eclClusterSelection – decay string for the ECL-based particles in the ROE
• klmClusterSelection – decay string for the KLM-based particles in the ROE
• path – modules are added to this path

modularAnalysis.updateROEMasks(list_name, mask_tuples, path=None)[source]
Update existing ROE masks by applying additional selection cuts for tracks and/or clusters.

The multiple ROE masks with their own selection criteria are specified via a list of tuples (mask_name, trackSelection, eclClusterSelection, klmClusterSelection).

See function appendROEMasks!

Parameters
• list_name – name of the input ParticleList
• mask_tuples – array of ROE mask tuples to be updated
• path – modules are added to this path

modularAnalysis.updateROEUsingV0Lists(target_particle_list, mask_names, default_cleanup=True, selection_cuts=None, apply_mass_fit=False, fitter='treefit', path=None)[source]
This function creates V0 particle lists (photons, $K^0_S$ and $\Lambda^0$) and uses the V0 candidates to update the Rest Of Event, which is associated to the target particle list. It is possible to apply a standard or customized selection and mass fit to the V0 candidates.

Parameters
• target_particle_list – name of the input ParticleList
• mask_names – array of ROE mask names to be updated
• default_cleanup – if True, predefined cuts will be applied on the V0 lists
• selection_cuts – a single string of selection cuts or a tuple of three strings (photon_cuts, K_S0_cuts, Lambda0_cuts), which will be applied to the V0 lists. These cuts will have priority over the default ones.
• apply_mass_fit – if True, a mass fit will be applied to the V0 particles
• fitter – string that represents a fitter choice: "treefit" for TreeFitter and "kfit" for KFit
• path – modules are added to this path

modularAnalysis.variableToSignalSideExtraInfo(particleList, varToExtraInfo, path)[source]
Writes the value of specified variables estimated for the single particle in the input list (which has to contain exactly 1 particle) as extra info to the particle related to the current ROE. Should only be used in the for_each roe path.

Parameters
• particleList – The input ParticleList
• varToExtraInfo – Dictionary of variables and extraInfo names.
• path – modules are added to this path

modularAnalysis.variablesToDaughterExtraInfo(particleList, decayString, variables, option=0, path=None)[source]
For each daughter particle specified via decay string the selected variables (estimated for the mother particle) are saved in an extra-info field with the given name. In other words, the property of the mother is saved as extra-info to the specified daughter particle.

An existing extra info with the same name will be overwritten if the new value is lower / will never be overwritten / will be overwritten if the new value is higher / will always be overwritten (option = -1/0/1/2).

Parameters
• particleList – The input ParticleList
• decayString – Decay string that specifies to which daughter the extra info should be appended
• variables – Dictionary of variables and extraInfo names.
• option – Various options for overwriting
• path – modules are added to this path

modularAnalysis.variablesToEventExtraInfo(particleList, variables, option=0, path=None)[source]
For each particle in the input list the selected variables are saved in an event-extra-info field with the given name. Can be used to save MC truth information, for example, in a ntuple of reconstructed particles.
An existing extra info with the same name will be overwritten if the new value is lower / will never be overwritten / will be overwritten if the new value is higher / will always be overwritten (option = -1/0/1/2).

Parameters
• particleList – The input ParticleList
• variables – Dictionary of variables and extraInfo names.
• option – Various options for overwriting
• path – modules are added to this path

modularAnalysis.variablesToExtraInfo(particleList, variables, option=0, path=None)[source]
For each particle in the input list the selected variables are saved in an extra-info field with the given name. Can be used when wanting to save variables before modifying them, e.g. when performing vertex fits.

An existing extra info with the same name will be overwritten if the new value is lower / will never be overwritten / will be overwritten if the new value is higher / will always be overwritten (option = -1/0/1/2).

Parameters
• particleList – The input ParticleList
• variables – Dictionary of variables and extraInfo names.
• option – Various options for overwriting
• path – modules are added to this path

modularAnalysis.variablesToHistogram(decayString, variables, variables_2d=None, filename='ntuple.root', path=None, *, directory=None, prefixDecayString=False)[source]
Creates and fills histograms with the specified variables from the VariableManager.

Parameters
• decayString (str) – specifies the type of Particles and determines the name of the ParticleList
• variables (list(tuple)) – variables + binning, which must be registered in the VariableManager
• variables_2d (list(tuple)) – pairs of variables + binning, each of which must be registered in the VariableManager
• filename (str) – the file in which the variables are stored
• path (basf2.Path) – the basf2 path where the analysis is processed
• directory (str) – directory inside the output file where the histograms should be saved. Useful if you want to have different histograms in the same file to separate them.
• prefixDecayString (bool) – If True the decayString will be prepended to the directory name to allow for more programmatic naming of the structure in the file.

modularAnalysis.variablesToNtuple(decayString, variables, treename='variables', filename='ntuple.root', path=None)[source]
Creates and fills a flat ntuple with the specified variables from the VariableManager. If a decayString is provided, there will be one entry per candidate (for each particle in the list of candidates). If an empty decayString is provided, there will be one entry per event (useful for trigger studies, etc).

Parameters
• decayString (str) – specifies the type of Particles and determines the name of the ParticleList
• variables (list(str)) – the list of variables (which must be registered in the VariableManager)
• treename (str) – name of the ntuple tree
• filename (str) – the file in which the variables are stored
• path (basf2.Path) – the basf2 path where the analysis is processed

modularAnalysis.writePi0EtaVeto(particleList, decayString, mode='standard', selection='', path=None, suffix='', hardParticle='gamma', pi0PayloadNameOverride=None, pi0SoftPhotonCutOverride=None, etaPayloadNameOverride=None, etaSoftPhotonCutOverride=None)[source]
Gives the pi0/eta probability for a hard photon.

In the default weight files a value of 1.4 GeV is set as the lower limit for the hard photon energy in the CMS frame. The current default weight files are optimised using MC12.
The input variables of the mva training are:
• M: invariant mass of the pi0/eta candidate
• daughter(1,E): soft photon energy in the lab frame
• daughter(1,clusterTheta): polar angle of the soft photon's ECL cluster
• daughter(1,minC2TDist): distance of the soft photon's ECL cluster to the nearest point on the nearest helix at the ECL cylindrical radius
• daughter(1,clusterZernikeMVA): soft photon output of the MVA using Zernike moments of the cluster
• daughter(1,clusterNHits): soft photon total crystal weights sum(w_i) with w_i <= 1
• daughter(1,clusterE9E21): soft photon ratio of energies in the inner 3x3 crystals and 5x5 crystals without corners
• cosHelicityAngleMomentum: cosHelicityAngleMomentum of the pi0/eta candidate

The following strings are available for mode:
• standard: loose energy cut and no clusterNHits cut are applied to the soft photon
• tight: tight energy cut and no clusterNHits cut are applied to the soft photon
• cluster: loose energy cut and clusterNHits cut are applied to the soft photon
• both: tight energy cut and clusterNHits cut are applied to the soft photon

The final probability of the pi0/eta veto is stored as an extraInfo. If no suffix is set it can be obtained from the variables pi0Prob/etaProb. Otherwise, it is available as '{Pi0, Eta}ProbOrigin', '{Pi0, Eta}ProbTightEnergyThreshold', '{Pi0, Eta}ProbLargeClusterSize', or '{Pi0, Eta}ProbTightEnergyThresholdAndLargeClusterSize' for the four modes described above, with the chosen suffix appended.

Note
Please don't use the following ParticleList names elsewhere: gamma:HardPhoton, gamma:Pi0Soft + ListName + '_' + particleList.replace(':', '_'), gamma:EtaSoft + ListName + '_' + particleList.replace(':', '_'), pi0:EtaVeto + ListName, eta:EtaVeto + ListName

Parameters
• particleList – the input ParticleList
• decayString – specifies the Particle to be added to the ParticleList
• mode – choose one mode out of 'standard', 'tight', 'cluster' and 'both'
• selection – selection criteria that the Particle needs to meet in order for the for_each ROE path to continue
• path – modules are added to this path
• suffix – optional suffix to be appended to the usual extraInfo name
• hardParticle – particle name which is used to calculate the pi0/eta probability (default is gamma)
• pi0PayloadNameOverride – specify the payload name of the pi0 veto only if one wants to use a non-default one. (default is None)
• pi0SoftPhotonCutOverride – specify the soft photon selection criteria of the pi0 veto only if one wants to use non-default ones. (default is None)
• etaPayloadNameOverride – specify the payload name of the eta veto only if one wants to use a non-default one. (default is None)
• etaSoftPhotonCutOverride – specify the soft photon selection criteria of the eta veto only if one wants to use non-default ones. (default is None)
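A minimal usage sketch of the veto (hedged: the list name and the decay string below are illustrative assumptions; the ^ selects the hard photon candidate the probability is computed for):

    import basf2
    import modularAnalysis as ma

    mypath = basf2.Path()
    # hypothetical signal list reconstructed earlier in the same path
    ma.writePi0EtaVeto('B0:sig', 'B0 -> [K*0 -> K+ pi-] ^gamma',
                       mode='standard', path=mypath)
    # with no suffix set, the outputs are then available as pi0Prob and etaProb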
proofpile-shard-0030-331
{ "provenance": "003.jsonl.gz:332" }
# Adjacency matrix

In graph theory and computer science, an adjacency matrix is a square matrix used to represent a finite graph. The elements of the matrix indicate whether pairs of vertices are adjacent or not in the graph.

In the special case of a finite simple graph, the adjacency matrix is a (0,1)-matrix with zeros on its diagonal. If the graph is undirected, the adjacency matrix is symmetric. The relationship between a graph and the eigenvalues and eigenvectors of its adjacency matrix is studied in spectral graph theory.

The adjacency matrix should be distinguished from the incidence matrix for a graph, a different matrix representation whose elements indicate whether vertex–edge pairs are incident or not, and the degree matrix, which contains information about the degree of each vertex.

## Definition

For a simple graph with vertex set V, the adjacency matrix is a square |V| × |V| matrix A such that its element Aij is one when there is an edge from vertex i to vertex j, and zero when there is no edge.[1] The diagonal elements of the matrix are all zero, since edges from a vertex to itself (loops) are not allowed in simple graphs. It is also sometimes useful in algebraic graph theory to replace the nonzero elements with algebraic variables.[2]

The same concept can be extended to multigraphs and graphs with loops by storing the number of edges between each two vertices in the corresponding matrix element, and by allowing nonzero diagonal elements. Loops may be counted either once (as a single edge) or twice (as two vertex-edge incidences), as long as a consistent convention is followed. Undirected graphs often use the latter convention of counting loops twice, whereas directed graphs typically use the former convention.

### Of a bipartite graph

The adjacency matrix A of a bipartite graph whose two parts have r and s vertices can be written in the form

${\displaystyle A={\begin{pmatrix}0_{r,r}&B\\B^{T}&0_{s,s}\end{pmatrix}},}$

where B is an r × s matrix, and 0r,r and 0s,s represent the r × r and s × s zero matrices. In this case, the smaller matrix B uniquely represents the graph, and the remaining parts of A can be discarded as redundant. B is sometimes called the biadjacency matrix.

Formally, let G = (U, V, E) be a bipartite graph with parts U = {u1, …, ur} and V = {v1, …, vs}. The biadjacency matrix is the r × s 0–1 matrix B in which bi,j = 1 if and only if (ui, vj) ∈ E. If G is a bipartite multigraph or weighted graph then the elements bi,j are taken to be the number of edges between the vertices or the weight of the edge (ui, vj), respectively.

### Variations

An (a, b, c)-adjacency matrix A of a simple graph has Ai,j = a if (i, j) is an edge, b if it is not, and c on the diagonal. The Seidel adjacency matrix is a (−1, 1, 0)-adjacency matrix. This matrix is used in studying strongly regular graphs and two-graphs.[3]

The distance matrix has in position (i, j) the distance between vertices vi and vj. The distance is the length of a shortest path connecting the vertices. Unless lengths of edges are explicitly provided, the length of a path is the number of edges in it. The distance matrix resembles a high power of the adjacency matrix, but instead of telling only whether or not two vertices are connected (i.e., the connection matrix, which contains boolean values), it gives the exact distance between them.
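To make the definition concrete, a small sketch (numpy and the helper function below are illustrative assumptions, not part of the article):

    import numpy as np

    def adjacency_matrix(n, edges, directed=False):
        """Build the (0,1) adjacency matrix of a simple graph from an edge list."""
        A = np.zeros((n, n), dtype=int)
        for i, j in edges:
            A[i, j] = 1
            if not directed:
                A[j, i] = 1          # undirected graphs give a symmetric matrix
        return A

    A = adjacency_matrix(4, [(0, 1), (1, 2), (2, 3), (3, 0)])   # a 4-cycle
    assert (A == A.T).all()          # symmetry, as stated above
    assert A.sum(axis=1)[0] == 2     # row sum = degree of vertex 0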
## Examples

### Undirected graphs

The convention followed here (for undirected graphs) is that each edge adds 1 to the appropriate cell in the matrix, and each loop adds 2.[4] This allows the degree of a vertex to be easily found by taking the sum of the values in either its respective row or column in the adjacency matrix.

(Figure: a labeled graph on vertices 1–6 and its adjacency matrix; in the pixel rendering with coordinates 0–23, white fields are zeros and colored fields are ones.)

${\displaystyle {\begin{pmatrix}2&1&0&0&1&0\\1&0&1&0&1&0\\0&1&0&1&0&0\\0&0&1&0&1&1\\1&1&0&1&0&0\\0&0&0&1&0&0\end{pmatrix}}}$

### Directed graphs

In directed graphs, the in-degree of a vertex can be computed by summing the entries of the corresponding column, and the out-degree can be computed by summing the entries of the corresponding row.

(Figure: a labeled directed graph and its adjacency matrix; coordinates are 0–23.) As the graph is directed, the matrix is not necessarily symmetric.

### Trivial graphs

The adjacency matrix of a complete graph contains all ones except along the diagonal, where there are only zeros. The adjacency matrix of an empty graph is a zero matrix.

## Properties

### Spectrum

The adjacency matrix of an undirected simple graph is symmetric, and therefore has a complete set of real eigenvalues and an orthogonal eigenvector basis. The set of eigenvalues of a graph is the spectrum of the graph.[5] It is common to denote the eigenvalues by

${\displaystyle \lambda _{1}\geq \lambda _{2}\geq \cdots \geq \lambda _{n}.}$

The greatest eigenvalue ${\displaystyle \lambda _{1}}$ is bounded above by the maximum degree. This can be seen as a result of the Perron–Frobenius theorem, but it can be proved easily. Let v be one eigenvector associated to ${\displaystyle \lambda _{1}}$ and x the component in which v has maximum absolute value. Without loss of generality assume vx is positive, since otherwise you simply take the eigenvector ${\displaystyle -v}$, also associated to ${\displaystyle \lambda _{1}}$. Then

${\displaystyle \lambda _{1}v_{x}=(Av)_{x}=\sum _{y=1}^{n}A_{x,y}v_{y}\leq \sum _{y=1}^{n}A_{x,y}v_{x}=v_{x}\deg(x).}$

For d-regular graphs, d is the first eigenvalue of A for the vector v = (1, …, 1) (it is easy to check that it is an eigenvalue and it is the maximum because of the above bound). The multiplicity of this eigenvalue is the number of connected components of G, in particular ${\displaystyle \lambda _{1}>\lambda _{2}}$ for connected graphs. It can be shown that for each eigenvalue ${\displaystyle \lambda _{i}}$, its opposite ${\displaystyle -\lambda _{i}=\lambda _{n+1-i}}$ is also an eigenvalue of A if G is a bipartite graph.[citation needed] In particular −d is an eigenvalue of bipartite graphs.

The difference ${\displaystyle \lambda _{1}-\lambda _{2}}$ is called the spectral gap and it is related to the expansion of G. It is also useful to introduce the spectral radius of ${\displaystyle A}$, denoted by ${\displaystyle \lambda (G)=\max _{|\lambda _{i}|<d}|\lambda _{i}|}$. This number is bounded by ${\displaystyle \lambda (G)\geq 2{\sqrt {d-1}}-o(1)}$. This bound is tight in the Ramanujan graphs, which have applications in many areas.

### Isomorphism and invariants

Suppose two directed or undirected graphs G1 and G2 with adjacency matrices A1 and A2 are given. G1 and G2 are isomorphic if and only if there exists a permutation matrix P such that

${\displaystyle PA_{1}P^{-1}=A_{2}.}$

In particular, A1 and A2 are similar and therefore have the same minimal polynomial, characteristic polynomial, eigenvalues, determinant and trace.
These can therefore serve as isomorphism invariants of graphs. However, two graphs may possess the same set of eigenvalues but not be isomorphic.[6] Such linear operators are said to be isospectral.

### Matrix powers

If A is the adjacency matrix of the directed or undirected graph G, then the matrix A^n (i.e., the matrix product of n copies of A) has an interesting interpretation: the element (i, j) gives the number of (directed or undirected) walks of length n from vertex i to vertex j. If n is the smallest nonnegative integer such that for some i, j the element (i, j) of A^n is positive, then n is the distance between vertex i and vertex j. This implies, for example, that the number of triangles in an undirected graph G is exactly the trace of A^3 divided by 6. Note that the adjacency matrix can be used to determine whether or not the graph is connected.

## Data structures

The adjacency matrix may be used as a data structure for the representation of graphs in computer programs for manipulating graphs. The main alternative data structure, also in use for this application, is the adjacency list.[7][8]

Because each entry in the adjacency matrix requires only one bit, it can be represented in a very compact way, occupying only |V|^2/8 bytes to represent a directed graph, or (by using a packed triangular format and only storing the lower triangular part of the matrix) approximately |V|^2/16 bytes to represent an undirected graph. Although slightly more succinct representations are possible, this method gets close to the information-theoretic lower bound for the minimum number of bits needed to represent all n-vertex graphs.[9] For storing graphs in text files, fewer bits per byte can be used to ensure that all bytes are text characters, for instance by using a Base64 representation.[10] Besides avoiding wasted space, this compactness encourages locality of reference. However, for a large sparse graph, adjacency lists require less storage space, because they do not waste any space to represent edges that are not present.[8][11]

An alternative form of adjacency matrix (which, however, requires a larger amount of space) replaces the numbers in each element of the matrix with pointers to edge objects (when edges are present) or null pointers (when there is no edge).[11] It is also possible to store edge weights directly in the elements of an adjacency matrix.[8]

Besides the space tradeoff, the different data structures also facilitate different operations. Finding all vertices adjacent to a given vertex in an adjacency list is as simple as reading the list, and takes time proportional to the number of neighbors. With an adjacency matrix, an entire row must instead be scanned, which takes a larger amount of time, proportional to the number of vertices in the whole graph. On the other hand, testing whether there is an edge between two given vertices can be determined at once with an adjacency matrix, while requiring time proportional to the minimum degree of the two vertices with the adjacency list.[8][11]

## References

1. ^ Biggs, Norman (1993), Algebraic Graph Theory, Cambridge Mathematical Library (2nd ed.), Cambridge University Press, Definition 2.1, p. 7.
2. ^ Harary, Frank (1962), "The determinant of the adjacency matrix of a graph", SIAM Review, 4 (3): 202–210, Bibcode:1962SIAMR...4..202H, doi:10.1137/1004057, MR 0144330.
3. ^ Seidel, J. J. (1968). "Strongly Regular Graphs with (−1, 1, 0) Adjacency Matrix Having Eigenvalue 3". Lin. Alg. Appl. 1 (2): 281–298. doi:10.1016/0024-3795(68)90008-6.
4. ^ Shum, Kenneth; Blake, Ian (2003-12-18). "Expander graphs and codes". Volume 68 of DIMACS series in discrete mathematics and theoretical computer science. Algebraic Coding Theory and Information Theory: DIMACS Workshop. American Mathematical Society. p. 63.
5. ^ Biggs (1993), Chapter 2 ("The spectrum of a graph"), pp. 7–13.
6. ^ Godsil, Chris; Royle, Gordon (2001), Algebraic Graph Theory, Springer, ISBN 0-387-95241-1, p. 164.
7. ^ Goodrich & Tamassia (2015), p. 361: "There are two data structures that people often use to represent graphs, the adjacency list and the adjacency matrix."
8. ^ a b c d Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2001), "Section 22.1: Representations of graphs", Introduction to Algorithms (Second ed.), MIT Press and McGraw-Hill, pp. 527–531, ISBN 0-262-03293-7.
9. ^ Turán, György (1984), "On the succinct representation of graphs", Discrete Applied Mathematics, 8 (3): 289–294, doi:10.1016/0166-218X(84)90126-4, MR 0749658.
10. ^
11. ^ a b c Goodrich, Michael T.; Tamassia, Roberto (2015), Algorithm Design and Applications, Wiley, p. 363.
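A quick numerical companion to the Matrix powers section above (numpy is an assumption; the graph is the complete graph K3):

    import numpy as np

    # Walks of length n are counted by A^n, so the number of triangles
    # in an undirected graph is trace(A^3) / 6, as noted above.
    A = np.array([[0, 1, 1],
                  [1, 0, 1],
                  [1, 1, 0]])        # adjacency matrix of K3
    A3 = np.linalg.matrix_power(A, 3)
    print(A3[0, 0])                  # 2 closed walks of length 3 from vertex 0
    print(np.trace(A3) // 6)         # 1 triangle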
proofpile-shard-0030-332
{ "provenance": "003.jsonl.gz:333" }
# zbMATH — the first resource for mathematics

Damped wave equation. (Equation des ondes amorties.) (French) Zbl 0863.58068

Boutet de Monvel, Anne (ed.) et al., Algebraic and geometric methods in mathematical physics. Proceedings of the 1st Ukrainian-French-Romanian summer school, Kaciveli, Ukraine, September 1-14, 1993. Dordrecht: Kluwer Academic Publishers. Math. Phys. Stud. 19, 73-109 (1996).

Let $(M,g)$ be a $C^\infty$ compact Riemannian manifold with boundary, with Laplacian $\Delta$, and let $a$ be a $C^\infty(M, \mathbb{R}^+)$ function. One considers the evolution problem

$\begin{cases} \bigl(\partial^2_t - \Delta + 2a(x)\,\partial_t\bigr) u = 0 \text{ in } \mathbb{R}_t \times M, \\ u = 0 \text{ on } \mathbb{R}_t \times \partial M, \\ u|_{t=0} = u_0 \in H^1_0(M), \quad \dfrac{\partial u}{\partial t}\Big|_{t=0} = u_1 \in L^2(M). \end{cases} \tag{*}$

The author obtains sharp estimates for the resolvent of $A_a = \left(\begin{smallmatrix} 0 & Id \\ \Delta & -2a \end{smallmatrix}\right)$ and for the energy. The best exponential decay rate of the solutions of the evolution problem (*) is computed in terms of the spectrum and of the average of $a(x)$ on the geodesics of $M$.

For the entire collection see [Zbl 0833.00031].

##### MSC:

58J45 Hyperbolic equations on manifolds
35L05 Wave equation
35S15 Boundary value problems for PDEs with pseudodifferential operators
58J50 Spectral problems; spectral geometry; scattering theory on manifolds
53C22 Geodesics in global differential geometry
proofpile-shard-0030-333
{ "provenance": "003.jsonl.gz:334" }
# rotation to quaternion matrix handedness

I understand that quaternions do not have handedness, but rotation matrices derived from unit quaternions do. The following formula is given by Wikipedia for quaternion to rotation matrix conversion: Given the unit quaternion $q = w + xi + yj + zk$, the equivalent left-handed (post-multiplied) 3×3 rotation matrix is

$$Q = \begin{bmatrix} 1 - 2 y^2 - 2 z^2 & 2 x y - 2 z w & 2 x z + 2 y w \\ 2 x y + 2 z w & 1 - 2 x^2 - 2 z^2 & 2 y z - 2 x w \\ 2 x z - 2 y w & 2 y z + 2 x w & 1 - 2 x^2 - 2 y^2 \end{bmatrix}.$$

As mentioned, this formula is relative to a left-handed coordinate frame. What's the right-handed counterpart?

Best :)

• What have you tried? The difference between conventions is just a matter of quite straightforward calculations. – skyking Jan 29 '16 at 10:41
• I have a bug in my software and I want to search for possible candidates: I want to be 100% sure that I'm not missing something with those handedness conventions. – jcolafrancesco Jan 29 '16 at 10:52
• Then it would maybe be a point to post what conventions you're using (e.g. your coordinate system, the way you apply rotation matrices and the way you apply quaternions) and what formulas you're using. If nothing else it would show that you've made some effort of your own. – skyking Jan 29 '16 at 11:00

Hint: Given the unit quaternion $q = w + x \vec i + y\vec j + z \vec k = w + \vec v$, the matrix $Q$ results from the representation of a rotation of a vector $\vec p = p_x \vec i + p_y \vec j + p_z \vec k$ as:

$$Q(\vec p) = q \vec p q^{-1}$$

The resulting rotation is a rotation of angle $\theta$ around an axis oriented by a versor $\vec u$ such that:

$$\cos \frac{\theta}{2}=\frac{w}{|q|} \qquad \sin \frac{\theta}{2}=\frac{|\vec v|}{|q|}$$

and

$$\vec u=\frac{\vec v}{|\vec v|}$$

This is a counter-clockwise rotation $R_{\vec u, \theta}$ around the axis $\vec u$. You can find the clockwise rotation around the same axis by changing the angle to $-\theta$, or by inverting the orientation of the versor $\vec u$ (note that if you perform both of these transformations the rotation remains the same). What does this mean for the quaternion $q$?

• Am I right if I say that the right-handed version is just the transpose of the left-handed one? – jcolafrancesco Jan 29 '16 at 13:31
• Changing the sign of $\theta$ or the sign of $\vec u$ is the same as taking the conjugate of $q$, i.e. $\bar q = w - xi - yj - zk$, and this gives a matrix $Q'$ that is the transpose of $Q$... so: Yes! You are right. – Emilio Novati Jan 29 '16 at 13:39
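A quick numerical check of the accepted conclusion, using the matrix layout quoted in the question (numpy is an assumption):

    import numpy as np

    def quat_to_matrix(w, x, y, z):
        """Rotation matrix for the unit quaternion w + xi + yj + zk, as quoted above."""
        return np.array([
            [1 - 2*y*y - 2*z*z, 2*x*y - 2*z*w,     2*x*z + 2*y*w],
            [2*x*y + 2*z*w,     1 - 2*x*x - 2*z*z, 2*y*z - 2*x*w],
            [2*x*z - 2*y*w,     2*y*z + 2*x*w,     1 - 2*x*x - 2*y*y],
        ])

    q = np.array([1.0, 2.0, 3.0, 4.0])
    q /= np.linalg.norm(q)                  # normalise to a unit quaternion
    w, x, y, z = q
    Q = quat_to_matrix(w, x, y, z)
    Qc = quat_to_matrix(w, -x, -y, -z)      # conjugate quaternion
    assert np.allclose(Qc, Q.T)             # conjugation <=> transposition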
proofpile-shard-0030-334
{ "provenance": "003.jsonl.gz:335" }
# GMP::Coefficient::Get

The function GMP::Coefficient::Get retrieves a (linear) coefficient in a generated mathematical program.

    GMP::Coefficient::Get(
        GMP,        ! (input) a generated mathematical program
        row,        ! (input) a scalar reference or row number
        column      ! (input) a scalar reference or column number
    )

## Arguments

GMP
    An element in AllGeneratedMathematicalPrograms.

row
    A scalar reference to an existing row in the model or the number of that row in the range $\{ 0 .. m-1 \}$ where $m$ is the number of rows in the matrix.

column
    A scalar reference to an existing column in the model or the number of that column in the range $\{ 0 .. n-1 \}$ where $n$ is the number of columns in the matrix.

## Return Value

The value of the specified coefficient in the generated mathematical program.

Note
In case the generated mathematical program is nonlinear, this function will return 0 if the column is part of a nonlinear term in the row. However, if the row is pure quadratic then this function will return the linear coefficient value for a quadratic column.

## Example

Consider a GMP containing a constraint e1 with definition 2*x1 + 3*x2 + x2^3 = 0. Then GMP::Coefficient::Get(GMP, e1, x1) will return 2. Because column x2 is part of the nonlinear term x2^3, GMP::Coefficient::Get(GMP, e1, x2) will return 0.
proofpile-shard-0030-335
{ "provenance": "003.jsonl.gz:336" }
# distribution of a product of lognormal distributed random variables

• June 24th 2011, 12:49 PM
Juju

distribution of a product of lognormal distributed random variables

I'm sure this question is very trivial, but I'm a little bit confused right now.

$B=(B_t^1,\ldots,B_t^d)$ is a d-dimensional Brownian motion.

$R_n:=\exp\left\{\mu + \sum\limits_{j=1}^d \sigma (B_{nh}^j-B^j_{(n-1)h})\right\} \quad n=1,2,\ldots,N$

$R:=\exp\left\{\mu + \sum\limits_{j=1}^d \sigma B^j_h\right\}$

$\mu, \sigma$ are just constants.

Am I right that the $R_n$ are independent for $n=1,\ldots, N$ and that $R_n\stackrel{d}{=}R$, where $\stackrel{d}{=}$ denotes equality in distribution?
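The claim follows from the independent, stationary increments of Brownian motion; here is a quick simulation sketch to check it numerically (numpy is an assumption, and the values of mu, sigma, h, d are arbitrary illustrative choices):

    import numpy as np

    rng = np.random.default_rng(0)
    mu, sigma, h, d, N, trials = 0.1, 0.2, 0.5, 3, 4, 100_000

    # increments B_{nh}^j - B_{(n-1)h}^j are i.i.d. N(0, h), independent over j and n
    incr = rng.normal(0.0, np.sqrt(h), size=(trials, N, d))
    R_n = np.exp(mu + sigma * incr.sum(axis=2))   # shape (trials, N): R_1 .. R_N

    # R uses B_h^j ~ N(0, h), so each R_n should match R in distribution
    R = np.exp(mu + sigma * rng.normal(0.0, np.sqrt(h), size=(trials, d)).sum(axis=1))
    print(R_n[:, 0].mean(), R.mean())             # close for large 'trials'
    print(R_n[:, 0].std(), R.std())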
proofpile-shard-0030-336
{ "provenance": "003.jsonl.gz:337" }
Non-linear Differential Equations and Pseudo-randomness

linford86
I was thinking about the non-linear Navier-Stokes equation this morning and was briefly browsing a text on the subject. I'm aware that one popular approach to dealing with turbulence is to take averages and look at correlators (which, in turn, can be related to field theory.) Now, one thing which strikes me as odd about this statistical approach to dealing with turbulence is that the Navier-Stokes equation is fully deterministic; given some set of initial and boundary conditions, the entire time evolution of the system is determined. So, I have a question about the Navier-Stokes equation: is it known that the solutions are pseudo-random for high Reynolds number? Of course, one can extend this more generally to dynamical systems with positive Lyapunov exponents -- are the solutions to such systems (e.g., chaotic differential equations) known to be pseudo-random? If so, it would appear to be natural to attack them statistically, since they would pass tests for randomness.

Answers and Replies

Eynstone
Many solutions having a global attractor in 3D are pseudo-random. However, I haven't heard of a general theorem.

linford86
Interesting. Do you have any examples? I'd like to know more about this subject.

Eynstone
Check the Lorenz attractor, for instance. There are a few theorems connecting ergodicity & randomness, but the general case of the Navier-Stokes equation is monstrous. It's not known if a general solution exists (a million-dollar problem), let alone the contingent property of pseudo-randomness.

linford86
Yes, I'm aware of the related Millennium Problem. At any rate, does anyone have any information connecting ergodicity and pseudo-randomness for other systems? That seems intriguing to me.
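Since the Lorenz attractor is raised as the canonical example, here is a sketch of the sensitive dependence on initial conditions behind that pseudo-randomness (numpy is an assumption; the parameters are the classic chaotic values, and forward Euler is a deliberately crude integrator used only for illustration):

    import numpy as np

    def lorenz_step(state, dt=0.01, s=10.0, r=28.0, b=8.0/3.0):
        """One forward-Euler step of the Lorenz system."""
        x, y, z = state
        return state + dt * np.array([s*(y - x), x*(r - z) - y, x*y - b*z])

    a = np.array([1.0, 1.0, 1.0])
    c = a + np.array([1e-9, 0.0, 0.0])       # nearby initial condition
    for _ in range(2500):                    # ~25 time units
        a, c = lorenz_step(a), lorenz_step(c)
    print(np.linalg.norm(a - c))             # the 1e-9 gap has grown to the size
                                             # of the attractor itself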
proofpile-shard-0030-337
{ "provenance": "003.jsonl.gz:338" }
## Zhang, Yan X - Four Variations on Graded Posets

dmtcs:2492 - Discrete Mathematics & Theoretical Computer Science, January 1, 2015, DMTCS Proceedings, 27th International Conference on Formal Power Series and Algebraic Combinatorics (FPSAC 2015)

Authors: Zhang, Yan X

We explore the enumeration of some natural classes of graded posets, including $(2 + 2)$-avoiding graded posets, $(3 + 1)$-avoiding graded posets, $(2 + 2)$- and $(3 + 1)$-avoiding graded posets, and the set of all graded posets. As part of this story, we discuss a situation when we can switch between enumeration of labeled and unlabeled objects with ease, which helps us generalize a result by Postnikov and Stanley from the theory of hyperplane arrangements, answer a question posed by Stanley, and see an old result of Klarner in a new light.

Source: oai:HAL:hal-01337770v1
Volume: DMTCS Proceedings, 27th International Conference on Formal Power Series and Algebraic Combinatorics (FPSAC 2015)
Section: Proceedings
Published on: January 1, 2015
Submitted on: November 21, 2016
Keywords: posets, combinatorics, generating functions, poset avoidance, linear algebra, [INFO.INFO-DM] Computer Science [cs]/Discrete Mathematics [cs.DM]
proofpile-shard-0030-338
{ "provenance": "003.jsonl.gz:339" }
# Is y=x a linear function?

Yes. A linear function is one whose highest power of the variable is one, and $y=x$ has highest power one. A quadratic has ${x}^{2}$ as the highest power, etc.
proofpile-shard-0030-339
{ "provenance": "003.jsonl.gz:340" }
# Iterative Minimization lemma proof

1. Dec 14, 2014

### rayge

1. The problem statement, all variables and given/known data

$f(x)$ is the function we want to minimize. Beyond being real-valued, there are no other conditions on it. (I'm surprised it's not at least continuous, but the book doesn't say that's a condition.)

We choose the next $x^k$ through the relation $x^k = x^{k-1} + \alpha_{k}d^k$. We assume $d^k$ is a descent direction. That is, for small positive $\alpha$, $f(x^{k-1} + \alpha d^k)$ is decreasing.

Here's the lemma we want to prove: When $x^k$ is constructed using the optimal $\alpha$, we have $\nabla f(x^k) \cdot d^k = 0$

3. The attempt at a solution

It's suggested in the book that we should differentiate the function $f(x^{k-1} + \alpha d^k)$ with respect to $\alpha$. My problem is that I don't know how to differentiate a function that isn't defined. My first guess went something like this, but I don't see how I'm any closer to a solution.

$$\frac{\partial}{\partial \alpha} f(x^{k-1} + \alpha d^k) = \frac{\partial}{\partial \alpha} f(x^k)$$
$$f'(x^{k-1} + \alpha d^k)d^k = f'(x^k)(0)$$
$$f'(x^{k-1} + \alpha d^k)d^k = 0$$

I'm really not looking for an answer, but if someone could point me to where I could learn about taking derivatives of functions that aren't given explicitly, that would be helpful. I'm guessing that somehow I can extract a gradient out of this, and a dot product, but I'm feeling pretty confused.

2. Dec 14, 2014

### Stephen Tashi

If you're expected to differentiate $f$ it would have to be differentiable, hence continuous. You didn't say what $x^k$ is. I assume it is a vector. For example, $x^k = ( {x^k}_1, {x^k}_2 )$. Visualize all the arguments of $f$. For example $f( {x^{k-1}}_1 + \alpha {d^k}_1, {x^{k-1}}_2 + \alpha {d^k}_2)$.

Things might look more familiar in different notation. If we have a real valued function $f(x,y)$ of two real variables $x,y$ and $X(\alpha), \ Y(\alpha)$ are real valued functions of $\alpha$, then

$\frac{df}{d\alpha} = \frac{\partial f}{\partial x}\frac{dX}{d\alpha} + \frac{\partial f}{\partial y} \frac{dY}{d\alpha} = \nabla f \cdot \left(\frac{dX}{d\alpha},\frac{dY}{d\alpha}\right)$

3. Dec 14, 2014

### Ray Vickson

Taking derivatives does not make sense unless $f$ is differentiable, so automatically continuous---in fact, even smoother than continuous. Maybe the book does not say that, but it is part of the reader's assumed background. Furthermore (just to be picky), expressions like $\nabla f(x^k) \cdot d^k$ depend on even more smoothness: $f$ must be continuously differentiable (not just plain differentiable). If the book does not make this clear, that is a flaw; it should definitely cover these points somewhere.

BTW: the stated result is false if $f$ is not continuously differentiable; there are some easy counterexamples.

Last edited: Dec 14, 2014

4. Dec 14, 2014

### rayge

Actually it is made clear in the beginning of the chapter that we're dealing with quadratic functions of the form $f(x) = \frac{1}{2}\|Ax - b\|^2_2$. My fault for not reading closely enough.

5. Dec 14, 2014

### Ray Vickson

OK, but the result is actually true for any continuously-differentiable $f$.

6. Dec 14, 2014

### rayge

Good to know, thanks. I applied the chain rule as suggested, and got:

$$\frac{d f(x^{k-1} + \alpha d^k)}{d\alpha} = \nabla f \cdot d^k$$

From here, it seems reasonable that $\frac{d f(x^{k-1} + \alpha d^k)}{d\alpha} = 0$ is our optimal value, but I don't know how to prove that from our assumptions. For small $\alpha$, $f(x^{k-1} + \alpha d^k)$ is decreasing.
For large enough $\alpha$, I think $f(x^{k-1} + \alpha d^k)$ is increasing (i.e. we pass by the optimal point). I just don't have a solid grasp on how the function works, and assuming it's increasing for large $\alpha$, why that is the case. Thanks again. 7. Dec 14, 2014 ### Ray Vickson You are searching for a minimum along a line, so you are trying to minimize a univariate function $\phi(\alpha)$, which just happens to be of the form $$\phi(\alpha) = f(x^{k-1} + \alpha d^k)$$ for some vector constants $x^{k-1}, d^k$. How do you minimize functions of one variable? 8. Dec 15, 2014 ### Ray Vickson I see that my previous response did not deal with all of your question. I cannot supply too many details without violating PF helping policies, but I can give a hint. In your specific case you have $f(\vec{x}) = || A \vec{x} - \vec{b}||^2$. So letting $\vec{y} = A \vec{x}^k - \vec{b}$ and $\vec{p} = A \vec{d}^k$, we have that $\phi(\alpha) = f(\vec{x}^k + \vec{d}^k \alpha)$ has the form $\phi(\alpha) = ||\vec{y} + \alpha \vec{p}||^2 = a + b \alpha + c \alpha^2$. How do you find $a, b, c$? Given that $\vec{d}^k$ is a descent direction, what does this say about the values or signs of $a,b,c$? What do these signs say about the behavior of $\phi(\alpha)$?
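To close the loop on the hints above, here is a sketch of the exact line search for $f(x)=\frac{1}{2}\|Ax-b\|^2_2$ that checks the lemma $\nabla f(x^k) \cdot d^k = 0$ numerically (numpy is an assumption, and the steepest-descent direction is an illustrative choice of descent direction):

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.normal(size=(8, 5))
    b = rng.normal(size=8)
    x = rng.normal(size=5)

    grad = A.T @ (A @ x - b)             # gradient of f(x) = 0.5*||Ax - b||^2
    d = -grad                            # a descent direction (steepest descent)

    # phi(alpha) = f(x + alpha*d) = a0 + b1*alpha + c*alpha^2 with c = 0.5*||Ad||^2 > 0,
    # so the minimizing step is alpha* = -(grad . d) / ||Ad||^2
    alpha = -(grad @ d) / (np.linalg.norm(A @ d) ** 2)
    x_new = x + alpha * d

    grad_new = A.T @ (A @ x_new - b)
    print(grad_new @ d)                  # ~0, as the lemma states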
proofpile-shard-0030-340
{ "provenance": "003.jsonl.gz:341" }
\begin{align}&r=\sqrt{{x}^{2}+{y}^{2}} \\ &r=\sqrt{{0}^{2}+{4}^{2}} \\ &r=\sqrt{16} \\ &r=4 \end{align}. This in general is written for any complex number as: The product of two complex numbers in polar form is found by _____ their moduli and _____ their arguments multiplying, adding r₁(cosθ₁+i sinθ₁)/r₂(cosθ₂+i sinθ₂)= The polar form or trigonometric form of a complex number P is z = r (cos θ + i sin θ) The value "r" represents the absolute value or modulus of the complex number … The absolute value $z$ is 5. The rectangular form of the given number in complex form is $12+5i$. Many amazing properties of complex numbers are revealed by looking at them in polar form!Let’s learn how to convert a complex number into polar … If ${z}_{1}={r}_{1}\left(\cos {\theta }_{1}+i\sin {\theta }_{1}\right)$ and ${z}_{2}={r}_{2}\left(\cos {\theta }_{2}+i\sin {\theta }_{2}\right)$, then the product of these numbers is given as: \begin{align}{z}_{1}{z}_{2}&={r}_{1}{r}_{2}\left[\cos \left({\theta }_{1}+{\theta }_{2}\right)+i\sin \left({\theta }_{1}+{\theta }_{2}\right)\right] \\ {z}_{1}{z}_{2}&={r}_{1}{r}_{2}\text{cis}\left({\theta }_{1}+{\theta }_{2}\right) \end{align}. For the rest of this section, we will work with formulas developed by French mathematician Abraham de Moivre (1667-1754). The n th Root Theorem Complex Numbers In Polar Form De Moivre's Theorem, Products, Quotients, Powers, and nth Roots Prec - Duration: 1:14:05. To better understand the product of complex numbers, we first investigate the trigonometric (or polar) form of a complex number. Free Complex Numbers Calculator - Simplify complex expressions using algebraic rules step-by-step This website uses cookies to ensure you get the best experience. Converting between the algebraic form ( + ) and the polar form of complex numbers is extremely useful. Plot the complex number $2 - 3i$ in the complex plane. Plot the point in the complex plane by moving $a$ units in the horizontal direction and $b$ units in the vertical direction. Substitute the results into the formula: $z=r\left(\cos \theta +i\sin \theta \right)$. θ is the argument of the complex number. Then, multiply through by $r$. Where: 2. Using the formula $\tan \theta =\frac{y}{x}$ gives, \begin{align}&\tan \theta =\frac{1}{1} \\ &\tan \theta =1 \\ &\theta =\frac{\pi }{4} \end{align}. Find the rectangular form of the complex number given $r=13$ and $\tan \theta =\frac{5}{12}$. Subtraction is... To multiply complex numbers in polar form, multiply the magnitudes and add the angles. This polar form is represented with the help of polar coordinates of real and imaginary numbers in the coordinate system. \\ &{z}^{\frac{1}{3}}=2\left(\cos \left(\frac{8\pi }{9}\right)+i\sin \left(\frac{8\pi }{9}\right)\right) \end{align}[/latex], \begin{align}&{z}^{\frac{1}{3}}=2\left[\cos \left(\frac{2\pi }{9}+\frac{12\pi }{9}\right)+i\sin \left(\frac{2\pi }{9}+\frac{12\pi }{9}\right)\right]&& \text{Add }\frac{2\left(2\right)\pi }{3}\text{ to each angle.} Find θ1 − θ2. Hence, the polar form of 7-5i is represented by: Suppose we have two complex numbers, one in a rectangular form and one in polar form. Complex Number Calculator The calculator will simplify any complex expression, with steps shown. [latex]z=3\left(\cos \left(\frac{\pi }{2}\right)+i\sin \left(\frac{\pi }{2}\right)\right). We begin by evaluating the trigonometric expressions. 
Converting Complex Numbers to Polar Form

Writing a complex number in polar form involves the following conversion formulas:

$\begin{gathered} x=r\cos \theta \\ y=r\sin \theta \\ r=\sqrt{{x}^{2}+{y}^{2}} \end{gathered}$

so that

\begin{align}&z=x+yi \\ &z=\left(r\cos \theta \right)+i\left(r\sin \theta \right) \\ &z=r\left(\cos \theta +i\sin \theta \right) \end{align}

We call this the polar form of a complex number. We use $\theta$ to indicate the angle of direction (just as with polar coordinates); recall that $\cos \theta$ is the adjacent side of the angle $\theta$ over the hypotenuse, and $\sin \theta$ is the opposite side of the angle $\theta$ over the hypotenuse. Nonzero complex numbers written in polar form are equal if and only if they have the same magnitude and their arguments differ by an integer multiple of $2\pi$.

Example: write $z=1+i$ in polar form. First find $r$:

\begin{align}&r=\sqrt{{x}^{2}+{y}^{2}} \\ &r=\sqrt{{\left(1\right)}^{2}+{\left(1\right)}^{2}} \\ &r=\sqrt{2} \end{align}

Then find $\theta$ using the formula $\tan \theta =\frac{y}{x}$:

\begin{align}&\tan \theta =\frac{1}{1} \\ &\tan \theta =1 \\ &\theta =\frac{\pi }{4} \end{align}

so $z=\sqrt{2}\left(\cos \left(\frac{\pi }{4}\right)+i\sin \left(\frac{\pi }{4}\right)\right)$.

Example: find the polar form of $-4+4i$. Here

\begin{align}&r=\sqrt{{x}^{2}+{y}^{2}} \\ &r=\sqrt{{\left(-4\right)}^{2}+{4}^{2}} \\ &r=\sqrt{32} \\ &r=4\sqrt{2} \end{align}

and the angle $\theta$ follows from

\begin{align}&\cos \theta =\frac{x}{r} \\ &\cos \theta =\frac{-4}{4\sqrt{2}} \\ &\cos \theta =-\frac{1}{\sqrt{2}} \\ &\theta ={\cos }^{-1}\left(-\frac{1}{\sqrt{2}}\right)=\frac{3\pi }{4} \end{align}

Always place the angle in the correct quadrant; for instance, for a point lying in Quadrant III with reference angle $\frac{\pi }{3}$, you choose $\theta =\pi +\frac{\pi }{3}=\frac{4\pi }{3}$.

Example: express $z=3i$ as $r\text{cis}\theta$ in polar form: $z=3\left(\cos \left(\frac{\pi }{2}\right)+i\sin \left(\frac{\pi }{2}\right)\right)$. As an exercise, write $z=\sqrt{3}+i$ in polar form.
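These conversions are easy to check numerically. A short sketch using Python's standard cmath module, with the example values worked above (illustrative only):

import cmath

# Rectangular -> polar: cmath.polar returns (r, theta) = (|z|, arg z).
print(cmath.polar(1 + 1j))    # (1.414..., 0.785...) = (sqrt(2), pi/4)
print(cmath.polar(-4 + 4j))   # (5.656..., 2.356...) = (4*sqrt(2), 3*pi/4)
print(cmath.polar(3j))        # (3.0, 1.570...)      = (3, pi/2)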
Converting from Polar Form to Rectangular Form

To convert from polar form to rectangular form, first evaluate the trigonometric functions $\cos \theta$ and $\sin \theta$, then multiply through by $r$ using the distributive property.

Example: convert $z=12\left(\cos \left(\frac{\pi }{6}\right)+i\sin \left(\frac{\pi }{6}\right)\right)$ to rectangular form. We begin by evaluating the trigonometric expressions:

$\begin{gathered}\cos \left(\frac{\pi }{6}\right)=\frac{\sqrt{3}}{2}\\ \sin \left(\frac{\pi }{6}\right)=\frac{1}{2}\end{gathered}$

After substitution, the complex number is $z=12\left(\frac{\sqrt{3}}{2}+\frac{1}{2}i\right)$, and multiplying through,

\begin{align}z&=12\left(\frac{\sqrt{3}}{2}+\frac{1}{2}i\right) \\ &=\left(12\right)\frac{\sqrt{3}}{2}+\left(12\right)\frac{1}{2}i \\ &=6\sqrt{3}+6i \end{align}

The rectangular form of the given point is $6\sqrt{3}+6i$.

Example: find the rectangular form of the complex number given $r=13$ and $\tan \theta =\frac{5}{12}$. Since $\tan \theta =\frac{y}{x}$, we can take $\cos \theta =\frac{12}{13}$ and $\sin \theta =\frac{5}{13}$, consistent with $r=\sqrt{{12}^{2}+{5}^{2}}=13$, so

\begin{align}z&=13\left(\cos \theta +i\sin \theta \right) \\ &=13\left(\frac{12}{13}+\frac{5}{13}i\right) \\ &=12+5i \end{align}

The rectangular form of the given number is $12+5i$. As an exercise, convert $z=4\left(\cos \left(\frac{11\pi }{6}\right)+i\sin \left(\frac{11\pi }{6}\right)\right)$ to rectangular form.
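The reverse direction can be checked the same way; cmath.rect builds $r\left(\cos \theta +i\sin \theta \right)$ directly (again a small sketch with the example values from above):

import cmath
import math

# Polar -> rectangular: cmath.rect(r, theta) = r*(cos(theta) + i*sin(theta)).
print(cmath.rect(12, math.pi / 6))          # 6*sqrt(3) + 6i = (10.392... + 6j)
print(cmath.rect(13, math.atan2(5, 12)))    # r = 13, tan(theta) = 5/12 -> (12 + 5j)
print(cmath.rect(4, 11 * math.pi / 6))      # the exercise above: (3.464... - 2j)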
Products and Quotients of Complex Numbers in Polar Form

The product of two complex numbers in polar form is found by multiplying their moduli and adding their arguments. If ${z}_{1}={r}_{1}\left(\cos {\theta }_{1}+i\sin {\theta }_{1}\right)$ and ${z}_{2}={r}_{2}\left(\cos {\theta }_{2}+i\sin {\theta }_{2}\right)$, then the product of these numbers is given as:

\begin{align}{z}_{1}{z}_{2}&={r}_{1}{r}_{2}\left[\cos \left({\theta }_{1}+{\theta }_{2}\right)+i\sin \left({\theta }_{1}+{\theta }_{2}\right)\right] \\ {z}_{1}{z}_{2}&={r}_{1}{r}_{2}\text{cis}\left({\theta }_{1}+{\theta }_{2}\right) \end{align}

Example: find the product ${z}_{1}{z}_{2}$, given ${z}_{1}=4\left(\cos \left(80^\circ \right)+i\sin \left(80^\circ \right)\right)$ and ${z}_{2}=2\left(\cos \left(145^\circ \right)+i\sin \left(145^\circ \right)\right)$:

\begin{align}&{z}_{1}{z}_{2}=4\cdot 2\left[\cos \left(80^\circ +145^\circ \right)+i\sin \left(80^\circ +145^\circ \right)\right] \\ &{z}_{1}{z}_{2}=8\left[\cos \left(225^\circ \right)+i\sin \left(225^\circ \right)\right] \\ &{z}_{1}{z}_{2}=8\left[\cos \left(\frac{5\pi }{4}\right)+i\sin \left(\frac{5\pi }{4}\right)\right] \\ &{z}_{1}{z}_{2}=8\left[-\frac{\sqrt{2}}{2}+i\left(-\frac{\sqrt{2}}{2}\right)\right] \\ &{z}_{1}{z}_{2}=-4\sqrt{2}-4i\sqrt{2} \end{align}

To divide complex numbers in polar form, we divide the moduli and subtract the arguments. If ${z}_{1}={r}_{1}\left(\cos {\theta }_{1}+i\sin {\theta }_{1}\right)$ and ${z}_{2}={r}_{2}\left(\cos {\theta }_{2}+i\sin {\theta }_{2}\right)$, then the quotient of these numbers is

\begin{align}&\frac{{z}_{1}}{{z}_{2}}=\frac{{r}_{1}}{{r}_{2}}\left[\cos \left({\theta }_{1}-{\theta }_{2}\right)+i\sin \left({\theta }_{1}-{\theta }_{2}\right)\right],\quad {z}_{2}\ne 0\\ &\frac{{z}_{1}}{{z}_{2}}=\frac{{r}_{1}}{{r}_{2}}\text{cis}\left({\theta }_{1}-{\theta }_{2}\right),\quad {z}_{2}\ne 0\end{align}

Notice that the moduli are divided and the angles are subtracted.

Example: find the quotient of ${z}_{1}=2\left(\cos \left(213^\circ \right)+i\sin \left(213^\circ \right)\right)$ and ${z}_{2}=4\left(\cos \left(33^\circ \right)+i\sin \left(33^\circ \right)\right)$:

\begin{align}&\frac{{z}_{1}}{{z}_{2}}=\frac{2}{4}\left[\cos \left(213^\circ -33^\circ \right)+i\sin \left(213^\circ -33^\circ \right)\right] \\ &\frac{{z}_{1}}{{z}_{2}}=\frac{1}{2}\left[\cos \left(180^\circ \right)+i\sin \left(180^\circ \right)\right] \\ &\frac{{z}_{1}}{{z}_{2}}=\frac{1}{2}\left[-1+0i\right] \\ &\frac{{z}_{1}}{{z}_{2}}=-\frac{1}{2}+0i \\ &\frac{{z}_{1}}{{z}_{2}}=-\frac{1}{2} \end{align}
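As a sketch, the same two worked examples can be verified with direct complex arithmetic; from_polar below is a small helper assumed purely for convenience:

import cmath
import math

def from_polar(r, deg):
    # Build r*(cos(theta) + i*sin(theta)) from an angle given in degrees.
    return cmath.rect(r, math.radians(deg))

# Product: 4 cis 80 degrees times 2 cis 145 degrees = 8 cis 225 degrees.
z1, z2 = from_polar(4, 80), from_polar(2, 145)
r, theta = cmath.polar(z1 * z2)
print(r, math.degrees(theta) % 360)   # 8.0 and 225.0: moduli multiplied, angles added

# Quotient: 2 cis 213 degrees divided by 4 cis 33 degrees = (1/2) cis 180 degrees.
z1, z2 = from_polar(2, 213), from_polar(4, 33)
print(z1 / z2)                        # approximately (-0.5 + 0j)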
Finding Powers of Complex Numbers in Polar Form

Finding powers of complex numbers is greatly simplified using De Moivre's Theorem. If $z=r\left(\cos \theta +i\sin \theta \right)$ is a complex number and $n$ is a positive integer, then

\begin{align}&{z}^{n}={r}^{n}\left[\cos \left(n\theta \right)+i\sin \left(n\theta \right)\right]\\ &{z}^{n}={r}^{n}\text{cis}\left(n\theta \right)\end{align}

In other words, to find the power ${z}^{n}$, raise $r$ to the power $n$ and multiply $\theta$ by $n$: the argument is added to itself as many times as the power to which we are raising.

Example: evaluate the expression ${\left(1+i\right)}^{5}$ using De Moivre's Theorem. Since $1+i=\sqrt{2}\left(\cos \left(\frac{\pi }{4}\right)+i\sin \left(\frac{\pi }{4}\right)\right)$,

\begin{align}&{\left(a+bi\right)}^{n}={r}^{n}\left[\cos \left(n\theta \right)+i\sin \left(n\theta \right)\right]\\ &{\left(1+i\right)}^{5}={\left(\sqrt{2}\right)}^{5}\left[\cos \left(5\cdot \frac{\pi }{4}\right)+i\sin \left(5\cdot \frac{\pi }{4}\right)\right] \\ &{\left(1+i\right)}^{5}=4\sqrt{2}\left[\cos \left(\frac{5\pi }{4}\right)+i\sin \left(\frac{5\pi }{4}\right)\right] \\ &{\left(1+i\right)}^{5}=4\sqrt{2}\left[-\frac{\sqrt{2}}{2}+i\left(-\frac{\sqrt{2}}{2}\right)\right] \\ &{\left(1+i\right)}^{5}=-4 - 4i \end{align}
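A numerical check of De Moivre's Theorem on this example (illustrative sketch):

import cmath

z = 1 + 1j
r, theta = cmath.polar(z)

# De Moivre: z**n = r**n * (cos(n*theta) + i*sin(n*theta)).
n = 5
print(cmath.rect(r ** n, n * theta))   # approximately (-4 - 4j)
print(z ** 5)                          # direct computation agrees: (-4 - 4j)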
Finding Roots of Complex Numbers in Polar Form

To find the $n$th root of a complex number in polar form, we use the $n$th Root Theorem (a consequence of De Moivre's Theorem) and raise the complex number to a power with a rational exponent: if $z=r\left(\cos \theta +i\sin \theta \right)$, then

${z}^{\frac{1}{n}}={r}^{\frac{1}{n}}\left[\cos \left(\frac{\theta }{n}+\frac{2k\pi }{n}\right)+i\sin \left(\frac{\theta }{n}+\frac{2k\pi }{n}\right)\right]$

where $k=0,1,2,3,\dots ,n-1$. We add $\frac{2k\pi }{n}$ to $\frac{\theta }{n}$ in order to obtain the periodic roots.

Example: find the cube roots of $z=8\left(\cos \left(\frac{2\pi }{3}\right)+i\sin \left(\frac{2\pi }{3}\right)\right)$.

\begin{align}&{z}^{\frac{1}{3}}={8}^{\frac{1}{3}}\left[\cos \left(\frac{\frac{2\pi }{3}}{3}+\frac{2k\pi }{3}\right)+i\sin \left(\frac{\frac{2\pi }{3}}{3}+\frac{2k\pi }{3}\right)\right] \\ &{z}^{\frac{1}{3}}=2\left[\cos \left(\frac{2\pi }{9}+\frac{2k\pi }{3}\right)+i\sin \left(\frac{2\pi }{9}+\frac{2k\pi }{3}\right)\right] \end{align}

There will be three roots: $k=0,1,2$. When $k=0$, we have

${z}^{\frac{1}{3}}=2\left(\cos \left(\frac{2\pi }{9}\right)+i\sin \left(\frac{2\pi }{9}\right)\right)$

When $k=1$, add $\frac{2\left(1\right)\pi }{3}$ to each angle:

\begin{align}&{z}^{\frac{1}{3}}=2\left[\cos \left(\frac{2\pi }{9}+\frac{6\pi }{9}\right)+i\sin \left(\frac{2\pi }{9}+\frac{6\pi }{9}\right)\right] \\ &{z}^{\frac{1}{3}}=2\left(\cos \left(\frac{8\pi }{9}\right)+i\sin \left(\frac{8\pi }{9}\right)\right) \end{align}

When $k=2$, add $\frac{2\left(2\right)\pi }{3}$ to each angle:

\begin{align}&{z}^{\frac{1}{3}}=2\left[\cos \left(\frac{2\pi }{9}+\frac{12\pi }{9}\right)+i\sin \left(\frac{2\pi }{9}+\frac{12\pi }{9}\right)\right] \\ &{z}^{\frac{1}{3}}=2\left(\cos \left(\frac{14\pi }{9}\right)+i\sin \left(\frac{14\pi }{9}\right)\right)\end{align}

Remember to find the common denominator to simplify fractions in situations like this one. Similarly, the four fourth roots of $16\left(\cos \left(120^\circ \right)+i\sin \left(120^\circ \right)\right)$ are

${z}_{0}=2\left(\cos \left(30^\circ \right)+i\sin \left(30^\circ \right)\right)$, ${z}_{1}=2\left(\cos \left(120^\circ \right)+i\sin \left(120^\circ \right)\right)$, ${z}_{2}=2\left(\cos \left(210^\circ \right)+i\sin \left(210^\circ \right)\right)$, ${z}_{3}=2\left(\cos \left(300^\circ \right)+i\sin \left(300^\circ \right)\right)$
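The three cube roots can likewise be generated programmatically. A sketch of the $n$th Root Theorem, using the worked example $z=8\,\text{cis}\left(\frac{2\pi }{3}\right)$ as the starting point:

import cmath
import math

def nth_roots(r, theta, n):
    # r**(1/n) * cis(theta/n + 2*pi*k/n) for k = 0, 1, ..., n-1.
    return [cmath.rect(r ** (1 / n), theta / n + 2 * math.pi * k / n)
            for k in range(n)]

for w in nth_roots(8, 2 * math.pi / 3, 3):
    # Each root w satisfies w**3 = 8*cis(2*pi/3) = -4 + 4*sqrt(3)*i.
    print(w, w ** 3)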
Review: Operations in Rectangular Form

To add complex numbers in rectangular form, add the real components and add the imaginary components; subtraction works the same way. For example, to add the complex number $5+2i$ to the complex number $3-7i$: the real parts give $5+3=8$, and for the imaginary parts we have $2i$ and $-7i$, giving $2i-7i=-5i$, so the sum is $8-5i$. Suppose instead we have two complex numbers, one in rectangular form and one in polar form, and we need to add them and represent the result in polar form again: convert to a common form, add, and convert back. As an exercise, find the polar form of the complex number $7-5i$.

Polar Form in Engineering Notation

In engineering, a complex number is represented by a real part and an imaginary part, with $j$ (defined by $\sqrt{-1}$) used in place of $i$: $Z$ is the complex number representing the vector, $x$ is the real part or the active component, and $y$ is the imaginary part or the reactive component. In rectangular form a complex number is entered as, for example, $6+5j$; if you set a calculator to return polar form, you can press Enter and the calculator will convert this number to polar form. A complex number in polar form can be expressed as

$Z=r\left(\cos \theta +j\sin \theta \right) \qquad (3)$

where $r$ is the modulus (or magnitude) of $Z$, written "mod $Z$" or $|Z|$, and $\theta$ is the argument (or amplitude) of $Z$, written "arg $Z$". Here $r$ can be determined using Pythagoras' theorem,

$r={\left({a}^{2}+{b}^{2}\right)}^{1/2} \qquad (4)$

and $\theta$ can be determined by trigonometry,

$\theta ={\tan }^{-1}\left(b/a\right) \qquad (5)$

Using Euler's formula, ${e}^{i\theta }=\cos \theta +i\sin \theta$, equation (3) can also be expressed in exponential form,

$Z=r{e}^{j\theta } \qquad (6)$

As we can see, a complex number can be written in three different ways: rectangular, polar (3), and exponential (6). This is what makes complex numbers so useful in AC circuit analysis: let us connect three AC voltage sources in series and use complex numbers to determine additive voltages. The only qualification is that all variables must be expressed in complex form, taking into account phase as well as magnitude, and all voltages and currents must be of the same frequency (in order that their phase…). In one such worked example, the required combined complex number is $12.79\angle 54.1^\circ$.
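A brief sketch of the same ideas in engineering notation, again with illustrative values only:

import cmath

Z = 6 + 5j                         # rectangular entry, engineering style
r, theta = cmath.polar(Z)
print(r, theta)                    # mod Z and arg Z

# Exponential form: Euler's formula gives Z = r*e^{j*theta}.
print(r * cmath.exp(1j * theta))   # reproduces (6+5j) up to rounding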
Summary

The form $z=a+bi$ is the rectangular form of a complex number: each complex number corresponds to a point $\left(a,b\right)$ in the complex plane, the horizontal axis is the real axis and the vertical axis is the imaginary axis, and every real number graphs to a unique point on the real axis. In polar form, the complex number is instead represented as the combination of modulus and argument: rather than giving the $x$ and $y$ coordinates, we give a distance (the modulus, the length of the vector) and an angle (the argument, the angle made with the real axis). Once numbers are in polar form, the product calls for multiplying the moduli and adding the angles, the quotient calls for dividing the moduli and subtracting the angles, and De Moivre's Theorem handles powers and roots. In this section, we focused on the mechanics of working with complex numbers: translation of complex numbers from polar form to rectangular form and vice versa, interpretation of complex numbers in the scheme of applications, and application of De Moivre's Theorem.

CC licensed content, specific attribution: http://cnx.org/contents/[email protected]:1/Preface
proofpile-shard-0030-341
{ "provenance": "003.jsonl.gz:342" }
Decisions in Economics and Finance
ISSN (Print) 1129-6569 - ISSN (Online) 1593-8883
Published by Springer-Verlag

• Quasivariational inequalities for dynamic competitive economic equilibrium problems in discrete time
Abstract: Equilibrium is a central concept in numerous disciplines including economics, management science, operations research, and engineering. We are concerned with an evolutionary quasivariational inequality which is connected to a discrete dynamic competitive economic equilibrium problem in terms of maximization of utility functions and of excess demand functions. We study the discrete equilibrium problem by means of a discrete time-dependent quasivariational inequality in the discrete space $$\ell^2([0,T]_{\mathbb{Z}},\mathbb{R})$$. We ensure an existence result for discrete time-dependent equilibrium solutions. Finally, we show the stability of equilibrium in a completely decentralized Walrasian general equilibrium economy in which prices are fully controlled by economic agents, with production and trade occurring out of equilibrium.
PubDate: 2023-01-24

• … representative agents
Abstract: This paper focuses on defined-benefit pension funds in which heterogeneous plan members differ in age, salary, contribution rate, and other characteristics. The co-variation of these characteristics proves to have an important effect on the management of the fund. For example, we find that members' ages and salary growths, if they co-vary in an unfavourable way, can substantially increase the fund's liability, which in turn drives up the amount of funding required and the proportion of risky investment. This coupling effect of heterogeneity is demonstrated first through analytical statements, which we derive under a simplified assumption of no investment constraints. In constrained cases, for which analytical solutions are unavailable, we develop a numerical method that finds the heterogeneity-adjusted management decisions using a so-called adaptive representative agent (ARA), whose characterization is given explicitly in a key theorem. Whereas traditional methods often suffer from numerical complexity that grows exponentially with the number of heterogeneous members, the computational cost of the proposed ARA method is only linear in the number of time steps. This advantage of the ARA method and its ability to rectify the coupling effects of heterogeneity are demonstrated through our numerical example.
PubDate: 2023-01-07

• Surrender and path-dependent guarantees in variable annuities: integral equation solutions and benchmark methods
Abstract: We investigate the evaluation problem of variable annuities by considering guaranteed minimum maturity benefits, with constant or path-dependent guarantees of up-and-out barrier and lookback type, and guaranteed minimum accumulation benefit riders, with different forms of the surrender amount. We propose to solve the non-standard Volterra integral equations associated with the policy valuations through a randomized trapezoidal quadrature rule combined with an interpolation technique.
Such a rule improves the convergence rate with respect to the classical trapezoidal quadrature, while the interpolation technique allows us to obtain an efficient algorithm that produces a very accurate approximation of the early exercise boundary. The method's accuracy is assessed by constructing two benchmarks: the first, developed in a lattice framework, is characterized by a novel algorithm for the lookback path-dependent guarantee obtained thanks to the lattice convergence properties, while the application is straightforward in the other cases; the second is based on least-squares Monte Carlo simulations.
PubDate: 2023-01-02

• Locally-coherent multi-population mortality modelling via neural networks
Abstract: This manuscript proposes an approach for large-scale mortality modelling and forecasting under the assumption of local coherence of the mortality forecasts. In general, coherence prevents diverging long-term mortality forecasts between two or more populations. Despite being considered a desirable property in a multi-population modelling framework, it could be perceived as a strong assumption when a large collection of countries is considered. We propose a neural network model which requires coherence of the mortality forecasts only within sub-groups of similar populations. The architecture is designed to be easily interpretable and induces the creation of clusters of countries with similar mortality patterns. This aspect also makes the model an interesting tool for analysing similarities and differences between different countries' mortality dynamics and for identifying opportunities for longevity risk diversification and mitigation. An extensive set of numerical experiments performed using all the available data from the Human Mortality Database shows that our model produces more accurate mortality forecasts with respect to some well-known stochastic mortality models. Furthermore, a massive reduction of the parameters to optimise is achieved with respect to the benchmark mortality models.
PubDate: 2022-12-29

• Inverse data envelopment analysis without convexity: double frontiers
Abstract: In this research, inverse data envelopment analysis (IDEA) approaches are proposed to measure input changes for output perturbations made while the convexity assumption is relaxed. Inverse free disposal hull (IFDH) techniques under the constant returns to scale (CRS) assumption are introduced from two perspectives, optimistic and pessimistic. In the models proposed in this study, the efficiency of decision-making units (DMUs) is maintained after adding a perturbed DMU with new input and output values. These inverse problems are multiobjective nonlinear programs that are converted to equivalent linear models, and finding all Pareto efficient solutions is discussed. The models have also been tested using a real-world case study from the banking sector. The findings reveal valuable facts concerning the changes of inputs for changes of outputs from optimistic and pessimistic aspects while the convexity axiom is dropped.
PubDate: 2022-11-25

• Introduction to the Milestones series
PubDate: 2022-11-18

• Bipartite choices
Abstract: This piece in the Milestones series is dedicated to the paper coauthored by David Gale and Lloyd Shapley and published in 1962 under the title "College admissions and the stability of marriage" in the American Mathematical Monthly.
PubDate: 2022-11-16

• Optimality and duality in nonsmooth semi-infinite optimization, using a weak constraint qualification
Abstract: Variational analysis, a subject that has been vigorously developing for the past 40 years, has proven itself to be extremely effective at describing nonsmooth phenomena. The Clarke subdifferential (or generalized gradient) and the limiting subdifferential of a function are the earliest and most widely used constructions of the subject. A key distinction between these two notions is that, in contrast to the limiting subdifferential, the Clarke subdifferential is always convex. From a computational point of view, convexity of the Clarke subdifferential is a great virtue. We consider a nonsmooth multiobjective semi-infinite programming problem with a feasible set defined by inequality constraints. First, we introduce the weak Slater constraint qualification and derive Karush–Kuhn–Tucker-type necessary and sufficient conditions for (weakly, properly) efficient solutions of the considered problem. Then, we introduce two duals of Mond–Weir type for the problem and present (weak and strong) duality results for them. All results are given in terms of the Clarke subdifferential.
PubDate: 2022-11-09
DOI: 10.1007/s10203-022-00375-w

• Two representations of information structures and their comparisons
Abstract: This paper compares two representations of informativeness.
PubDate: 2022-11-02
DOI: 10.1007/s10203-022-00379-6

• The robustness of the generalized Gini index
Abstract: In this paper, we introduce a map $$\varPhi$$, which we call the zonoid map, from the space of all non-negative, finite Borel measures on $${\mathbb{R}}^n$$ with finite first moment to the space of zonoids of $${\mathbb{R}}^n$$. This map, connecting Borel measure theory with the theory of zonoids, allows us to slightly generalize the Gini volume introduced, in the context of Industrial Economics, by Dosi (J Ind Econ 4:875–907, 2016). This volume, based on the geometric notion of a zonoid, is introduced as a measure of heterogeneity among firms in an industry, and it turned out to be a quite interesting index as it is a multidimensional generalization of the well-known and broadly used Gini index. By exploiting the mathematical context offered by our definition, we prove the continuity of the map $$\varPhi$$ which, in turn, allows us to prove the validity of an SLLN-type theorem for our generalized Gini index and, hence, for the Gini volume. Both results, the continuity of $$\varPhi$$ and the SLLN theorem, are particularly useful when dealing with a huge amount of multidimensional data.
PubDate: 2022-10-25
DOI: 10.1007/s10203-022-00378-7

• Cognitive limits and preferences for information
Abstract: The structure of uncertainty underlying certain decision problems may be so complex as to elude decision makers' full understanding, curtailing their willingness to pay for payoff-relevant information, a puzzle manifesting itself in, for instance, low stock-market participation rates. I present a decision-theoretic method that enables an analyst to identify decision makers' information-processing abilities from observing their preferences for information. A decision maker who is capable of understanding only those events that either almost always or almost never happen fails to attach instrumental value to any information source. On the other hand, non-trivial preferences for information allow perfect identification of the decision maker's technological capacity.
PubDate: 2022-10-14
DOI: 10.1007/s10203-022-00376-9

• Utility maximization in a stochastic affine interest rate and CIR risk
Abstract: This paper investigates optimal investment problems in the presence of stochastic interest rates and stochastic volatility under the expected utility maximization criterion. The financial market consists of three assets: a risk-free asset, a risky asset, and zero-coupon bonds (rolling bonds). The short interest rate is assumed to follow an affine diffusion process, which includes the Vasicek and the Cox–Ingersoll–Ross (CIR) models as special cases. The risk premium of the risky asset depends on a square-root diffusion (CIR) process, while the return rate and volatility coefficient are unspecified and possibly given by non-Markovian processes. This framework embraces the family of state-of-the-art 4/2 stochastic volatility models and some non-Markovian models as exceptional examples. The investor aims to maximize the expected utility of terminal wealth for two types of utility functions, power utility and logarithmic utility. By adopting a backward stochastic differential equation (BSDE) approach to overcome the potentially non-Markovian framework and solving two BSDEs explicitly, we derive, in closed form, the optimal investment strategies and optimal value functions. Furthermore, explicit solutions to some special cases of our model are provided. Finally, numerical examples illustrate our results under one specific case, the hybrid Vasicek-4/2 model.
PubDate: 2022-09-20
DOI: 10.1007/s10203-022-00374-x

• Equalizing solutions for bankruptcy problems revisited
Abstract: When solving bankruptcy problems through equalizing solutions, agents with small claims prefer to distribute the estate according to the Constrained Equal Awards solution, while the adoption of the Constrained Equal Losses solution is preferred by agents with high claims. Therefore, the determination of which agent is the central claimant, as a reference to distinguish the agents with a high claim from those with a low claim, is a relevant question when designing hybrid solutions, or new methods to distribute the available estate in a bankruptcy problem. We explore the relationship between the equal awards parameter $$\lambda$$ and the equal losses parameter $$\mu$$ that characterize the two solutions. We show that the central claimant is fully determined by these parameters. In addition, we explore how to compute these parameters and present optimization problems that provide the Constrained Equal Awards and the Constrained Equal Losses solutions.
PubDate: 2022-09-02
DOI: 10.1007/s10203-022-00373-y

• Dangerous tangents: an application of $$\Gamma$$-convergence to the control of dynamical systems
Abstract: Inspired by the classical riot model proposed by Granovetter in 1978, we consider a parametric stochastic dynamical system that describes the collective behavior of a large population of interacting agents. By controlling a parameter, a policy maker seeks to minimize her own disutility, which in turn depends on the steady state of the system. We show that this economically sensible optimization is ill-posed and illustrate a novel way to tackle this practical and formal issue. Our approach is based on the $$\Gamma$$-convergence of a sequence of mean-regularized instances of the original problem. The corresponding minimum points converge toward a unique value that intuitively is the solution of the original ill-posed problem.
Notably, to the best of our knowledge, this is one of the first applications of $$\Gamma$$-convergence in economics.
PubDate: 2022-07-02
DOI: 10.1007/s10203-022-00372-z

• Correction to: Semi-analytical prices for lookback and barrier options under the Heston model
Abstract: In this note, we point out a mistake in Theorem 1 of De Gennaro Aquino and Bernard (Decis Econ Finance 42(2):715–741, 2019) and provide some missing references where the problem of pricing barrier options under the Heston model had previously been discussed.
PubDate: 2022-06-01
DOI: 10.1007/s10203-021-00360-9

• Portfolio choice in the model of expected utility with a safety-first component
Abstract: The standard problem of portfolio choice between one risky and one riskless asset is analyzed in the model of expected utility with a safety-first component that is represented by the probability of final wealth exceeding a "safety" wealth level. It finds that a positive expected excess return remains sufficient for investing a positive amount in the risky asset, except in the special situation where the safety wealth level coincides with the wealth obtained when the entire initial wealth is invested in the riskless asset. In this situation, the optimal amount invested in the risky asset is zero if the weight on the safety-first component is sufficiently large. Comparative statics analysis reveals that whether the optimal amount invested in the risky asset becomes smaller as the weight on the safety-first component increases depends on whether the safety wealth level is below the wealth obtained when the entire initial wealth is invested in the riskless asset. Further comparative statics analyses with respect to the safety wealth level and the degree of risk aversion in the expected utility component are also conducted.
PubDate: 2022-06-01
DOI: 10.1007/s10203-021-00347-6

• Beating the market? A mathematical puzzle for market efficiency
PubDate: 2022-06-01
DOI: 10.1007/s10203-021-00361-8

• Option pricing: a yet simpler approach
Abstract: We provide a lean, non-technical exposition of the pricing of path-dependent and European-style derivatives in the Cox–Ross–Rubinstein (CRR) pricing model. The main tool used in this paper for simplifying the reasoning is applying static hedging arguments. In applying the static hedging principle, we consider Arrow–Debreu securities and digital options, or backward random processes. In the last case, the CRR model is extended to an infinite state space, which leads to an interesting new phenomenon not present in the classical CRR model. At the end, we discuss the paradox involving the drift parameter $$\mu$$ in Black–Scholes–Merton model pricing. We provide sensitivity analysis and an approximation of the speed of convergence for the asymptotically vanishing effect of drift in prices.
PubDate: 2022-06-01
DOI: 10.1007/s10203-021-00338-7

• A flexible lattice framework for valuing options on assets paying discrete dividends and variable annuities embedding GMWB riders
Abstract: In a market where a stochastic interest rate component characterizes asset dynamics, we propose a flexible lattice framework to evaluate and manage options on equities paying discrete dividends and variable annuities presenting some provisions, like a guaranteed minimum withdrawal benefit.
The framework is flexible in that it allows one to combine financial and demographic risk, to embed in the contract early exercise features, and to choose the dynamics for interest rates and traded assets. A computational problem arises when each dividend (when valuing an option) or withdrawal (when valuing a variable annuity) is paid, because the lattice loses its recombining structure. The proposed model overcomes this problem by associating with each node of the lattice a set of representative values of the underlying asset (when valuing an option) or of the personal subaccount (when valuing a variable annuity), chosen among all the possible ones realized at that node. Extensive numerical experiments confirm the model's accuracy and efficiency.
PubDate: 2022-05-27
DOI: 10.1007/s10203-022-00371-0

• Performance measurement with expectiles
Abstract: Financial performance evaluation is intimately linked to risk measurement methodologies. There exists a well-developed literature on the axiomatic and operational characterization of measures of performance. Hinged on the duality between coherent risk measures and the reward associated with investment strategies, we investigate the representation of acceptability indices of performance using expectile-based risk measures that have recently attracted a lot of attention inside the financial and actuarial community. We propose two purely expectile-based performance ratios other than the classical gain-loss ratio and the Omega ratio. We complement our analysis with the elicitability of expectile-based acceptability indices and their conditional version accounting for new information flow.
PubDate: 2022-05-19
DOI: 10.1007/s10203-022-00369-8
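As a rough illustration of the expectile-based risk measures mentioned in the last abstract (a minimal sketch on toy data of my own, not code from the paper), the tau-expectile of a sample solves a simple asymmetric first-order condition and can be found by bisection:

import numpy as np

def expectile(x, tau, tol=1e-10):
    # The tau-expectile e solves: tau*E[(x-e)+] = (1-tau)*E[(e-x)+].
    lo, hi = x.min(), x.max()
    while hi - lo > tol:
        e = 0.5 * (lo + hi)
        g = tau * np.mean(np.maximum(x - e, 0)) \
            - (1 - tau) * np.mean(np.maximum(e - x, 0))
        if g > 0:     # e is still too low
            lo = e
        else:
            hi = e
    return 0.5 * (lo + hi)

returns = np.random.default_rng(1).normal(0.01, 0.05, size=10_000)
print(expectile(returns, 0.50))   # tau = 0.5 recovers the sample mean
print(expectile(returns, 0.95))   # higher tau weights gains and losses asymmetrically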
proofpile-shard-0030-342
{ "provenance": "003.jsonl.gz:343" }
Nonrecursive Movement Formulas Since arithmetico-geometric sequences have explicit formulas, we can build non-recursive functions to calculate simple but useful results, such as the height of the player on any given tick, or the distance of a jump in terms of the initial speed and duration. Definitions: • ${\textstyle v_0}$ is the player's initial speed (speed on ${\displaystyle t_0}$, before jumping) • ${\textstyle t}$ is the number of ticks considered (ex: t=12 on flat ground, see Jump Duration) • ${\textstyle J}$ is the "jump bonus" (0.3274 for sprintjump, 0.291924 for strafed sprintjump, 0.1 for 45° no-sprint jump...) • ${\textstyle M}$ is the movement multiplier after jumping (1.3 for 45° sprint, 1.274 for normal sprint, 1.0 for no-sprint 45°...) Vertical Movement (jump) [1.8] Vertical speed after jumping (${\displaystyle t \geq 6}$) ${\textstyle \textrm{V}_Y(t) = 4 \times 0.98^{t-5} - 3.92}$ Relative height after jumping (${\displaystyle t \geq 6}$) ${\textstyle \textrm{Y}_{rel}(t) = \underset{\textrm{jump peak}}{\underbrace{197.4 - 217 \times 0.98^5}} + 200 (0.98-0.98^{t-4}) - 3.92 (t-5)}$ For ${\textstyle t<6}$, see below. Vertical Movement (jump) [1.9+] Vertical speed after jumping (${\displaystyle t \geq 1}$) ${\textstyle \textrm{V}_Y(t) = 0.42 \times 0.98^{t-1} + 4 \times 0.98^t - 3.92}$ Relative height after jumping (${\displaystyle t \geq 0}$) ${\textstyle \textrm{Y}_{rel}(t) = 217 \times (1 - 0.98^t) - 3.92 t}$ Horizontal Movement (instant jump) Assuming the player was airborne before jumping. Horizontal speed after sprintjumping (${\displaystyle t \geq 2}$) ${\textstyle \textrm{V}_H(v_0,t) = \frac{0.02 M}{0.09} + 0.6 \times 0.91^t \times \left ( v_0 + \frac{J}{0.91} - \frac{0.02 M}{0.6 \times 0.91 \times 0.09} \right )}$ Sprintjump distance (${\displaystyle t \geq 2}$) ${\textstyle \textrm{Dist}(v_0,t) = 1.91 v_0 + J + \frac{0.02 M}{0.09} (t-2) + \frac{0.6 \times 0.91^2}{0.09} \times (1 - 0.91^{t-2}) \times \left ( v_0 + \frac{J}{0.91} - \frac{0.02 M}{0.6 \times 0.91 \times 0.09} \right )}$ Note: These formulas are accurate for most values of ${\displaystyle v_0}$, but some negative values can wind up activating the speed threshold and reset the player's speed at some point, thus rendering these formulas inaccurate. Horizontal Movement (delayed jump) Assuming the player is on ground before jumping (at least 1 tick since landing). Horizontal speed after sprintjumping (${\displaystyle t \geq 2}$) ${\textstyle \textrm{V}^*_H(v_0,t) = \frac{0.02 M}{0.09} + 0.6 \times 0.91^t \times \left ( 0.6 v_0 + \frac{J}{0.91} - \frac{0.02 M}{0.6 \times 0.91 \times 0.09} \right )}$ Sprintjump distance (${\displaystyle t \geq 2}$) ${\textstyle \textrm{Dist}^*(v_0,t) = 1.546 v_0 + J + \frac{0.02 M}{0.09} (t-2) + \frac{0.6 \times 0.91^2}{0.09} \times (1 - 0.91^{t-2}) \times \left ( 0.6v_0 + \frac{J}{0.91} - \frac{0.02 M}{0.6 \times 0.91 \times 0.09} \right )}$ Horizontal speed after ${\displaystyle n}$ consecutive sprintjumps on a momentum of period ${\textstyle T}$ (${\displaystyle n \geq 0}$, ${\displaystyle T \geq 2}$). ${\textstyle \textrm{V}^{\,n}_H(v_0,T,n) = \left ( 0.6 \times 0.91^T \right )^n v_0 + \left ( 0.6 \times 0.91^{T-1} J + 0.02M \frac{1 - 0.91^{T-1}}{0.09} \right ) \frac{1- (0.6 \times 0.91^T)^n}{1 - 0.6 \times 0.91^T} }$ If the first sprintjump is delayed, multiply ${\textstyle v_0}$ by 0.6
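For convenience, here is a small script (my own sketch, not from this page) that evaluates the 1.9+ vertical formulas and the instant-jump distance formula; the parameters t = 12, J = 0.3274, M = 1.274 and v0 = 0.3 are illustrative choices:

def v_y(t):
    # 1.9+ vertical speed after jumping, valid for t >= 1
    return 0.42 * 0.98 ** (t - 1) + 4 * 0.98 ** t - 3.92

def y_rel(t):
    # 1.9+ relative height after jumping, valid for t >= 0
    return 217 * (1 - 0.98 ** t) - 3.92 * t

def sprintjump_distance(v0, t, J=0.3274, M=1.274):
    # Instant-jump horizontal distance, valid for t >= 2 (airborne before jumping)
    c = v0 + J / 0.91 - 0.02 * M / (0.6 * 0.91 * 0.09)
    return (1.91 * v0 + J
            + (0.02 * M / 0.09) * (t - 2)
            + (0.6 * 0.91 ** 2 / 0.09) * (1 - 0.91 ** (t - 2)) * c)

print(y_rel(12))                          # ~ -0.32: back at floor level on tick 12
print(max(y_rel(t) for t in range(13)))   # jump peak, ~ 1.25 blocks
print(sprintjump_distance(0.3, 12))       # ~ 4.2 blocks for a 12-tick sprintjump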
proofpile-shard-0030-343
{ "provenance": "003.jsonl.gz:344" }
# Intravital imaging of glioma border morphology reveals distinctive cellular dynamics and contribution to tumor cell invasion

## Abstract

The pathogenesis of glioblastoma (GBM) is characterized by highly invasive behavior allowing dissemination and progression. A conclusive image of the invasive process is not available. The aim of this work was to study invasion dynamics in GBM using an innovative in vivo imaging approach. Primary brain tumor initiating cell lines from IDH-wild type GBM stably expressing H2B-Dendra2 were implanted orthotopically in the brains of SCID mice. Using high-resolution time-lapse intravital imaging, tumor cell migration in the tumor core, border and invasive front was recorded. Tumor cell dynamics at different border configurations were analyzed and multivariate linear modelling of tumor cell spreading was performed. We found tumor border configurations recapitulating human tumor border morphologies. Not only tumor borders but also the tumor core was composed of highly dynamic cells, with no clear correlation to the ability to spread into the brain. Two types of border configurations contributed to tumor cell spreading through distinct invasion patterns: an invasive margin that executes slow but directed invasion, and a diffuse infiltration margin with fast but less directed movement. By providing a more detailed view of glioma invasion patterns, our study may improve the accuracy of prognosis and serve as a basis for personalized therapeutic approaches.

## Introduction

Glioblastoma (GBM) is one of the most aggressive primary brain tumors, with a median survival time of about 14.6 months despite maximal therapy1. Besides resection and radiotherapy, Temozolomide, a cytotoxic drug2, and Optune, so-called Tumor Treating Fields3,4, remain the only measures that improve outcome. GBM is hallmarked by high complexity and heterogeneity5,6, making a deep understanding of its pathogenesis challenging. The tumor is driven by a minority of cancer stem-like brain tumor initiating cells (BTICs)7,8, which appear to be implicated not only in tumor initiation, but also in recurrence, progression9,10 and resistance to current therapy8,11. BTICs and non-stem tumor cells co-exist in vivo and are likely to change dynamically depending on the tumor microenvironment12,13. In view of modelling the disease, BTICs are the best available cell population to investigate GBM in vitro and in vivo14,15. The pathogenesis of GBM is manifold and includes highly invasive behavior that is a main cause behind GBM dissemination and progression16,17. There is general agreement that invasion is an early event in GBM progression16, where the tumor cells tend to invade individually or in small groups17,18. These infiltrating tumor cells lie well beyond the definable margin for maximal resections17,18, giving rise to tumor relapse19. GBM cells are thought to preferentially migrate along existing brain structures12,18; however, vascular co-option and migration along white matter tracts are also important features of glioma cell invasion16,20,21.
Tumor borders are not always uniform, and several invasion patterns such as single-cell invasion18,22 and collective migration with leading and follower cells23,24 have been described. However, it is unclear how these patterns correlate with clinically relevant macroscopic tumor growth and dissemination18,25. That a conclusive image of the invasive process in malignant gliomas is still not available is partly due to the lack of relevant models. Screens based on partitioned resections from tumor border and core26,27 and conventional in vitro migration assays28,29,30 are highly artificial and cannot recapitulate in vivo tumor cell behavior. The development of intravital microscopy (IVM), a potent tool that allows single-cell resolution time-lapse imaging in live animals, has provided new insights into (GBM) tumor cell dynamics22,31,32,33,34,35,36,37,38,39. To further investigate the physiological processes40 underlying GBM cell movement, this study aimed to image and analyze distinct GBM invasive growth patterns found in vivo, similar to those observed in patients. We combined an orthotopic human BTIC-derived GBM model with real-time dynamic high-resolution IVM22,31,35, followed by a comprehensive analysis of tumor cell migratory behavior, and found that distinct types of tumor border morphologies present different invasive growth patterns that can contribute to tumor expansive growth.

## Results

### Distinct tumor border configurations in glioma tumors

To gain insight into glioma cell migratory behavior at different tumor border configurations, we imaged the in vivo behavior of single BTICs derived from GBM patients who had undergone resection15,41. We injected two BTIC cell lines (BTIC-10 and BTIC-12) stably expressing a nuclear fluorescent protein (H2B-Dendra2) into the brains of NSG mice. To gain visual access to the brain and study the invasive behavior at the single-cell level in vivo, we implanted a chronic cranial imaging window (CIW) around the injection site35. Upon tumor development, a series of microscopic time-lapse z-stack images of the entire visible tumor volume was acquired through the CIW at multiple time points with a minimum time interval of 45 minutes (Fig. 1a). Tile-scan images revealed distinct tumor border configurations (Fig. 1b). Three different patterns of invasion were observed: protruding multicellular groups originating at the interface between the tumor and the brain parenchyma were defined as "invasive margin" (Fig. 1b). Tumor margins showing no protrusions were named "well-defined tumor border" (Fig. 1b). Individual cell migration into the invasive area of the brain parenchyma was defined as "diffuse cell infiltration" (Fig. 1b). Similar types of invasion patterns can also be found in human glioblastoma biopsy samples (Supplementary Fig. S1). For comparison, cell behavior in the tumor core was evaluated (Fig. 1b). Both BTIC cell lines showed the described border configurations (Supplementary Fig. S2). The movement of individual tumor cells in distinct tumor border configurations was determined by tracking the migration path over time in 3D-reconstructed time-lapse movies (Fig. 2a). Information about migration velocity, speed, persistence, and directionality was extracted from the tracks. Although there was variation in cell velocity between the different mice, the relative migratory behavior between the different border configurations was consistent among them (Supplementary Fig. S2).
When we performed a mixed-effects regression of tumor cell migration away from the tumor border, we found that it was uncorrelated with the type of BTIC (Suppl. Table 1). Thus, we excluded an impact of the BTIC type on migratory behavior and describe pooled data of both BTIC lines in the further analysis.

### Role of spatial cell arrangements in migratory behavior within the invasive margin

Next, we aimed to understand what drives cell migration at the invasive margin. We hypothesized that spatial cellular arrangements at the invasive margin define the migration direction of subsequently following cells, as previously described23. Within each invasive margin position, we measured the direction correlation between cells leading invasion and their followers (Fig. 3a). We did not find clear correlations between the direction of movement of invasion-leading cells and those following (Fig. 3b). To test the hypothesis that these data point towards a predominant role of the microenvironment in determining direction, rather than an individual genetic program of a subtype of cells, we re-evaluated our IVM movies and observed that cells spatially rearrange within the invasive margin, with leader cells becoming followers and vice versa (Supplementary Movie 1). To further analyze spatial rearrangements, we compared the proportion of leader and follower cells moving towards and away from the tumor. We found that cells located at the leader position more often moved away from the tumor core (74%) than towards it, compared to follower cells that moved in both directions (Fig. 3c), pointing to the hypothesis that they were most strongly exposed to a microenvironmental gradient that triggers invasion42,43. While follower cells showed distinct velocities depending on the direction they moved in, with a higher velocity when moving away from the tumor core, leader cells did not show such behavior (Fig. 3d). Next, we analyzed invasion in relation to anatomical structures within the brain. We observed that BTICs use different routes of invasion, along white-matter tracts or blood vessels (Supplementary Fig. 3). We further analyzed and compared the behavior of cells that used both routes of migration and found that in both cases the direction of leader cell invasion did not correlate with the direction taken by follower cells (Supplementary Fig. 4a,b,e). Perivascular leader cells had a higher tendency to move away from the tumor core than leader cells within the parenchyma (Supplementary Fig. 4c). In addition, perivascular leader cells moved faster than intraparenchymal leader cells (Supplementary Fig. 4d). In summary, these results indicate that leader cells migrate away from the tumor core more often and faster when associated with blood vessels.

### Cell dynamics at the tumor core

To get a more holistic picture of BTIC fates within our model, we next analyzed cell migration in the non-invasive parts of the tumor (Fig. 2a). In contrast to the common belief that cells within the tumor core are static22, we observed that 23% of all cancer cells in the tumor core were motile (Fig. 2c) and moved with an average speed of 8 μm/hour (Fig. 2d), a speed even higher than that of the cells of the invasive front.

### Cell dynamics at the tumor border

We next analyzed the behavior of cells within the different types of tumor borders.
Here, we determined the mean cell velocity of tracked single cells (cell displacement over time) and found that cells at the well-defined border (4.4 µm/hour) and the diffuse margin (5.2 µm/hour) possessed a higher velocity than cells from the invasive margin (2.6 µm/hour) (Fig. 2b). Next, we compared the percentage of motile cells with a cell velocity > 2 µm/hour between the tumor core and all kinds of borders and observed a higher proportion of motile cells in all border configurations as compared to the tumor core (Fig. 2c), pointing to a generally more migratory behavior at tumor borders. The speed of motile cells (a measure of the actual distance a cell covers over time) was highest at the diffuse margin and the well-defined border, with 11.6 µm/hour and 12 µm/hour respectively (Fig. 2d). Cells in the invasive margin were found to be moving more slowly, with an average speed of 6.8 µm/hour. However, when we analyzed the persistence of movement, we discovered that cells from the invasive margin moved in the most persistent fashion, with a persistence higher than 0.36, the value previously considered representative of a random walk44. Since directionality is an important feature of invasion, we analyzed the directionality patterns in each type of border configuration. We observed disperse directionality patterns in all subtypes (Supplementary Fig. S5), with some cells migrating towards the parenchyma, while other cells migrated towards the tumor core or parallel to the tumor border (Fig. 2a, Supplementary Movie 1). Moreover, we observed cells that changed migration directionality over time (Fig. 2a, Supplementary Movie 1). To assess whether a particular migration behavior is favored within one of the border configurations, we plotted the migration paths of all the cells from different positions in a wind rose plot (Fig. 4a). Here, we found the most directed and invasive pattern in the invasive margin configuration. To identify the velocity and proportion of phenotypically relevant invading and retreating cells, we measured the perpendicular cell displacement relative to the tumor margin (Fig. 4b). We found that the proportions of invading and retreating cells were balanced at all border configurations, with only a slight prevalence of invading cells (Fig. 4c). The diffuse margin harbored the highest proportion of invading cells (61.5%), followed by the invasive margin (54.4%) (Fig. 4c). Moreover, when we compared the velocity of the invading and retreating cells, we found that invading cells moved faster than retreating cells at both the invasive margin and the diffuse margin, but not at the well-defined border (Fig. 4d). Finally, to determine whether all areas within a specific border configuration follow the same phenotypic behavior, we assessed the overall displacement of the center of mass perpendicular to the tumor margin (COMy) (Fig. 4e). In all border configurations, we found positions moving towards and away from the tumor border (Fig. 4e), indicating that not all regions of a tumor are in continuous expansion. Our results showed that for the well-defined border configuration the mean COMy was close to zero, indicating that this type of configuration, although very dynamic within the position (Fig. 2a), does not contribute to cell spreading away from the tumor. However, both the diffuse and the invasive margin showed a mean COMy shifted towards the brain parenchyma (Fig. 4e), emphasizing that these configurations may drive tumor cell spreading away from the tumor.
To further confirm this observation, we used a multivariate linear model to more accurately evaluate the association of COMy with factors such as the mean velocity of invading cells, the mean velocity of retreating cells and their interaction with variables such as the frequency of invading cells, configuration type and BTIC type (Suppl. Table 2). Our model shows that for each position the velocity of invading cells was the strongest predictor of tumor expansion (COMy shift towards the brain parenchyma) and was independent of the BTIC cell line. The velocity of invading cells showed interaction effects with two other predictors (Supplementary Fig. 6), namely the frequency of invading cells and the border configuration type. These interactions can be summarized as follows: with a high proportion of invading cells, the velocity of invading cells has a higher impact on tumor expansion (COMy); and the velocity of invading cells has a higher impact on cell spreading both at the invasive margin and at the diffuse border (Supplementary Fig. 6a). Since our analysis showed that cells at the invasive margin have a lower cell velocity (Figs 2b and 4d), this factor alone could not explain a similar effect on cell spreading at the invasive margin and at the diffuse border as shown in Fig. 4e. Therefore, we analyzed the distribution of the frequency with which cells were invading within each position for each configuration type and found that the invasive margin showed more positions with a high frequency of invading cells compared to the diffuse margin (Supplementary Fig. 6b). Combined, these data indicate that the diffuse margin and the invasive margin contribute to cell spreading and tumor expansion through two different mechanisms: a higher speed for the former, and a higher proportion of invading cells for the latter.

## Discussion

In contrast to glioblastoma morphology, tumor cell dynamics within GBM are not well understood. We combined an orthotopic GBM model, time-lapse in vivo microscopy of human brain tumor initiating cells through a cranial window, and in-depth analysis of tumor cell spreading to acquire better insight into GBM tumor cell dynamics within their environment on multiple levels. Using this approach, we delineated tumor border configurations comprising two different types of invasion, represented by cells in the invasive margin and in the diffuse infiltration pattern, and one pattern that, although dynamic, seems to be non-invasive: the well-defined tumor border pattern. We discovered that, in contrast to the other patterns45, cells from the invasive margin moved with lower velocity and speed, but in the most persistent fashion. At the diffuse margin, cells moved less persistently but also contributed to cell spreading and tumor expansion through a higher speed of migration. Moreover, we found that not all regions that show tumor cell migration actually contribute to tumor cell invasion. GBM cells at the well-defined tumor border, a region generally thought to be static22, are also extremely dynamic, but do not contribute to invasion as their movement shows a highly undirected pattern. This highlights the importance of this study, indicating that a more comprehensive analysis of GBM including morphological and dynamic data may allow deeper insights into the complex mechanism of invasion. In contrast, studies that looked solely at tumor cell migration speed, or were performed in vitro, in models where tumor cells lack microenvironmental cues for directed migration, do not fully elucidate this heterogeneity30,46.
Glioma cells have previously been reported to use different routes of invasion, such as intraparenchymal invasion along white-matter tracts16,21 or invasion along blood vessels16. In line with this, in our model perivascular leader cells also had a higher tendency to move away from the tumor core than cells within the parenchyma, and moved faster. These data correspond well to published data, where individually migrating cells, collective strands extending along blood vessels or white matter tracts, and multicellular networks of interconnected glioma cells were found22,45,47,48,49. However, this is the first piece of work that analyzed single-cell dynamics of patient-derived tumor cells in vivo at different tumor border and invasive front areas corresponding to the morphologies found in patients. It has been previously described that collectively moving cells can either have a specific role defined by their position in the cellular stream, or dynamically change position, with any cell from the group having the ability to drive migration23. Against our expectation, we did not find clear correlations between the direction of movement of invasion-leading cells and those following. Our results show that in our model tumor cells can dynamically exchange position in the collective cell stream, indicating that their behavior may be more dependent on microenvironmental influences than on genetic predisposition. Cells located at the leader position more often moved away from the tumor core than towards it, compared to follower cells that moved in both directions. This possibly points to the hypothesis that leader cells were most strongly exposed to a microenvironmental gradient that triggers invasion50,51,52, probably due to distinct microenvironmental structures45,53 or gradients of chemoattractants50. Moreover, since we found migratory cells in all areas of the tumor, it is likely that all cells have an intrinsic capacity to migrate; however, only some actually contribute to invasion due to microenvironmental cues. The fact that tumor cells move bidirectionally indicates that tumor cells may receive chemotactic signals from both the tumor core and the tumor microenvironment. Indeed, studies based on brain slices engrafted with tumor spheres showed a cell fraction with a highly invasive morphology that interacted with the tumor core54,55,56. In addition, ex vivo and in vivo studies using IVM have also shown that tumor cells that are integrated into functional connective networks respond to tumor core removal or injury by repopulating the injured area54,57. We have also found that tumor biopsy-like injury changes tumor cell migratory capacity and increases tumor cell migration speed31, all contributing to the hypothesis that invasion is highly dependent on the environment. Another potential driver of glioma tumor cell migration is the direct movement of stromal cells. Although there is no evidence so far that brain parenchymal cells can directly drive tumor cell migration, in squamous cell carcinoma fibroblasts have been shown to lead collective migration of tumor cells58. Moreover, immune cells such as macrophages have been shown to associate with breast tumor cells, drive their migration towards blood vessels and facilitate their intravasation59. The possibility that parenchymal brain cells, such as microglia, act as direct drivers of glioma cell migration still needs to be explored.
In summary, our work is the first analytical study that correlates distinct tumor border patterns that can be found on histological sections with particular tumor cell dynamics. It not only fosters the understanding of single-cell invasion, but possibly also of distant metastasis and re-population of the primary tumor site, both mechanisms that significantly influence the prognosis of patients with GBM. Our long-term goal is to correlate the analysis of tumor morphology with the dynamics of tumor cell movement to get a more complete picture of GBM invasion, and thereby to improve the prognostic evaluation of GBM and possibly develop novel therapeutic approaches. Future work should be directed at analyzing the invasive behavior of a much larger cohort of distinct patient-derived tumor cell lines, in order to confirm our results and to allow correlations with morphological and possibly prognostic features. Since it is known that the adaptive immune system can also influence tumor migratory behavior18,60, it might be worthwhile to use humanized tumor mouse models to assess the role of the human immune microenvironment on tumor cell behavior.

## Materials and Methods

### Tumor cell lines

We used the previously established primary brain tumor initiating cell lines (BTIC)-10 and -12 derived from patients with IDH wild type glioblastoma, as described15,41. The Department of Neuropathology, University of Regensburg (MJR), verified the patients' diagnoses and WHO grade. Tumor cells were maintained in RHB-A (Y40001, Takara), supplemented with 20 ng/ml of EGF (130097751), bFGF (130093842) (both Miltenyi Biotech), and 50 U (v/v) Penicillin/0.05% (v/v) Streptomycin (P4333) (Sigma-Aldrich) at 37 °C, 5% CO2, 95% humidity in a standard tissue culture incubator. Progenitor features of BTICs were verified by clonogenicity assays, flow cytometry (CD133, CD15, CD44, A2B5), immunocytochemistry (Nestin, Sox2, GFAP), and tumor take in an immunocompromised mouse model (female NOD.Cg-Prkdcscid Il2rgtm1Wjl/SzJ). BTICs were lentivirally transduced to achieve stable expression of the nuclear fluorescent protein H2B-Dendra2, as described61. The ethics board of the University of Regensburg, Germany, approved the use of human material for this study (No° 11-103-0182) and all patients gave written informed consent. All methods were performed in accordance with the relevant guidelines and regulations.

### Animals

Non-obese diabetic SCID IL-2 receptor gamma chain knockout (NSG) mice, 8 to 12 weeks of age, were used for the experiments. Mice were housed in individually ventilated cages and received food and water ad libitum. All experiments were carried out in accordance with the guidelines of the Animal Welfare Committee of the Royal Netherlands Academy of Arts and Sciences, the Netherlands. The experimental protocols used in this manuscript were approved by the Centrale Commissie Dierproeven (CCD) and the Instantie voor Dierenwelzijn (IvD).

### Cranial imaging window (CIW) surgery and tumor cell injection

CIW surgery and tumor cell injection were performed on the same day, as described31,62. Briefly, mice were sedated with Hypnorm (Fluanison [neuroleptic] + Fentanyl [opioid]) (0.4 ml/kg) + Midazolam [benzodiazepine sedative] (2 mg/kg) at a dose of 1:1:2 in sterile water and mounted in a stereotactic frame. The head was secured using a nose clamp and two ear bars. The head was shaved and the skin was cut in a circular manner. The mouse was placed under a stereo-microscope to ensure precise surgical manipulation.
The periosteum was scraped and a circular groove of 5 mm diameter was drilled over the right parietal bone. The bone flap was lifted under a drop of cortex buffer (125 mM NaCl, 5 mM KCl, 10 mM glucose, 10 mM HEPES buffer, 2 mM MgSO4 and 2 mM CaCl2, pH 7.4) and the dura mater was removed. Gelfoam sponge (Pfizer) was used to stop bleeding (Supplementary Fig. 7). Next, 1 × 10^5 BTIC H2B-Dendra2 cells suspended in 3 μl of PBS were injected stereotactically, using a 10 μl Hamilton syringe with a 2 pt style needle, in the middle of the craniotomy at a depth of 0.5 mm. The exposed brain was sealed with silicone oil and a 6 mm coverslip was glued on top. Dental acrylic cement (Vertex) was applied on the skull surface, covering the edge of the coverslip, and a stainless steel ring was glued around the coverslip to provide fixation to the microscope. A single dose of 100 μg/kg of buprenorphine (Temgesic, BD Pharmaceutical Limited) was administered. Mice were closely monitored twice per week for behavior, reactivity and appearance.

### Intravital imaging

Mice were imaged as previously described31. In short, mice were sedated and placed face-up in a custom-designed imaging box. Time-lapse images of the entire tumor volume were acquired for a maximum of 13 hours. The minimal time interval between serial images was set to 45 minutes. For tile-scans, images of the complete z-stack of the tumor were acquired to a depth of 300 μm, with a step size of 3 µm (typically 70–100 images in each z-stack). In a group of 4 mice, blood vessels were imaged by intravenous injection of 70 kDa Dextran–Texas Red (Invitrogen Life Technologies) or 2,000 kDa Dextran-Rhodamine (Thermo Fisher Scientific). Imaging was performed on an inverted Leica SP8 multiphoton microscope with a Chameleon Vision-S laser (Coherent Inc., Santa Clara, CA, www.coherent.com), equipped with a 25× water objective (HCX IRAPO NA 0.95 WD 2.5 mm) and four HyD detectors: HyD1 (560–650 nm), HyD2 (500–550 nm), HyD3 (455–490 nm), and HyD4 (<455 nm). Images were acquired at a wavelength of 960 nm, where H2B-Dendra2 as well as both Dextran reagents showed sufficient signal intensity. Collagen (second harmonic generation) was detected using HyD4, H2B-Dendra2 was detected with HyD2, and Dextran reagents were detected with HyD1. Scanning was performed in bidirectional mode at 400 Hz and 12 bit, with a zoom of 1×, at 512 × 512 pixels.

### Image processing and analysis

For 3D visualization, shift correction, rendering and data analysis, time-lapse movies were processed with Imaris (Bitplane, Switzerland). The Spot Analysis module was used for semi-automated tracking of cell motility in three dimensions and for shift correction. Data containing the coordinates of each cell and the values of cell direction, speed (per time unit) and velocity (the vector of movement) were exported, processed and plotted in GraphPad or R. As the animal's pulse and breathing can cause motion artifacts, we decided that only cells with a velocity (displacement) of more than 2.0 µm/hour were classified as motile. Regions with distinct patterns of invasion were defined as described above by visual assessment of the first frame of the movies. Only regions with a clear invasive pattern were included in the analysis. The windrose plot representing cell direction and speed was created using the R package "openair"63. To measure the individual cell and center of mass displacement perpendicular to the tumor border, the 'Chemotaxis and Migration Tool' (Ibidi GmbH) was used64.
The results of Imaris tracking were converted to 2D along the x and y axes and directly imported into the 'Chemotaxis and Migration Tool'. The cell trajectories were all extrapolated to (x, y) = 0 at time 0 h (Fig. 2a) to visualize the trajectories of each cell with a common origin. Next, the cell trajectories were rotated for each position using the "Data rotation" tab of the software in order to get the tumor margin parallel to the x axis (as in Fig. 4b). The tumor margin was defined as the tangent drawn to the main tumor mass. For each cell trajectory, the angle of the cell displacement vector with respect to the x axis (tumor border) was extracted using the 'Chemotaxis and Migration Tool'. Finally, the individual cell displacement perpendicular to the tumor border was measured as:

$$Y_{\text{axis displacement}} = \text{cell velocity} \times \sin(\text{angle}_{\text{relative to tumor border}})$$

The spatial average of all cell positions was used to measure the center of mass displacement perpendicular to the tumor border. For each position, the difference in the center of mass along the y axis between its initial value and that at the end of the experiment was measured by means of the 'Chemotaxis and Migration Tool'. This measure, expressed per unit of time, was termed the displacement of the center of mass (COMy), where n is the number of cells per individual position:

$$M_{start}=\frac{1}{n}\sum_{i=1}^{n} y_{i,start} = 0; \quad M_{end}=\frac{1}{n}\sum_{i=1}^{n} y_{i,end}$$

$$COMy=\frac{M_{end}-M_{start}}{\text{hour}}$$

3D tile scan projections were used for illustration of different border configurations. For illustration of tumor cell migration, 3D images were tracked manually with an ImageJ plugin ("MTrackJ"; Rasband, W.S., ImageJ, U.S. NIH, Bethesda, Maryland, USA). The identities of leader and follower cells were defined at the first frame of the time-lapse and were maintained throughout the movie. The first cells protruding from an invasive margin morphology were defined as "leaders". All subsequent cells from the invasive margin were defined as "followers". The direction correlation between leader and follower cells of the invasive margin was calculated as the cosine of the angle between the leader cell path and each follower cell path (Fig. 3a). Values close to 1 indicate correlation of direction, while values close to −1 indicate opposite direction correlation. As i.v.-injected Dextran reagents leak out of the vessels in the course of time, the blood vessel structure from the first time point was applied to all time points for illustration of the blood vessels in the time-lapse stills.
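For readers who wish to reproduce these displacement measures, the following short sketch illustrates the computations defined above. It is an illustration only: the variable and function names are ours, and it is not the original analysis pipeline (which used the 'Chemotaxis and Migration Tool').

    import numpy as np

    # `start_xy` and `end_xy` are (n, 2) arrays of tracked cell positions that
    # have already been rotated so the tumor border is parallel to the x axis.

    def com_y_displacement(start_xy: np.ndarray, end_xy: np.ndarray, hours: float) -> float:
        """Displacement of the center of mass perpendicular to the tumor
        border, per unit of time (COMy)."""
        m_start = start_xy[:, 1].mean()   # spatial average of initial y positions
        m_end = end_xy[:, 1].mean()       # spatial average of final y positions
        return (m_end - m_start) / hours

    def perpendicular_displacement(velocity: float, angle_deg: float) -> float:
        """Individual cell displacement perpendicular to the tumor border:
        velocity * sin(angle relative to the border)."""
        return velocity * np.sin(np.deg2rad(angle_deg))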
### Patient glioblastoma samples

Archives of the Department of Neuropathology, University Hospital Regensburg, were reviewed for glioblastoma cases sampled with parts of the infiltrative rim and representing a spectrum of infiltration patterns observed in the mouse glioblastoma window model. Immunohistochemical staining was performed in the context of the routine diagnostic work-up of the samples following a standard protocol65. Antibodies used were as follows: GFAP (Clone 6F2), Dako #M0761, 1:200 dilution; p53 (Clone BP53-12), Santa Cruz Biotechnology #SC-263, 1:2000 dilution; Ki-67 (Clone MIB-1), Dako #M7240, 1:200 dilution.

### Statistical analysis

For all normally distributed measurements, the Student's t test (comparison of two mean values) or one-way ANOVA (when >2 means were compared) was used to determine significance, set to p < 0.05. Post-hoc tests were performed with p values < 0.05. All p values were two-tailed. Levels of significance were set as follows: ns: p > 0.05; *: 0.05 ≥ p > 0.01; **: 0.01 ≥ p > 0.001; ***: 0.001 ≥ p > 0.0001; ****: p ≤ 0.0001. Error bars are presented as ± S.E.M. All statistical analyses were performed using GraphPad Prism software (version 6, GraphPad Software, USA).

## Data Availability

The authors declare that all data supporting the findings of this study are available within the article and its Supplementary Information files, or from the corresponding author (MA) upon request.

## References

1. Omuro, A. & DeAngelis, L. M. Glioblastoma and Other Malignant Gliomas: A Clinical Review. JAMA 310, 1842–1850 (2013).
2. Stupp, R. et al. Effects of radiotherapy with concomitant and adjuvant temozolomide versus radiotherapy alone on survival in glioblastoma in a randomised phase III study: 5-year analysis of the EORTC-NCIC trial. Lancet Oncol. 10, 459–466 (2009).
3. Stupp, R. et al. Maintenance Therapy With Tumor-Treating Fields Plus Temozolomide vs Temozolomide Alone for Glioblastoma. JAMA 314, 2535 (2015).
4. Stupp, R. et al. Effect of tumor-treating fields plus maintenance temozolomide vs maintenance temozolomide alone on survival in patients with glioblastoma: a randomized clinical trial. JAMA 318, 2306–2316 (2017).
5. Verhaak, R. G. W. et al. Integrated genomic analysis identifies clinically relevant subtypes of glioblastoma characterized by abnormalities in PDGFRA, IDH1, EGFR, and NF1. Cancer Cell 17, 98–110 (2010).
6. Noushmehr, H. et al. Identification of a CpG island methylator phenotype that defines a distinct subgroup of glioma. Cancer Cell 17, 510–522 (2010).
7. Singh, S. K. et al. Identification of a cancer stem cell in human brain tumors. Cancer Res. 63, 5821–5828 (2003).
8. Galli, R. et al. Isolation and characterization of tumorigenic, stem-like neural precursors from human glioblastoma. Cancer Res. 64, 7011–7021 (2004).
9. Das, S., Srikanth, M. & Kessler, J. A. Cancer stem cells and glioma. Nat. Clin. Pract. Neurol. 4, 427–435 (2008).
10. Sundar, S. J., Hsieh, J. K., Manjila, S., Lathia, J. D. & Sloan, A. The role of cancer stem cells in glioblastoma. Neurosurg. Focus 37, E6 (2014).
11. Singh, S. K. et al. Identification of human brain tumour initiating cells. Nature 432, 396–401 (2004).
12. Cuddapah, V. A., Robel, S., Watkins, S. & Sontheimer, H. A neurocentric perspective on glioma invasion. Nat. Rev. Neurosci. 15, 455–465 (2014).
13. Medema, J. P. Cancer stem cells: the challenges ahead. Nat. Cell Biol. 15, 338–344 (2013).
14. Vescovi, A. L., Galli, R. & Reynolds, B. A. Brain tumour stem cells. Nat. Rev. Cancer 6, 425–436 (2006).
15. Moeckel, S. et al. Response-predictive gene expression profiling of glioma progenitor cells in vitro. PLoS One 9, e108632 (2014).
16. Louis, D. N. Molecular pathology of malignant gliomas. Annu. Rev. Pathol. 1, 97–117 (2006).
17. Sahm, F. et al. Addressing Diffuse Glioma as a Systemic Brain Disease With Single-Cell Analysis. Arch. Neurol. 69, 523–526 (2012).
18. Claes, A., Idema, A. J. & Wesseling, P. Diffuse glioma growth: a guerilla war. Acta Neuropathol. 114, 443–458 (2007).
19. Kim, J. et al. Spatiotemporal Evolution of the Primary Glioblastoma Genome. Cancer Cell 28, 318–328 (2015).
20. Bellail, A. C., Hunter, S. B., Brat, D. J., Tan, C. & Van Meir, E. G. Microregional extracellular matrix heterogeneity in brain modulates glioma cell invasion. Int. J. Biochem. Cell Biol. 36, 1046–1069 (2004).
21. Demuth, T. & Berens, M. E. Molecular mechanisms of glioma cell migration and invasion. J. Neurooncol. 70, 217–228 (2004).
22. Winkler, F. et al. Imaging glioma cell invasion in vivo reveals mechanisms of dissemination and peritumoral angiogenesis. Glia 57, 1306–1315 (2009).
23. Rørth, P. Fellow travellers: emergent properties of collective cell migration. EMBO Rep. 13, 984–991 (2012).
24. Haeger, A., Krause, M., Wolf, K. & Friedl, P. Cell jamming: Collective invasion of mesenchymal tumor cells imposed by tissue confinement. Biochim. Biophys. Acta - Gen. Subj. 1840, 2386–2395 (2014).
25. Iwadate, Y. Epithelial-mesenchymal transition in glioblastoma progression (Review). Oncol. Lett. 38, 739–740 (2016).
26. Hoelzinger, D. B. et al. Gene expression profile of glioblastoma multiforme invasive phenotype points to new therapeutic targets. Neoplasia 7, 7–16 (2005).
27. Nevo, I. et al. Identification of molecular pathways facilitating glioma cell invasion in situ. PLoS One 9 (2014).
28. Bonneh-Barkay, D. & Wiley, C. A. Brain extracellular matrix in neurodegeneration. Brain Pathol. 19, 573–585 (2009).
29. Lau, L. W., Cua, R., Keough, M. B., Haylock-Jacobs, S. & Yong, V. W. Pathophysiology of the brain extracellular matrix: a new target for remyelination. Nat. Rev. Neurosci. 14, 722–729 (2013).
30. Valster, A. et al. Cell migration and invasion assays. Methods 37, 208–215 (2005).
31. Alieva, M. et al. Preventing inflammation inhibits biopsy-mediated changes in tumor cell behavior. Sci. Rep. 7, 7529 (2017).
32. Zomer, A. et al. In Vivo Imaging Reveals Extracellular Vesicle-Mediated Phenocopying of Metastatic Behavior. Cell 161, 1046–1057 (2015).
33. Beerling, E. et al. Plasticity between Epithelial and Mesenchymal States Unlinks EMT from Metastasis-Enhancing Stem Cell Capacity. Cell Rep. 2281–2288, https://doi.org/10.1016/j.celrep.2016.02.034 (2016).
34. Osswald, M. et al. Impact of blood-brain barrier integrity on tumor growth and therapy response in brain metastases. Clin. Cancer Res. 22, 6078–6087 (2016).
35. Alieva, M., Ritsma, L., Giedt, R. J., Weissleder, R. & van Rheenen, J. Imaging windows for long-term intravital imaging. IntraVital 3, e29917 (2014).
36. Patsialou, A. et al. Intravital multiphoton imaging reveals multicellular streaming as a crucial component of in vivo cell migration in human breast tumors. IntraVital 2, 1–29 (2014).
37. Entenberg, D. et al. A permanent window for the murine lung enables high-resolution imaging of cancer metastasis. Nat. Methods 15, 73–80 (2018).
38. Orth, J. D. et al. Analysis of mitosis and antimitotic drug responses in tumors by in vivo microscopy and single-cell pharmacodynamics. Cancer Res. 71, 4608–4616 (2011).
39. Miller, M. A. & Weissleder, R. Imaging the pharmacology of nanomaterials by intravital microscopy: Toward understanding their biological behavior. Adv. Drug Deliv. Rev. 113, 61–86 (2017).
40. Suijkerbuijk, S. J. E. & van Rheenen, J. From good to bad: Intravital imaging of the hijack of physiological processes by cancer cells. Dev. Biol. 1–10, https://doi.org/10.1016/j.ydbio.2017.04.015 (2017).
41. Leidgens, V. et al. Stattic and metformin inhibit brain tumor initiating cells by reducing STAT3-phosphorylation. Oncotarget 3, 8250–8263 (2017).
42. Quail, D. F. & Joyce, J. A. The Microenvironmental Landscape of Brain Tumors. Cancer Cell 31, 326–341 (2017).
43. Gu, L. & Mooney, D. J. Biomaterials and emerging anticancer therapeutics: engineering the microenvironment. Nat. Rev. Cancer 16, 56–66 (2015).
44. Pegtel, D. M. et al. The Par-Tiam1 Complex Controls Persistent Migration by Stabilizing Microtubule-Dependent Front-Rear Polarity. Curr. Biol. 17, 1623–1634 (2007).
& Alexander, S. Cancer invasion and the microenvironment: plasticity and reciprocity. Cell 147, 992–1009 (2011). 46. Rao, S. S. et al. Mimicking white matter tract topography using core-shell electrospun nanofibers to examine migration of malignant brain tumors. Biomaterials 34, 5181–5190 (2013). 47. Cheung, K. J., Gabrielson, E., Werb, Z. & Ewald, A. J. Collective invasion in breast cancer requires a conserved Basal epithelial program. Cell 155, 1639–51 (2013). 48. Gritsenko, P., Leenders, W. & Friedl, P. Recapitulating in vivo-like plasticity of glioma cell invasion along blood vessels and in astrocyte-rich stroma. Histochem. Cell Biol. 148, 395–406 (2017). 49. Osswald, M. et al. Brain tumour cells interconnect to a functional and resistant network. Nature 528, 93–98 (2015). 50. Hoelzinger, D. B., Demuth, T. & Berens, M. E. Autocrine factors that sustain glioma invasion and paracrine biology in the brain microenvironment. J. Natl. Cancer Inst. 99, 1583–1593 (2007). 51. Scarpa, E. & Mayor, R. Collective cell migration in development. J. Cell Biol. 212, 143–155 (2016). 52. Sahai, E. Mechanisms of cancer cell invasion. Curr. Opin. Genet. Dev. 15, 87–96 (2005). 53. Brock, A., Krause, S. & Ingber, D. E. Control of cancer formation by intrinsic genetic noise and microenvironmental cues. Nat. Rev. Cancer 15, 499–509 (2015). 54. Fayzullin, A. et al. Time-lapse phenotyping of invasive glioma cells ex vivo reveals subtype-specific movement patterns guided by tumor core signaling. Exp. Cell Res. 349, 199–213 (2016). 55. Parker, J. J., Lizarraga, M., Waziri, A. & Foshay, K. M. A Human Glioblastoma Organotypic Slice Culture Model for Study of Tumor Cell Migration and Patient-specific Effects of Anti-Invasive Drugs. J. Vis. Exp. 1–10, https://doi.org/10.3791/53557 (2017). 56. Ren, B. et al. Invasion and anti-invasion research of glioma cells in an improved model of organotypic brain slice culture. Tumori 101, 390–397 (2015). 57. Weil, S. et al. Tumor microtubes convey resistance to surgical lesions and chemotherapy in gliomas. Neuro. Oncol. 19, 1316–1326 (2017). 58. Gaggioli, C. et al. Fibroblast-led collective invasion of carcinoma cells with differing roles for RhoGTPases in leading and following cells. Nat. Cell Biol. 9, 1392–1400 (2007). 59. Harney, A. S. et al. Real-Time Imaging Reveals Local, Transient Vascular Permeability, and Tumor Cell Intravasation Stimulated by TIE2hi Macrophage-Derived VEGFA. Cancer Discov. 5, 932–943 (2015). 60. Weller, M. et al. Glioma. Nat. Rev. Dis. Prim. 15017, https://doi.org/10.1038/nrdp.2015.17 (2015). 61. Gurskaya, N. G. et al. Engineering of a monomeric green-to-red photoactivatable fluorescent protein induced by blue light. Nat. Biotechnol. 24, 461–465 (2006). 62. Alieva, M., Ritsma, L., Giedt, R. J., Weissleder, R. & van Rheenen, J. Imaging windows for long-termintravital imaging. IntraVital 3, e29917 (2014). 63. Carslaw, D. C. & Ropkins, K. Environmental Modelling & Software openair d An R package for air quality data analysis. Environ. Model. Softw. 27–28, 52–61 (2012). 64. Trapp G, H. E. Chemotaxis and Migration Tool. Ibidi cells Focus (2014). 65. Hoja, S. et al. Molecular dissection of the valproic acid effects on glioma cells. Oncotarget, https://doi.org/10.18632/oncotarget.11379 (2016). ## Acknowledgements We thank Martin Proescholdt, Department of Neurosurgery, University of Regensburg, Regensburg, Germany, for collaboration and providing the patient tumor material for BTIC generation. 
We cordially thank Birgit Jachnik, Department of Neurology, University Hospital Regensburg, Regensburg, Germany, for excellent technical assistance in the establishment and cultivation of BTICs. This study was partly supported by the Bavarian Program for promotion of equal opportunities for women in research and teaching (to V.L.), the Wilhelm Sander-Stiftung, Munich and Ingolstadt, Germany (to P.H.), European Research Council Grant CANCER-RECURRENCE 648804 (to J.v.R.), the CancerGenomics.nl (Netherlands Organisation for Scientific Research) program (to J.v.R.), the Doctor Josef Steiner Foundation (to J.v.R.) and the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No 642866 (to J.v.R.).

## Author information

### Contributions

M.A. contributed to the concept of the study, performed the in vivo assays and data analysis and wrote the paper; V.L. contributed to the concept of the study, performed the in vivo assays and wrote the paper; M.J.R. contributed to human in vivo data on translational aspects and edited the paper; C.K. contributed to the concept of the study and edited the paper; P.H. contributed to the concept of the study, characterized the BTIC lines and wrote and edited the paper; J.v.R. contributed to the concept of the study, supervised the in vivo experimental work, and wrote and edited the paper.

### Corresponding authors

Correspondence to Maria Alieva or Peter Hau.

## Ethics declarations

### Competing Interests

The authors declare no competing interests.

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Alieva, M., Leidgens, V., Riemenschneider, M.J. et al. Intravital imaging of glioma border morphology reveals distinctive cellular dynamics and contribution to tumor cell invasion. Sci. Rep. 9, 2054 (2019). https://doi.org/10.1038/s41598-019-38625-4
Monday, September 15, 2008

Lehman Brothers (1850-2008)

Lehman Brothers was established, under this name, in 1850 when Mayer Lehman, the youngest brother (the guy on the right side), joined his older brothers, Henry Lehman (the real founder) and Emanuel Lehman (the guy on the left side), who had emigrated from Bavaria to Alabama. Their first businesses were based on using cotton as a currency. The company grew and survived many twists and turns: world wars, the Great Depression, internal battles, and many bankruptcies of competitors in the financial ecosystem. However, it failed to survive the subprime-related crisis, and today, after 158 years, it filed for bankruptcy, listing USD 613 billion of debts, making it the largest bankruptcy ever.

Well, such things happen. I think that there are too many momentum speculators, derivatives, and investments detached from the fundamentals. The risk of many types of investments - arising at various time scales - has not always been properly quantified. From this viewpoint, it's great that one company focusing on these things will evaporate, together with the employees who are also focusing on things different from the actual content of the economy and the products.

What is less great is the additional period of irrationality and volatility that we can expect. Many speculators will extrapolate various downward trends dramatically, causing additional problems to the world economy. I wonder why so many people are doing these irrational things and adding so much noise to the system. Can't they try to invent their own realistic picture of how the world should look and what the prices should be, and simply sell overpriced things and buy the undervalued ones?

Right now it also seems rather likely - because of the excessive volatility we have seen - that futures traders and other speculators (and not supply and demand) have been the main players behind the dramatic fluctuations of the oil price during the last year. That's too bad because speculators should normally help to quantify the true value of things - like commodities - and stabilize the prices, because they should know when something is undervalued or overpriced. This idealized picture apparently didn't materialize during the last year or so.

There are too many gadgets that try to extrapolate recent trends, whatever time scale they choose for the extrapolation. Such extrapolations inevitably lead to increasing leverage, growing bubbles, bursting bubbles, instabilities, and skyrocketing irrationality. There exist surprising additional sources of such instabilities. Investment formats that superficially look like stabilizing effects can actually act as destabilizing ones.

For example, there exist twin-win funds that allow investors to earn money both in the case when a price increases and in the case when it drops. For example, if the oil price increases by X%, you get 0.95 times X% from your initial investment after 5 years (aside from the money you have paid). If it drops by X%, you receive 0.5 times X%, another positive number! This sounds great but it is probably not hard to achieve. The expert who manages the fund simply buys oil for your money, but as soon as the price drops below the initial level, he shorts oil.

You might think that the existence of twin-win funds would stabilize the oil price because the fund manager is motivated to keep the oil price constant: when it is constant, he won't have to pay you much.
;-) However, when I thought about the situation twice, such reasoning turned out to be flawed. Whether someone is motivated or not is not important. What matters is the actual impact of the decisions he is led to make. The fund manager doesn't directly determine the oil price. You must look at what he is actually doing to eliminate the risk and to get the money that the investors will demand (plus some profit).

For the sake of transparency, I will be talking about the strategy of a fund manager who doesn't rely on others (option sellers) who would be part of the system. In other words, my fund manager below plays the role of all the traders who are needed to make the fund work. The conclusions of my discussion will be universal; if you considered a more complicated strategy, involving option sellers etc. (a topic intensely debated in the fast comments: do options influence the oil price? LM: They do!) and you included all of their decisions in your analysis of the oil price, the oil price would be affected pretty much in the same way.

OK, so what does my fund manager do? If the oil price exceeds the initial level, he must actually own the oil, so that he will be able to pay his investors if the oil price increases significantly. On the other hand, when the oil price drops below the initial level, he has to short oil. This means that such managers are going to short oil just when it decreases below the initial level (and buy it if/when it returns above the initial level).

If you think about what this means, it brings instability to the system, because buyers stabilize the price when they "rationally" buy the product when the price goes below their perceived "fair interval". And they sell it when it gets above their perceived "fair interval". This is the standard sign of the relationship between supply and demand that helps to stabilize prices. But the twin-win fund manager is doing just the opposite. He sells (or shorts) oil when it gets below the initial value and buys it when it gets above the initial value. ;-)

You can see that no one else in our example is buying or selling oil because of the existence of the twin-win fund. It follows that the twin-win fund magnifies market fluctuations (a toy simulation of this mechanism is sketched at the end of this post, after the comments). But honestly speaking, we shouldn't forget that the fund doesn't influence anything at time scales longer than 5 years because at the end, it sells all the oil that it bought and buys back all the oil that it shorted. ;-) But 5 years could be too long a time and economies can be shattered earlier than that.

This kind of instability from similar financial tools is going to be rather generic. Does it mean that your humble correspondent is going to defend some kind of regulation? Well, I have mixed feelings about it. But yes, if the government wants to punish a certain kind of behavior of the market players, the behavior of those who destabilize the system - who are doing things that can be demonstrated to have a destabilizing effect - should be among the punished ones. Such things should be taken into account when various policies (e.g. tax policies) are being designed. I could tell you formulae for Lumo's friction taxes that would moderate hysterias. ;-) In the ideal capitalist world as I imagine it, the fluctuations should be much lower.
Meanwhile, the bloody fate of Lehman Brothers could remind the greedy investors that their strategies based on the analysis of the momentum (the time derivatives of the prices) - and not the fundamentals (the absolute value of the prices) - could turn out to be just another form of lottery. What do you think?

Bonus: Famous companies save the world from climate change (Hat tip: Demesure)

There have been two great companies that passionately led the global efforts to save the world from climate change. We should follow their examples. The names of these two companies were:

• Enron
• Lehman Brothers

While Enron did everything it could to make the U.S. sign the Kyoto Protocol, the Lehman report "The business of climate change II" has been enthusiastically praised by the environmentalists, to use a polite word for the loons. The two climate change reports represent 50% of the recent Intellectual Capital :) of Lehman Brothers (PDF files below the last link). You can buy this capital for 0.3% of the price one year ago. James Hansen is a part of the package. Will you join these two wise companies that can compare costs and benefits and quantify all the risks so well? ;-) Isn't it cool that these crooks are gone?

Al Gore and Lehman Brothers

Yes, it's true. Check the National Post (Canada) where Richard Lindzen says the following about Al Gore: "... And he's on the board of Lehman Brothers who want to be the primary brokerage for emission permits. ..." Rae Ann has pointed out that the information is not quite accurate but it has some true core, see the fast comments. More details at IceCap.US and ClimateAudit.ORG.

5 comments:

1. interesting point. If prices (or rather log prices, to get rid of the currency and quantisation) were truly a Brownian process, then you are right, the price difference at time T+deltaT is independent of the time derivative. Unfortunately, prices are not Brownian walks. My reference for this is "Theory of Financial Risk and Derivative Pricing - from Statistical Physics to Risk Management" by Bouchaud and Potters - two physicists. They argue convincingly that for example the Dow Jones index is not a Brownian motion since extreme events happen far too often compared to an exp(-x^2) diffusion model. If you have some control over these deviations from diffusion, maybe you can make some money (better than lottery) from looking at time derivatives... Your description of the twin-win situation misses another problem: The broker has to buy the oil when he already knows that it has gone up. Thus he always has to buy at too high a price and to short at too low a price.

2. Dear Robert, concerning the twin-win fund, I don't quite understand how your comment differs from mine, except that yours sounds more vague. The strategy I recommended the manager is to buy the oil "right" above the initial price, and sell it "right" below it. There's no objective way of telling whether the initial price is too low or too high - it's the defining level of the situation, exactly because the manager has to assume it is a Markovian process if he wants to eliminate risks. Of course, in reality, he can't buy and sell millions of times, so he will only act when the price deviates a few pips from the initial one. Concerning the Brownian motion, you seem to mix Brownian motion and more general Markovian processes. Brownian motion might make big jumps Gaussian-unlikely but it doesn't generally follow from the Markovian character of the evolution.
If someone assumes that the analysis of trends and momenta can't lead to useful profit (and I am not really claiming that, see below), it is equivalent to assuming that the process is Markovian. But that doesn't imply that there are no big jumps or that they decrease in the Gaussian way: the latter only follows for Brownian-like motion. I haven't claimed that the evolution is Brownian motion and, in fact, I haven't even claimed that it is Markovian. I just claimed that the speculators assuming and abusing the non-Markovianity of the evolution are counterproductive for the markets' stability and help to deviate the markets from the optimal equilibrium. And if something is punished by state policies and taxes, this is what should be punished. Best Lubos

3. "Concerning the Brownian motion, you seem to mix Brownian motion and more general Markovian processes." I guess you wanted to say martingales. A Markov process can still be predictable enough to allow for technical analysis.

4. What I meant by "above/below the price" is that the broker will not buy when the price is only infinitesimally above the reference price, since in that case he would be constantly trading. He would wait until the price is significantly up and then be paying significantly more. The point being that you can only tell the price went up after it happened. You are right about me confusing Gaussianity and Markov (memoryless) evolution of prices. My confusion came about since the law of large numbers relates the two when summing many distributions.

5. Dear robert, thanks, now I think we are in agreement. Yes, he has to sacrifice an interval around the initial price, to avoid an excessive frequency of trading. Incidentally, the question how many times a Brownian motion curve crosses y=0 (i.e. is the trader constantly trading) is interesting. This number can clearly be zero because Brownian motion is continuous and it is normal for such a function, y(x), to avoid y=0 for long intervals on the x axis. But when you choose a region where y switches from mostly negative to mostly positive values, how many times will you cross y=0? Is it finite or infinite? Is the typical intersection a "fractal"? Is it countable?
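Appendix: the toy simulation of the twin-win mechanism promised in the post. This is an illustration only; the prices, the dead band, and the random walk driving the price are entirely made-up assumptions. The point it demonstrates is that each of the manager's trades has the same sign as the most recent price move, i.e. positive feedback.

    import random

    # Toy model of the twin-win hedging rule: the manager is long oil
    # whenever the price is above the initial level and short when it is
    # below. A dead band around the initial price limits trading frequency.

    random.seed(1)
    initial_price = 100.0
    price = initial_price
    position = +1          # +1 = long, -1 = short
    band = 0.5             # dead band, as discussed in the comments

    for day in range(250):
        price += random.gauss(0.0, 1.0)   # exogenous daily price move
        if position == +1 and price < initial_price - band:
            position = -1
            print(f"day {day}: price {price:.2f} -> manager SELLS (goes short)")
        elif position == -1 and price > initial_price + band:
            position = +1
            print(f"day {day}: price {price:.2f} -> manager BUYS (goes long)")

    # Every trade printed above goes with the preceding move (selling after
    # drops, buying after rises), which amplifies rather than damps
    # fluctuations.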
# The percentage of working population in 2002 is

### Question

The percentage of working population in 2002 is

A) 50%
B) 25%
C) 8%
D) 80%
## Lakshya Education MCQs

Question: 66 cubic centimetres of silver is drawn into a wire 1 mm in diameter. The length of the wire in metres will be:

Options:
A. 84
B. 90
C. 168
D. 336

Solution: Let the length of the wire be h. Radius = $$\frac{1}{2}\ mm = \frac{1}{20}\ cm$$. Then,

$$\frac{22}{7}\times\frac{1}{20}\times\frac{1}{20}\times h = 66$$

$$h = \frac{66\times20\times20\times7}{22} = 8400\ cm = 84\ m.$$

## More Questions on This Topic:

Question 1. A hall is 15 m long and 12 m broad. If the sum of the areas of the floor and the ceiling is equal to the sum of the areas of the four walls, the volume of the hall is:

Options:
1. 720
2. 900
3. 1200
4. 1800

Solution: 2(15 + 12) × h = 2(15 × 12), so h = $$\frac{180}{27}\ m = \frac{20}{3}\ m$$. Volume = $$\left(15\times12\times\frac{20}{3}\right)m^{3} = 1200\ m^{3}$$.

Question 2. In a shower, 5 cm of rain falls. The volume of water that falls on 1.5 hectares of ground is:

Options:
1. 75 cu. m
2. 750 cu. m
3. 7500 cu. m
4. 75000 cu. m

Solution: 1 hectare = 10,000 m², so Area = (1.5 × 10000) m² = 15000 m². Depth = $$\frac{5}{100}\ m = \frac{1}{20}\ m$$. Volume = Area × Depth = $$\left(15000\times\frac{1}{20}\right)m^{3} = 750\ m^{3}$$.

Question 3. A right triangle with sides 3 cm, 4 cm and 5 cm is rotated about the side of 3 cm to form a cone. The volume of the cone so formed is:

Options:
1. $$12\pi$$ cm³
2. $$15\pi$$ cm³
3. $$16\pi$$ cm³
4. $$20\pi$$ cm³

Solution: Rotating about the 3 cm side makes that side the axis, so h = 3 cm and r = 4 cm.

$$Volume = \frac{1}{3}\pi r^{2}h = \left(\frac{1}{3}\times \pi\times4^{2}\times3\right)cm^{3} = 16\pi\ cm^{3}.$$

Question 4. A hollow iron pipe is 21 cm long and its external diameter is 8 cm. If the thickness of the pipe is 1 cm and iron weighs 8 g/cm³, then the weight of the pipe is:

Options:
1. 3.6 kg
2. 3.696 kg
3. 36 kg
4. 36.9 kg

Solution: Volume of iron = $$\left(\frac{22}{7}\times\left[(4)^{2}-(3)^{2}\right]\times21\right)cm^{3} = \left(\frac{22}{7}\times7\times1\times21\right)cm^{3} = 462\ cm^{3}$$. Weight of iron = (462 × 8) g = 3696 g = 3.696 kg.

Question 5. A boat having a length of 3 m and a breadth of 2 m is floating on a lake. The boat sinks by 1 cm when a man gets on it. The mass of the man is:

Options:
1. 12 kg
2. 60 kg
3. 72 kg
4. 96 kg

Solution: Volume of water displaced = (3 × 2 × 0.01) m³ = 0.06 m³. Mass of the man = volume displaced × density of water = (0.06 × 1000) kg = 60 kg.

Question 6. 50 men took a dip in a water tank 40 m long and 20 m broad on a religious day. If the average displacement of water by a man is 4 m³, then the rise in the water level in the tank will be:

Options:
1. 20 cm
2. 25 cm
3. 35 cm
4. 50 cm

Solution: Total volume of water displaced = (4 × 50) m³ = 200 m³. Rise in water level = $$\frac{200}{40\times20}\ m = 0.25\ m = 25\ cm.$$
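These answers can also be double-checked numerically. The following small sketch (our own illustration, using the same 22/7 approximation for π as the solutions above) verifies the wire and water-tank problems:

    from fractions import Fraction

    # Use the same pi ~ 22/7 approximation as the worked solutions.
    PI = Fraction(22, 7)

    # Wire problem: volume = pi * r^2 * h, with r = 1/20 cm and volume = 66 cm^3
    r = Fraction(1, 20)
    h_cm = Fraction(66) / (PI * r * r)
    print(h_cm, "cm =", h_cm / 100, "m")    # 8400 cm = 84 m

    # Water tank problem: rise = displaced volume / base area
    rise_m = Fraction(4 * 50, 40 * 20)
    print(float(rise_m) * 100, "cm")        # 25.0 cm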
proofpile-shard-0030-347
{ "provenance": "003.jsonl.gz:348" }
Reference: course notes from the Machine Learning Foundations course, University of Washington.

### Principle of Occam's razor: simpler trees are better

Among competing hypotheses, the one with the fewest assumptions should be selected.

### Early stopping for learning decision trees

1. Limit tree depth: stop splitting after a certain depth.
2. Classification error: do not consider any split that does not cause a sufficient decrease in classification error.
   - Typically, add a magic parameter $\epsilon$: stop if the error doesn't decrease by more than $\epsilon$.
   - There are some pitfalls to this rule, but it is very useful in practice.
3. Minimum node "size": do not split an intermediate node which contains too few data points.

### Pruning: simplify the tree after the learning algorithm terminates

1. A simple measure of the complexity of a tree: L(T) = number of leaf nodes.
2. Balance simplicity and predictive power. The desired total quality has the form

   Total cost = measure of fit + $\lambda$ · measure of complexity = classification error + $\lambda$ · L(T)

   - $\lambda = 0$: standard decision tree learning
   - $\lambda = +\infty$: the root alone, i.e. majority-class prediction
   - $\lambda$ between $0$ and $+\infty$: a balance of fit and complexity

### Tree pruning algorithm
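Here is a minimal sketch of the total-cost comparison that drives pruning (illustrative function and variable names, not the course's code): a subtree is replaced by a leaf whenever doing so lowers the total cost.

```python
def total_cost(error, num_leaves, lam):
    """Total cost = measure of fit + lambda * measure of complexity L(T)."""
    return error + lam * num_leaves

# Candidate decision: replace a subtree with a leaf (fewer leaves, worse fit).
full   = total_cost(error=0.10, num_leaves=20, lam=0.01)  # 0.30
pruned = total_cost(error=0.12, num_leaves=6,  lam=0.01)  # 0.18
print(pruned < full)  # True -> accept the simpler tree
```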
proofpile-shard-0030-348
{ "provenance": "003.jsonl.gz:349" }
# Neural Networks

Neural networks can be constructed using the torch.nn package.

Now that you had a glimpse of autograd, nn depends on autograd to define models and differentiate them. An nn.Module contains layers, and a method forward(input) that returns the output.

For example, look at this network that classifies digit images:

(figure: a LeNet-style convnet, mapping an input image through convolution and pooling layers to class scores)

It is a simple feed-forward network. It takes the input, feeds it through several layers one after the other, and then finally gives the output.

A typical training procedure for a neural network is as follows:

• Define the neural network that has some learnable parameters (or weights)
• Iterate over a dataset of inputs
• Process input through the network
• Compute the loss (how far is the output from being correct)
• Propagate gradients back into the network’s parameters
• Update the weights of the network, typically using a simple update rule: weight = weight - learning_rate * gradient

## Define the network

Let’s define this network:

import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):

    def __init__(self):
        super(Net, self).__init__()
        # 1 input image channel, 6 output channels, 5x5 square convolution
        # kernel
        self.conv1 = nn.Conv2d(1, 6, 5)
        self.conv2 = nn.Conv2d(6, 16, 5)
        # an affine operation: y = Wx + b
        self.fc1 = nn.Linear(16 * 5 * 5, 120)  # 5*5 from image dimension
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        # Max pooling over a (2, 2) window
        x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
        # If the size is a square, you can specify with a single number
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = torch.flatten(x, 1)  # flatten all dimensions except the batch dimension
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

net = Net()
print(net)

Net(
  (conv1): Conv2d(1, 6, kernel_size=(5, 5), stride=(1, 1))
  (conv2): Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1))
  (fc1): Linear(in_features=400, out_features=120, bias=True)
  (fc2): Linear(in_features=120, out_features=84, bias=True)
  (fc3): Linear(in_features=84, out_features=10, bias=True)
)

You just have to define the forward function, and the backward function (where gradients are computed) is automatically defined for you using autograd. You can use any of the Tensor operations in the forward function.

The learnable parameters of a model are returned by net.parameters()

params = list(net.parameters())
print(len(params))
print(params[0].size())  # conv1's .weight

10
torch.Size([6, 1, 5, 5])

Let’s try a random 32x32 input. Note: expected input size of this net (LeNet) is 32x32. To use this net on the MNIST dataset, please resize the images from the dataset to 32x32.

input = torch.randn(1, 1, 32, 32)
out = net(input)
print(out)

tensor([[-0.0543, -0.0981, 0.0761, 0.0311, 0.0532, -0.0299, -0.1133, 0.0779,

Zero the gradient buffers of all parameters and backprops with random gradients:

net.zero_grad()
out.backward(torch.randn(1, 10))

Note

torch.nn only supports mini-batches. The entire torch.nn package only supports inputs that are a mini-batch of samples, and not a single sample.

For example, nn.Conv2d will take in a 4D Tensor of nSamples x nChannels x Height x Width.

If you have a single sample, just use input.unsqueeze(0) to add a fake batch dimension.

Before proceeding further, let’s recap all the classes you’ve seen so far.

Recap:

• torch.Tensor - A multi-dimensional array with support for autograd operations like backward(). Also holds the gradient w.r.t. the tensor.
• nn.Module - Neural network module.
Convenient way of encapsulating parameters, with helpers for moving them to GPU, exporting, loading, etc.
• nn.Parameter - A kind of Tensor, that is automatically registered as a parameter when assigned as an attribute to a Module.
• autograd.Function - Implements forward and backward definitions of an autograd operation. Every Tensor operation creates at least a single Function node that connects to functions that created a Tensor and encodes its history.

At this point, we covered:

• Defining a neural network
• Processing inputs and calling backward

Still Left:

• Computing the loss
• Updating the weights of the network

## Loss Function

A loss function takes the (output, target) pair of inputs, and computes a value that estimates how far away the output is from the target.

There are several different loss functions under the nn package. A simple loss is: nn.MSELoss which computes the mean-squared error between the output and the target.

For example:

output = net(input)
target = torch.randn(10)  # a dummy target, for example
target = target.view(1, -1)  # make it the same shape as output
criterion = nn.MSELoss()

loss = criterion(output, target)
print(loss)

tensor(0.8723, grad_fn=<MseLossBackward0>)

Now, if you follow loss in the backward direction, using its .grad_fn attribute, you will see a graph of computations that looks like this:

input -> conv2d -> relu -> maxpool2d -> conv2d -> relu -> maxpool2d
      -> flatten -> linear -> relu -> linear -> relu -> linear
      -> MSELoss
      -> loss

So, when we call loss.backward(), the whole graph is differentiated w.r.t. the neural net parameters, and all Tensors in the graph that have requires_grad=True will have their .grad Tensor accumulated with the gradient.

For illustration, let us follow a few steps backward:

print(loss.grad_fn)  # MSELoss

<MseLossBackward0 object at 0x7f059ca264a0>

## Backprop

To backpropagate the error all we have to do is to loss.backward(). You need to clear the existing gradients though, else gradients will be accumulated to existing gradients.

Now we shall call loss.backward(), and have a look at conv1’s bias gradients before and after the backward.

net.zero_grad()     # zeroes the gradient buffers of all parameters

print('conv1.bias.grad before backward')
print(net.conv1.bias.grad)

loss.backward()

print('conv1.bias.grad after backward')
print(net.conv1.bias.grad)

conv1.bias.grad before backward
tensor([0., 0., 0., 0., 0., 0.])
conv1.bias.grad after backward
tensor([ 0.0187, -0.0123, 0.0154, 0.0155, -0.0117, 0.0113])

Now, we have seen how to use loss functions.

The neural network package contains various modules and loss functions that form the building blocks of deep neural networks. A full list with documentation is here.

The only thing left to learn is:

• Updating the weights of the network

## Update the weights

The simplest update rule used in practice is the Stochastic Gradient Descent (SGD):

weight = weight - learning_rate * gradient

We can implement this using simple Python code:

learning_rate = 0.01
for f in net.parameters():
    f.data.sub_(f.grad.data * learning_rate)

However, as you use neural networks, you want to use various different update rules such as SGD, Nesterov-SGD, Adam, RMSProp, etc. To enable this, we built a small package: torch.optim that implements all these methods. Using it is very simple:

import torch.optim as optim

# create your optimizer
optimizer = optim.SGD(net.parameters(), lr=0.01)

# in your training loop:
optimizer.zero_grad()   # zero the gradient buffers
output = net(input)
loss = criterion(output, target)
loss.backward()
optimizer.step()    # Does the update

Observe how gradient buffers had to be manually set to zero using optimizer.zero_grad(). This is because gradients are accumulated as explained in the Backprop section.
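To make the accumulation behaviour concrete, here is a small, self-contained illustration (my own, not part of the tutorial):

```python
import torch

w = torch.ones(3, requires_grad=True)
(2 * w).sum().backward()
print(w.grad)   # tensor([2., 2., 2.])

(2 * w).sum().backward()
print(w.grad)   # tensor([4., 4., 4.]) -- accumulated, not replaced

w.grad.zero_()  # what optimizer.zero_grad() does for every parameter
print(w.grad)   # tensor([0., 0., 0.])
```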
proofpile-shard-0030-349
{ "provenance": "003.jsonl.gz:350" }
## Friday, November 11, 2016

### First Life

Here I'm gonna show you what I've made so far :). This post contains a lot of long gifs, so probably don't read this if you are using data. Most of them are also fairly low quality cause I'm still figuring out screen capturing.

Here is the very first one. Its goal is simply to keep its head up. It tried very hard but didn't have much to work with.

I made a slightly different design with the same goal, and this one did pretty much what was expected.

This one later achieved a better score; it gets points for creativity.

Next I wanted to give them control over rotation, not just contraction of muscles. With the same goal, this one learned to "shuffle". It had so much control that it could just stay upright even when that shouldn't have been physically possible.

I went back to tweaking my physics engine because some behavior didn't look quite right. I finally got it working reasonably and gave one the goal of getting the highest average y. It is the first one to learn to "hop".

I thought this hopping was pretty cool, so I arranged the spheres in a different position to see if they could hop better. Because they often moved around quite a bit while hopping, I put it on a small platform so it had to jump fairly straight up. If they fell off the platform before 5 seconds was up, they would immediately fail. This one figured out that as long as you stay in the air long enough, you're fine to go off the platform.

Since they got pretty good at jumping, I figured I'd give them a more difficult goal: get as far as possible without falling off the platform. This was pretty easy to do, so I made it more difficult:

I was curious how well it would do with this setup, and the answer is surprisingly very well.

So that's the current progress. It's pretty fun to set up strange goals for the creatures and see them manage to attain them. Most of these figured out their strategies in 2-5 minutes (from scratch) as well, since my physics engine can run many simulations very quickly, which makes this very fun to use in real time. So yea, after these promising results I'm excited to see where things go from here :)

### How to simulate bones and joints

My custom physics engine had collisions working great with velocities and gravities and such; however, I needed a way to give "bones" and "joints" to creatures. At the moment my creatures are composed of spheres. Here is an example of a "spider":

Each line you see is a bone, which you can think of as a rod between those two spheres. I can mark a bone as "usable" by the creature, which allows the creature to vary the length of that rod over time. Otherwise they are a fixed length. I can also add a "joint" to any sphere, which restricts the ways in which two different bones attached to that sphere interact with each other. The intent here is to model simplified versions of the joints that occur in real-life biology: ball joints, hinge joints, saddle joints, etc.

Now, simulating bones and joints is hard, mostly because if you don't do it right they can explode under harsh conditions. Basically what happens is that if you run your simulation one step and then try to "correct" for where each bone wants its attached spheres to be, and the spheres are too far off, the correction will be too harsh; the next frame needs even more correction, and this gets worse and worse until eventually your objects are accidentally at infinity. Sometimes it is not that severe, but you can still get "stuttering" if it's not programmed right, which would be nice to avoid.
After about a week of trying various things, I found a simple method that works fairly well for simulating bones, collisions, and any kind of joints, which I think is worth sharing here.

First, we need to talk about vector projections and vector rejections. These are very general concepts that make a large amount of physics very simple.

(Graphic is from Wikipedia.)

In this picture, our two vectors are $a$ and $b$. The projection of $a$ onto $b$ is the part of $a$ that goes along $b$, which in this case is $a_1$. The rejection of $a$ from $b$ is the part of $a$ that is perpendicular to $b$, which is $a_2$. Luckily, these are very easy to compute in any dimension. The projection is simply:

$$a \text{ project onto } b = \frac{b}{\vert b \vert} \left(a \circ \frac{b}{\vert b \vert}\right)$$

Where $\frac{b}{\vert b \vert}$ is simply the normalized vector in the direction of $b$ (with magnitude of 1), and $\circ$ is the dot product. Because the projection plus the rejection is the original vector, to find the rejection we can simply do:

$$a \text{ reject from } b = a - (a \text{ project onto } b)$$

To see an application of these, imagine that we have a ball that ran into the floor with a diagonal velocity:

The arrow pointing down is the velocity, the arrow pointing up is the normal. The normal is just a fancy word for the vector that is pointing away from the surface that you hit. One way to think of a normal is that if you accidentally went into the ground, that is the direction you would have to be pushed in to get back out of the ground.

So if we know our ball is about to hit the ground, we don't want its velocity to keep pushing it into the ground. Instead, we have two choices: bounce, in which case the y component of the velocity is flipped, or don't bounce, in which case the y component of the velocity is removed and the ball rolls to the side. With a surface that is flat this is easy to do. However, let's say you hit a floor that is tilted. How do you find the new velocity? The answer is very simple: use the vector rejection. Specifically, given a normal $n$ (which we assume is normalized) and a velocity $v$, we first compute $n \circ v$. If this value is positive, $v$ isn't even going towards the surface, so we can just return $v$. If this value is negative, we can simply return

$$v \text{ reject from } n$$

if we don't want to bounce, or

$$v \text{ reject from } n - v \text{ project onto } n$$

if we do want to bounce. The reason this works is that the vector rejection gives us the component of our velocity that is not along the normal, which is the only part that is left if we don't bounce. If we do bounce, we add the negated component of the velocity that was going along the normal (the projection).

So that's pretty great that we have such a clean formula for any collision. Now we want to simulate a bone. To do this, I first run my physics simulation for one tick assuming the bone doesn't exist, then for every bone (and joint - this technique is very general) I do the following four steps:

*Edit: Turns out this doesn't quite work as intended - it is not energy conserving, and as a result things are able to "fly" by manipulating the added energy. While cool, this isn't what I intended and removes quite a bit of cool behavior cause the strategy is usually "fly to the goal". Because that didn't work, this is a new method that is working great for simulating bones. I haven't been able to figure out joints yet, but this is a start and you can still do quite a bit with bones.

1. Each bone has a "desired distance".
If the bone's length is close enough to that desired distance (I use 0.05 or less), do nothing. Otherwise, if the actual distance between the two spheres connected by the bone is too big, pretend that they hit a surface outside the rod that is pushing them in. If it is too small, pretend that they hit a surface inside the rod that is pushing them out.

2. Use the non-bouncing collision algorithm above with those normals.

3. The non-bouncing collision algorithm might have caused some velocity to be lost. Compute the lost velocity of each sphere (simply by taking original velocity - new velocity), then average these lost velocities together. Now each sphere's actual resulting velocity is its new velocity + the average of the lost velocities.

This works great with fixed bone sizes, assuming the spheres start at the desired distance apart. However, we would like the creature to be able to change a few bones' sizes to act as muscles instead of bones. The problem is that steps 1-3 don't add velocity to correct for this changed size. For example, if both spheres are sitting on the ground, they don't have any velocity, so the collision algorithm doesn't do anything to pull them together. To fix this, we can do:

4. Use a variable named "prevDistance". Initially this is set to the desired distance. Now, if the desired distance is changed, prevDistance no longer equals the desired distance. If they aren't equal, add a velocity of (desired distance - prevDistance) to each sphere, pointing away from the other sphere (if desired distance - prevDistance is negative, this will add a velocity pointing towards the other sphere). Now, set prevDistance to the current distance between the spheres.

What this does is continue to add velocity until the spheres reach their new desired position. Once that happens they run as normal, using only steps 1-3, because prevDistance is approximately equal to the desired distance.
In code (C#, using Unity's Vector3) we have:

using System;
using UnityEngine;

public static Vector3 VectorProjection(Vector3 a, Vector3 b)
{
    // Component of a along b: b_hat * (a . b_hat).
    return b.normalized * Vector3.Dot(a, b.normalized);
}

public static Vector3 VectorRejection(Vector3 a, Vector3 b)
{
    // Component of a perpendicular to b.
    return a - VectorProjection(a, b);
}

public static Vector3 VectorAfterNormalForce(Vector3 force, Vector3 normal)
{
    Vector3 rejection = VectorRejection(force, normal);
    Vector3 projection = VectorProjection(force, normal);
    float pointingInSameDirection = Vector3.Dot(normal, force);
    // Force presses into the surface: keep only the sliding component.
    if (pointingInSameDirection <= 0.0f)
    {
        return rejection;
    }
    // Force already points away from the surface: leave it unchanged.
    else
    {
        return rejection + projection;
    }
}

public static void ApplyBone(Bone bone)
{
    float curDist = Vector3.Distance(bone.me.pos, bone.other.pos);
    if (Math.Abs(bone.desiredDist - curDist) >= 0.05f)
    {
        // Steps 1 and 2: the normal of the imagined surface pushes the
        // spheres together if the bone is too long, apart if too short.
        Vector3 collisionNormal = ((curDist - bone.desiredDist) / Math.Abs(bone.desiredDist - curDist)) * (bone.me.pos - bone.other.pos).normalized;

        Vector3 myNotAlongNormal = VectorAfterNormalForce(bone.me.velocity, -collisionNormal);
        Vector3 otherNotAlongNormal = VectorAfterNormalForce(bone.other.velocity, collisionNormal);

        // Step 3: share the "lost" velocity equally between the two spheres.
        Vector3 myAlongNormal = bone.me.velocity - myNotAlongNormal;
        Vector3 otherAlongNormal = bone.other.velocity - otherNotAlongNormal;
        Vector3 averageAlongNormal = (myAlongNormal + otherAlongNormal) / 2.0f;

        bone.me.velocity = myNotAlongNormal + averageAlongNormal;
        bone.other.velocity = otherNotAlongNormal + averageAlongNormal;

        // Step 4: if the desired distance changed (a muscle contracting or
        // extending), push the spheres toward the new length.
        Vector3 normalPointingAway = -(bone.other.pos - bone.me.pos).normalized;
        if (bone.prevDistance != bone.desiredDist)
        {
            bone.other.velocity += -normalPointingAway * (bone.desiredDist - bone.prevDistance);
            bone.me.velocity += normalPointingAway * (bone.desiredDist - bone.prevDistance);
        }
        bone.prevDistance = curDist;
    }
}
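As a quick numeric check of the projection/rejection identities used above, here is a standalone sketch (Python with NumPy, separate from the C# engine):

```python
import numpy as np

def project(a, b):
    b_hat = b / np.linalg.norm(b)
    return b_hat * np.dot(a, b_hat)

def reject(a, b):
    return a - project(a, b)

a = np.array([3.0, -1.0, 2.0])
b = np.array([0.5, 2.0, -1.0])

assert np.allclose(project(a, b) + reject(a, b), a)  # projection + rejection = a
assert abs(np.dot(reject(a, b), b)) < 1e-12          # rejection is perpendicular to b
print("identities hold")
```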
proofpile-shard-0030-350
{ "provenance": "003.jsonl.gz:351" }
Points on the Unit Circle: find the missing coordinate of P, using the fact that P lies on the unit circle in the given quadrant. Each exercise provides a drawing of the circle as well as the length of the radius.

The circle is divided into 360 degrees, starting on the right side of the x-axis and moving counterclockwise. In unit circle trigonometry we deal primarily with the special angles, namely the multiples of 30°, 45°, 60°, and 90°. Your goal is to be able to find exact values quickly, without having to look them up; note that we rationalize 1/√2 to √2/2. The worksheets can be generated in either html or PDF format, and both are easy to print. We also offer a unit circle with everything left blank to fill in.

What is the unit circle? The unit circle has a radius of one. The angle (in radians) that t intercepts forms an arc of length s. The range of sine and cosine is [−1, 1], and since the real line can wrap around the unit circle an infinite number of times, we can extend the domain values of t outside the interval [0, 2π]. As an example of a circle equation, (x − 7)² + (y − 9)² = 25 = 5² describes a circle of radius 5 centered at (7, 9).

A hands-on introduction to π: wrap a string around a circle. How many diameters long is the piece of string? (Use a marker to mark each diameter on the string, then tape the string to the page.)

Practice problems ask for exact values at special angles, for example: 1) tan 60°, 2) sin 225°, 3) sin 90°, 4) cos 150°, 5) cos 90°, 6) tan 240°, 7) cos 135°, 8) tan 150°. Other skills: identify angles coterminal to angles in standard position, given in radians or degrees; convert each degree measure into radians; use the unit circle to find the measure of the angle formed by the terminal side. Since a full revolution is 360° = 2π radians, 1° = π/180 radians and 1 radian = 180/π degrees.
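For instance, a standard worked conversion (not tied to any one worksheet):

$$150^\circ = 150\cdot\frac{\pi}{180} = \frac{5\pi}{6}\ \text{radians}, \qquad \frac{3\pi}{4}\ \text{radians} = \frac{3\pi}{4}\cdot\frac{180^\circ}{\pi} = 135^\circ$$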
Blank Unit Circle Worksheet: practice your skills by identifying the radian measure, degree measure, and coordinate for each angle; practice filling in the unit circle until you can complete it in 5 minutes. For a unit circle, r = 1 and so the circumference = 2π. An angle of 1 radian is an angle at the center of a circle, measured in the counterclockwise direction, that subtends an arc length equal to 1 radius.

Standard: explain how the unit circle in the coordinate plane enables the extension of trigonometric functions to all real numbers, interpreted as radian measures of angles traversed counterclockwise around the unit circle.

The unit circle has a radius of one, so the hypotenuse of the reference right triangle is 1. Since we are using the unit circle, we put the 30-60-90 (or 45-45-90) special right triangle inside it; for a point (x, y) at radius r, sin θ = y/r, cos θ = x/r, and tan θ = y/x. The highest value sine or cosine reaches on the unit circle is 1, and the lowest is −1. Note that sin 45° = cos 45° = √2/2, while tan 45° = 1. Amplitude is the vertical distance between the sinusoidal axis and the maximum or minimum values.

Typical exercises (Unit 6, Worksheets 7 and 8, Using the Unit Circle): use the unit circle to find the exact value of each of the following, at angles such as 120°, 135°, 210°, 225°, 270°, 300°, 315°, and 330°; determine the value of a trig function at an angle given in degrees or radians (e.g. sin 150°, sin 5π/6, tan 135°, cos 120°); given a point P located on the unit circle, find its missing coordinate; place the coordinates of each point in the ordered pairs outside the circle. The angles on the unit circle can be in degrees or radians. For this unit circle worksheet, students answer 90 short-answer questions. There is also an online quiz (coordinates only), with a printable worksheet available for download so you can take the quiz with pen and paper. The unit circle plays a vital role in trigonometry.

Finding an arc length when the angle is given in degrees: we know that if θ is measured in radians, then the length of an arc is given by s = rθ, so convert the angle to radians first.
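A generic worked example (my own numbers):

$$\theta = 60^\circ = \frac{\pi}{3}\ \text{radians}, \quad r = 6\ \text{cm} \;\Rightarrow\; s = r\theta = 6\cdot\frac{\pi}{3} = 2\pi \approx 6.28\ \text{cm}$$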
Play this game to review Algebra II. Standard: understand radian measure of an angle as the length of the arc on the unit circle subtended by the angle.

Worksheets in this collection include: degrees-radians conversion practice; trigonometry review with the unit circle (all the trig you'll ever need); unit circle trigonometry; complete the unit circle with degrees and radians; a unit circle handout; fill in the unit circle (positive and negative values of sin, cos, tan, sec, csc); find the exact value of each trigonometric function; and amplitude, period, phase shift, and initial interval (Unit 6, Worksheet 17).

Station #2: work the unit circle, and draw the unit circle as fast as you can. Basic solving-trig practice with answers is included. Unit test analysis: first, you should complete your self-assessment.
Yehlen's Trig Values of The Unit Circle Worksheet Video Answer Key Unit Circle Practice worksheet Help help with the worksheet from week 2. How to Graph the Sine, Cosine and Tangent Functions using the Unit Circle? The following diagram shows the unit circle and the trig graphs for sin, cos, and tan. 1 Radian and Degree Measure p. Name: Answer Key Class: Gr. An arc is a curved part of the circle. You can take notes in the margins or on the flip-side of each sheet. 2) Divide by “ a ”, so the coefficient of x2 is 1. This unit is divided into 3 equal parts. WORD DOCUMENT. Unit circle plays a vital role in trigonometry. Equation Of A Circle Worksheet Kids Activities. National Standards. Use the triangles above to state the EXACT VALUE of the trig functions WITHOUT using a calculator. Students will read and interpret data, as well as make graphs of their own. Title Test Test Answer Key Worksheet Answer Key; L. Then they will practice using singular, plural, and collective nouns in different contexts. - Get ready access to thousands of grade appropriate practice questions and lessons - Find information about nearby schools, libraries, school supply stores, conferences and bookstores. Unit 3 - Right Triangle Trigonometry. For this unit circle worksheet, students answer 90 short answer questions about the unit circle. Unit 6 worksheet 7 using the unit circle use the unit circle above to find the exact value of each of the following.
proofpile-shard-0030-351
{ "provenance": "003.jsonl.gz:352" }
## Educators

### Problem 1
Evaluate the integral using integration by parts with the indicated choices of $u$ and $dv$. $\displaystyle \int xe^{2x}\, dx$ ; $u = x$ , $dv = e^{2x} dx$
SL Sky L.

### Problem 2
Evaluate the integral using integration by parts with the indicated choices of $u$ and $dv$. $\displaystyle \int \sqrt{x} \ln x dx$ ; $u = \displaystyle \ln x$ , $dv = \sqrt{x} dx$
DQ Danjoseph Q.

### Problem 3
Evaluate the integral. $\displaystyle \int x \cos 5x dx$
DQ Danjoseph Q.

### Problem 4
Evaluate the integral. $\displaystyle \int ye^{0.2y} dy$
SL Sky L.

### Problem 5
Evaluate the integral. $\displaystyle \int te^{-3t} dt$
DQ Danjoseph Q.

### Problem 6
Evaluate the integral. $\displaystyle \int (x - 1) \sin \pi x dx$
DQ Danjoseph Q.

### Problem 7
Evaluate the integral. $\displaystyle \int (x^2 + 2x) \cos x dx$
WZ Wen Z.

### Problem 8
Evaluate the integral. $\displaystyle \int t^2 \sin \beta t dt$
Ma. Theresa A.

### Problem 9
Evaluate the integral. $\displaystyle \int \cos^{-1} x dx$
WZ Wen Z.

### Problem 10
Evaluate the integral. $\displaystyle \int \ln \sqrt{x} dx$
WZ Wen Z.

### Problem 11
Evaluate the integral. $\displaystyle \int t^4 \ln t dt$
WZ Wen Z.

### Problem 12
Evaluate the integral. $\displaystyle \int \tan^{-1} 2y dy$
WZ Wen Z.

### Problem 13
Evaluate the integral. $\displaystyle \int t \csc^2 t dt$
WZ Wen Z.

### Problem 14
Evaluate the integral. $\displaystyle \int x \cosh ax dx$
WZ Wen Z.

### Problem 15
Evaluate the integral. $\int \frac{x-1}{x^{2}+2 x} d x$
Anthony H.

### Problem 16
Evaluate the integral. $\displaystyle \int \frac{z}{10^z} dz$
WZ Wen Z.

### Problem 17
Evaluate the integral. $\displaystyle \int e^{2 \theta} \sin 3 \theta d \theta$
WZ Wen Z.

### Problem 18
Evaluate the integral. $\displaystyle \int e^{-\theta} \cos 2 \theta d \theta$
WZ Wen Z.

### Problem 19
Evaluate the integral. $\displaystyle \int z^3 e^z dz$
WZ Wen Z.

### Problem 20
Evaluate the integral. $\displaystyle \int x \tan^2 x dx$
WZ Wen Z.

### Problem 21
Evaluate the integral. $\displaystyle \int \frac{xe^{2x}}{(1 + 2x)^2} dx$
WZ Wen Z.

### Problem 22
Evaluate the integral. $\displaystyle \int (\arcsin x)^2 dx$
WZ Wen Z.

### Problem 23
Evaluate the integral. $\displaystyle \int_0^{\frac{1}{2}} x \cos \pi x dx$
WZ Wen Z.

### Problem 24
Evaluate the integral. $\displaystyle \int_0^1 (x^2 + 1) e^{-x} dx$
WZ Wen Z.

### Problem 25
Evaluate the integral. $\displaystyle \int_0^2 y \sinh y dy$
Willis J.

### Problem 26
Evaluate the integral. $\displaystyle \int_1^2 w^2 \ln w dw$
Anthony H.

### Problem 27
Evaluate the integral. $\displaystyle \int_1^5 \frac{\ln R}{R^2} dR$
WZ Wen Z.

### Problem 28
Evaluate the integral. $\displaystyle \int_0^{2 \pi} t^2 \sin 2t dt$
WZ Wen Z.

### Problem 29
Evaluate the integral. $\displaystyle \int_0^\pi x \sin x \cos x dx$
Ma. Theresa A.

### Problem 30
Evaluate the integral. $\displaystyle \int_1^{\sqrt{3}} \arctan (\frac{1}{x}) dx$
WZ Wen Z.

### Problem 31
Evaluate the integral. $\displaystyle \int_1^5 \frac{M}{e^M} dM$
WZ Wen Z.

### Problem 32
Evaluate the integral. $\displaystyle \int_1^2 \frac{(\ln x)^2}{x^3} dx$
WZ Wen Z.

### Problem 33
Evaluate the integral. $\displaystyle \int_0^{\frac{\pi}{3}} \sin x \ln (\cos x) dx$
WZ Wen Z.

### Problem 34
Evaluate the integral. $\displaystyle \int_0^1 \frac{r^3}{\sqrt{4 + r^2}} dr$
WZ Wen Z.

### Problem 35
Evaluate the integral. $\displaystyle \int_1^2 x^4 (\ln x)^2 dx$
WZ Wen Z.
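A quick numerical cross-check of Problem 1 (a sketch, not part of the problem set): integration by parts with the indicated choices gives $\int xe^{2x}\, dx = \frac{e^{2x}(2x - 1)}{4} + C$, and a composite Simpson estimate over [0, 1] should match the antiderivative difference.

```python
# Verify the closed form from integration by parts against Simpson's rule.
import math

def f(x):
    return x * math.exp(2 * x)

def F(x):  # antiderivative: e^(2x) * (2x - 1) / 4
    return math.exp(2 * x) * (2 * x - 1) / 4

n = 1000  # even number of subintervals on [0, 1]
h = 1.0 / n
simpson = f(0.0) + f(1.0) + sum((4 if k % 2 else 2) * f(k * h) for k in range(1, n))
simpson *= h / 3

print(simpson, F(1.0) - F(0.0))  # both approximately 2.09726
```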
### Problem 36
Evaluate the integral. $\displaystyle \int_0^t e^s \sin (t - s) ds$
WZ Wen Z.

### Problem 37
First make a substitution and then use integration by parts to evaluate the integral. $\displaystyle \int e^{\sqrt{x}} dx$
WZ Wen Z.

### Problem 38
First make a substitution and then use integration by parts to evaluate the integral. $\displaystyle \int \cos (\ln x) dx$
WZ Wen Z.

### Problem 39
First make a substitution and then use integration by parts to evaluate the integral. $\displaystyle \int_{\sqrt{\frac{\pi}{2}}}^{\sqrt{\pi}} \theta^3 \cos (\theta^2) d \theta$
WZ Wen Z.

### Problem 40
First make a substitution and then use integration by parts to evaluate the integral. $\displaystyle \int_0^\pi e^{\cos t} \sin 2t dt$
WZ Wen Z.

### Problem 41
First make a substitution and then use integration by parts to evaluate the integral. $\displaystyle \int x \ln (1 + x) dx$
WZ Wen Z.

### Problem 42
First make a substitution and then use integration by parts to evaluate the integral. $\displaystyle \int \frac{\arcsin (\ln x)}{x} dx$
WZ Wen Z.

### Problem 43
Evaluate the indefinite integral. Illustrate, and check that your answer is reasonable, by graphing both the function and its antiderivative (take $C = 0$). $\displaystyle \int xe^{-2x} dx$
WZ Wen Z.

### Problem 44
Evaluate the indefinite integral. Illustrate, and check that your answer is reasonable, by graphing both the function and its antiderivative (take $C = 0$). $\displaystyle \int x^{\frac{3}{2}} \ln x dx$
WZ Wen Z.

### Problem 45
Evaluate the indefinite integral. Illustrate, and check that your answer is reasonable, by graphing both the function and its antiderivative (take $C = 0$). $\displaystyle \int x^3 \sqrt{1 + x^2} dx$
WZ Wen Z.

### Problem 46
Evaluate the indefinite integral. Illustrate, and check that your answer is reasonable, by graphing both the function and its antiderivative (take $C = 0$). $\displaystyle \int x^2 \sin 2x dx$
WZ Wen Z.

### Problem 47
(a) Use the reduction formula in Example 6 to show that $$\int \sin^2 x dx = \frac{x}{2} - \frac{\sin 2x}{4} + C$$ (b) Use part (a) and the reduction formula to evaluate $\displaystyle \int \sin^4 x dx$.
Willis J.

### Problem 48
(a) Prove the reduction formula $$\int \cos^n x dx = \frac{1}{n} \cos^{n - 1} x \sin x + \frac{n - 1}{n} \int \cos^{n - 2} x dx$$ (b) Use part (a) to evaluate $\displaystyle \int \cos^2 x dx$. (c) Use parts (a) and (b) to evaluate $\displaystyle \int \cos^4 x dx$.
Carson M.

### Problem 49
(a) Use the reduction formula in Example 6 to show that $$\int_0^{\frac{\pi}{2}} \sin^n x dx = \frac{n - 1}{n} \int_0^{\frac{\pi}{2}} \sin^{n - 2} x dx$$ where $n \ge 2$ is an integer. (b) Use part (a) to evaluate $\displaystyle \int_0^{\frac{\pi}{2}} \sin^3 x dx$ and $\displaystyle \int_0^{\frac{\pi}{2}} \sin^5 x dx$. (c) Use part (a) to show that, for odd powers of sine, $$\int_0^{\frac{\pi}{2}} \sin^{2n + 1} x dx = \frac{2 \cdot 4 \cdot 6 \cdots 2n}{3 \cdot 5 \cdot 7 \cdots (2n +1)}$$
Willis J.

### Problem 50
Prove that, for even powers of sine, $$\int_0^{\frac{\pi}{2}} \sin^{2n} x dx = \frac{1 \cdot 3 \cdot 5 \cdots (2n - 1)}{2 \cdot 4 \cdot 6 \cdots 2n} \frac{\pi}{2}$$
WZ Wen Z.

### Problem 51
Use integration by parts to prove the reduction formula. $\displaystyle \int (\ln x)^n dx = x (\ln x)^n - n \displaystyle \int (\ln x)^{n - 1} dx$
WZ Wen Z.

### Problem 52
Use integration by parts to prove the reduction formula. $\displaystyle \int x^n e^x dx = x^n e^x - n \displaystyle \int x^{n - 1} e^x dx$
WZ Wen Z.
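A small check of Exercise 49 (a sketch, not part of the text): the reduction formula $I_n = \frac{n-1}{n} I_{n-2}$ with $I_0 = \pi/2$ and $I_1 = 1$ reproduces $I_3 = 2/3$ and $I_5 = 8/15$, matching the odd-power product formula in part (c).

```python
import math

def I(n):
    """I_n = integral of sin^n x over [0, pi/2] via the reduction formula."""
    if n == 0:
        return math.pi / 2
    if n == 1:
        return 1.0
    return (n - 1) / n * I(n - 2)

print(I(3), I(5))  # 2/3 and 8/15

# Part (c) for n = 2: I_5 = (2*4) / (3*5)
n = 2
num = math.prod(range(2, 2 * n + 1, 2))  # 2*4 = 8
den = math.prod(range(3, 2 * n + 2, 2))  # 3*5 = 15
print(num / den, I(2 * n + 1))           # both 8/15
```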
### Problem 53
Use integration by parts to prove the reduction formula. $\displaystyle \int \tan^n x dx = \frac{\tan^{n -1} x}{n - 1} - \displaystyle \int \tan^{n -2} x dx \quad (n \neq 1)$
WZ Wen Z.

### Problem 54
Use integration by parts to prove the reduction formula. $\displaystyle \int \sec^n x dx = \frac{\tan x \sec^{n - 2} x}{n - 1} + \frac{n - 2}{n - 1} \displaystyle \int \sec^{n - 2} x dx \quad (n \neq 1)$
WZ Wen Z.

### Problem 55
Use Exercise 51 to find $\displaystyle \int (\ln x)^3 dx$.
WZ Wen Z.

### Problem 56
Use Exercise 52 to find $\displaystyle \int x^4 e^x dx$.
WZ Wen Z.

### Problem 57
Find the area of the region bounded by the given curves. $y = x^2 \ln x$ , $y = 4 \ln x$
WZ Wen Z.

### Problem 58
Find the area of the region bounded by the given curves. $y = x^2 e^{-x}$ , $y = xe^{-x}$
WZ Wen Z.

### Problem 59
Use a graph to find approximate x-coordinates of the points of intersection of the given curves. Then find (approximately) the area of the region bounded by the curves. $y = \arcsin \left(\frac{1}{2} x \right)$, $y = 2 - x^2$
WZ Wen Z.

### Problem 60
Use a graph to find approximate x-coordinates of the points of intersection of the given curves. Then find (approximately) the area of the region bounded by the curves. $y = x \ln (x + 1)$ , $y = 3x - x^2$
WZ Wen Z.

### Problem 61
Use the method of cylindrical shells to find the volume generated by rotating the region bounded by the curves about the given axis. $y = \cos (\frac{\pi x}{2})$ , $y = 0$ , $0 \le x \le 1$ ; about the y-axis
WZ Wen Z.

### Problem 62
Use the method of cylindrical shells to find the volume generated by rotating the region bounded by the curves about the given axis. $y = e^x$ , $y = e^{-x}$ , $x = 1$ ; about the y-axis
WZ Wen Z.

### Problem 63
Use the method of cylindrical shells to find the volume generated by rotating the region bounded by the curves about the given axis. $y = e^{-x}$ , $y = 0$ , $x = -1$ , $x = 0$ ; about $x = 1$
WZ Wen Z.

### Problem 64
Use the method of cylindrical shells to find the volume generated by rotating the region bounded by the curves about the given axis. $y = e^x$ , $x = 0$ , $y = 3$ ; about the x-axis
WZ Wen Z.

### Problem 65
Calculate the volume generated by rotating the region bounded by the curves $y = \ln x$, $y = 0$ and $x = 2$ about each axis. (a) The y-axis (b) The x-axis
WZ Wen Z.

### Problem 66
Calculate the average value of $f(x) = x \sec^2 x$ on the interval $[0, \frac{\pi}{4}]$.
WZ Wen Z.

### Problem 67
The Fresnel function $S(x) = \displaystyle \int_0^x \sin \left(\frac{1}{2} \pi t^2 \right) dt$ was discussed in Example 5.3.3 and is used extensively in the theory of optics. Find $\displaystyle \int S(x)\, dx$. [Your answer will involve $S(x)$.]
Carson M.

### Problem 68
A rocket accelerates by burning its onboard fuel, so its mass decreases with time. Suppose the initial mass of the rocket at liftoff (including its fuel) is $m$, the fuel is consumed at rate $r$, and the exhaust gases are ejected with constant velocity $v_e$ (relative to the rocket). A model for the velocity of the rocket at time $t$ is given by the equation $$v(t) = -gt - v_e \ln \frac{m - rt}{m}$$ where $g$ is the acceleration due to gravity and $t$ is not too large. If $g = 9.8 m/s^2$, $m = 30,000 kg$, $r = 160 kg/s$, and $v_e = 3000 m/s$, find the height of the rocket one minute after liftoff.
Carson M.

### Problem 69
A particle that moves along a straight line has velocity $v(t) = t^2 e^{-t}$ meters per second after $t$ seconds. How far will it travel during the first $t$ seconds?
WZ Wen Z.
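For Problem 69, two rounds of integration by parts give $\int_0^t s^2 e^{-s}\, ds = 2 - e^{-t}(t^2 + 2t + 2)$, which tends to 2 metres as $t \to \infty$. A quick numerical confirmation (a sketch, not part of the text):

```python
import math

def closed_form(t):
    # distance travelled in the first t seconds, from integration by parts
    return 2 - math.exp(-t) * (t * t + 2 * t + 2)

def midpoint(t, n=50_000):
    # crude midpoint-rule estimate of the same integral, as a cross-check
    h = t / n
    return h * sum(((k + 0.5) * h) ** 2 * math.exp(-(k + 0.5) * h) for k in range(n))

for t in (1.0, 5.0, 60.0):
    print(t, closed_form(t), midpoint(t))  # the pairs agree; both tend to 2
```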
### Problem 70
If $f(0) = g(0) = 0$ and $f''$ and $g''$ are continuous, show that $$\int_0^a f(x) g^{\prime\prime} (x) dx = f(a)g^\prime(a) - f^\prime(a)g(a) + \int_0^a f^{\prime\prime} (x) g(x) dx$$
WZ Wen Z.

### Problem 71
Suppose that $f(1) = 2$, $f(4) = 7$, $f^\prime(1) = 5$, $f^\prime(4) = 3$ and $f^{\prime\prime}$ is continuous. Find the value of $\displaystyle \int_1^4 x f^{\prime\prime} (x)\ dx$.
Ma. Theresa A.

### Problem 72
(a) Use integration by parts to show that $$\int f(x) dx = xf (x) - \int xf^\prime (x) dx$$ (b) If $f$ and $g$ are inverse functions and $f^\prime$ is continuous, prove that $$\int_a^b f(x) dx = bf (b) - af (a) - \int_{f(a)}^{f(b)} g(y) dy$$ [Hint: Use part (a) and make the substitution $y = f(x)$.] (c) In the case where $f$ and $g$ are positive functions and $b > a > 0$, draw a diagram to give a geometric interpretation of part (b). (d) Use part (b) to evaluate $\displaystyle \int_1^e \ln x dx$.
WZ Wen Z.

### Problem 73
We arrived at Formula 6.3.2, $V = \displaystyle \int_a^b 2 \pi x f(x) dx$, by using cylindrical shells, but now we can use integration by parts to prove it using the slicing method of Section 6.2, at least for the case where $f$ is one-to-one and therefore has an inverse function $g$. Use the figure to show that $$V = \pi b^2 d - \pi a^2 c - \int_c^d \pi [g(y)]^2 dy$$ Make the substitution $y = f(x)$ and then use integration by parts on the resulting integral to prove that $$V = \int_a^b 2 \pi x f(x) dx$$
WZ Wen Z.

### Problem 74
Let $I_n = \displaystyle \int_0^{\frac{\pi}{2}} \sin^n x dx$. (a) Show that $I_{2n + 2} \le I_{2n + 1} \le I_{2n}$. (b) Use Exercise 50 to show that $$\frac{I_{2n + 2}}{I_{2n}} = \frac{2n + 1}{2n + 2}$$ (c) Use parts (a) and (b) to show that $$\frac{2n + 1}{2n + 2} \le \frac{I_{2n + 1}}{I_{2n}} \le 1$$ and deduce that $\lim_{n\to\infty}\frac{I_{2n + 1}}{I_{2n}} = 1$. (d) Use part (c) and Exercises 49 and 50 to show that $$\lim_{n\to\infty} \frac{2}{1} \cdot \frac{2}{3} \cdot \frac{4}{3} \cdot \frac{4}{5} \cdot \frac{6}{5} \cdot \frac{6}{7} \cdots \frac{2n}{2n - 1} \cdot \frac{2n}{2n + 1} = \frac{\pi}{2}$$ This formula is usually written as an infinite product: $$\frac{\pi}{2} = \frac{2}{1} \cdot \frac{2}{3} \cdot \frac{4}{3} \cdot \frac{4}{5} \cdot \frac{6}{5} \cdot \frac{6}{7} \cdots$$ and is called the Wallis product. (e) We construct rectangles as follows. Start with a square of area 1 and attach rectangles of area 1 alternately beside or on top of the previous rectangle (see the figure). Find the limit of the ratios of width to height of these rectangles.
WZ Wen Z.
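The Wallis product in Exercise 74(d) converges quite slowly, which is easy to see numerically (a sketch, not part of the text): each partial product is the factor $\frac{2n}{2n-1}\cdot\frac{2n}{2n+1}$ times the previous one.

```python
# Partial Wallis products creeping up on pi/2 ~ 1.5708.
import math

p = 1.0
for n in range(1, 10_001):
    p *= (2 * n) / (2 * n - 1) * (2 * n) / (2 * n + 1)
    if n in (1, 10, 100, 1_000, 10_000):
        print(n, p)

print(math.pi / 2)  # target value
```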
proofpile-shard-0030-352
{ "provenance": "003.jsonl.gz:353" }
# Tag Info

38
Using the brakes on the front of the bike causes your weight to shift forward. Additional weight allows more force before the tire will slip (skid). If you brake hard enough the back tire of your bike will lift up and at that point all of the mass is distributed on the front tire. Remember the maximum force is $F_{max} = \mu F_{normal}$ and $F_{normal}$ ...

5
The braking force acts between the tyre and the road. The centre of mass is above this point so there is a rotational effect which increases the force going down through the front tyre and decreases the force going down through the rear tyre. Because the amount of braking force the tyre is able to produce is limited by the amount of force going down through ...

4
On most bicycles, your center of gravity is not halfway between the front and back wheel - it is closer to the back wheel (image source: http://www.esquire.com/cm/esquire/images/d1/bike-080210-lg.jpg). This means that the back wheel carries more of the weight. Now assuming that you inflate the tires with the same pressure, this means that the contact ...

4
There are some good reasons why you should not take a sharp turn at high speeds. 1) On a flat road, the force of static friction is what provides the centripetal force to accelerate you through a curve. Unfortunately, there is a maximum value for static friction that depends greatly on the mass of the vehicle. The heavier it is, the more static friction you ...

4
Problem: Given Newton's second law $$\tag{1} m\ddot{q}^j~=~-\beta\dot{q}^j-\frac{\partial V(q,t)}{\partial q^j}, \qquad j~\in~\{1,\ldots, n\},$$ for a non-relativistic point particle in $n$ dimensions, subjected to a friction force, and also subjected to various forces that have a total potential $V(q,t)$, which may depend explicitly on time. I) ...

3
As you seem (correctly) to understand, your free body diagram for the car should be as in the left of the drawing below. The force summation does not close to a polygon. You know neither $F_N$ nor $F_F$, the components of the force on the car from the road. But you know their directions ($\theta$ is the banking angle) and you know that all the forces must ...

3
Theoretical Answer Consider that you travelled in your car at $10~{km}/{h}$ for one hour, then at $100~{km}/{h}$ for the next hour. First hour of the journey: You travelled a distance of 10km, so the work done is $$W=F\times s=10F$$ Second hour of the journey: You travelled a distance of 100km, so the work done is $$W=F\times s=100F$$ The work done is ...

3
You are wrong to assume constant friction. Rolling friction increases when you increase the speed of the car (see the formulae at the bottom). Also, aerodynamic drag increases with the square of speed (see the formula at the bottom). So, at higher speed, the car engine needs to counter higher rolling friction and air drag to maintain that speed. While the ...

3
In a perfect vacuum, on a frictionless road, you could just turn off the engine and the car would keep moving, never slowing down. However, in the real world, there are several effects that exert a force on a moving car, slowing it down, such as: rolling drag between the tires and the road surface, fluid drag from the air that the car moves through, and ...

3
Lots of excellent answers here, but for fun, let's think about this backwards. Imagine you have the world's first and only FRONT wheel drive motorcycle, and you rev it up and pop the clutch. What kind of launch do you think you would get with very little weight on the front tire?
The reverse is true during braking when the deceleration shifts the weight of ...

3
The theory of friction that is described in the source is the Prandtl-Tomlinson model. I'll explain it in two steps to answer your two main questions. Q1: What is meant by "partial irreversibility of the bonding force?" All bonding, including the bonding responsible for friction, is due to electrostatic attraction between atoms. Here is what that looks ...

3
Braking acts to stop the front tire. Friction acts at the contact patch under the front wheel to introduce a vector force directed towards the back of the bike. Since the force is not directed through the center of mass of the motorcycle/rider system, it introduces a moment or torque that acts to rotate the motorcycle and rider such that the back tire begins ...

3
Actually drag is NOT completely the friction between the object and fluid. Sometimes there can be almost no friction, but still highly significant drag. In a non-viscous fluid, there is no friction between the object and the fluid, but there IS still drag. The phenomenon of ram pressure transfers momentum losslessly between the object and the fluid without ...

2
I suspect this question is rooted in the widespread misconception that some external force is needed to keep the Earth rotating. That is not the case. Angular momentum is a conserved quantity. A rotating object that is not subject to any external torques will rotate with a constant angular momentum. This is the rotational analog of Newton's first law. An ...

2
Dynamic friction is constant; it doesn't change with speed. That is why the trick works. If you pull the cloth fast enough, the friction force will act for such a short time that it will not be enough to pull the stuff above it. In the case of the wheels, the friction force is also constant, but you make it last longer per unit of length because the wheels ...

2
The rate of temperature change will be the power per unit mass times the specific heat. So if you have a certain mass of water $M$ flowing per second, at a velocity $v$, losing $\Delta P$ pressure per second, then work done is $v\Delta P A$ and $A = \frac{M}{\rho v}$ . Then with a heat capacity $c$ (about 4.2 kJ/kg/K for water), and the relationship between ...

2
If there is not enough friction to keep the vehicle in its circular path, it will skid. The force needed for the circular path is the centripetal force: friction (the force keeping the car on the road) must be greater than that. Now the no-slip condition (centripetal force < friction) implies $\frac{mv^2}{r} < mg\mu$ . Your equation follows by simple ...

2
When you are moving along a banked curve, three forces are at play in your frame of reference: Gravity: pulling you down; and because of the bank, pulling you inward. Friction: preventing you from moving either in or out. Magnitude and direction depend on the coefficient of friction, the normal force, and the direction in which you are trying to move ...

2
For unlubricated friction, the simplistic model is quite good when the surfaces are flat and macroscopic deformation can be neglected. There are three things in particular I would like to elaborate on. First - the question of the equation of motion that you wrote down. The force of friction, at a microscopic level, is actually a consequence of the breaking ...

2
Well, for starters, centrifugal force is just a pseudo force. Get it? Let me make it a bit simpler...
See, centripetal force, which is $mv^2/r$, acts towards the center and centrifugal force away from the center. I think you're wondering why we used the centripetal force in the case of banking of the road, right? Well, we did that because, This is nothing ...

1
Tribology (not the study of tribes!) is the study of what happens when things 'rub'. This involves friction and wear when solids rub against other solids (such as in mechanical bearings) and the effect of liquids (such as 'lubricants') and other fluids. Friction at a solid-liquid interface is still called friction. It is a 'damping' or 'dissipative' force, ...

1
So, in the first case the frictional force has the same direction as the acceleration of the center of mass but it's not so in the latter one. Can someone explain the difference between those two? In your first case, you say "to make the wheel roll faster", but you don't say how this is done. Is it pushed? Do you apply a torque to the axle? If you ...

1
The friction between a solid and liquid is a function of viscosity. The best way to answer this is with a model setup called Couette flow where a fluid sandwiched between two plates is sped up by the movement of the top plate: Image source: University of Virginia, Physics 152 taught by Michael Fowler The friction force $F$ that the fluid exerts on ...

1
You can always imagine torque as being represented by two opposing forces at either side of the object. In this case imagine the torque being generated by two forces of magnitude F at the top and bottom of the disk, the top one pointing forwards and the bottom one, backwards. In order to generate torque $\tau$, these two forces must be of magnitude ...

1
You have ignored the car's friction with air! If we assume the car is an aerodynamic body (air flows over the car surface smoothly and does not separate) then the air friction on the car is proportional to the square of the car's speed.

1
The car is experiencing acceleration in two perpendicular directions. At the moment in question, the car is moving at the given velocity in a circle with the given radius. You have the formula for the centripetal force needed, and this is supplied by the radial force of friction. The car is also slowing down with the given acceleration. This requires a ...

1
Yes, it is correct. Kinetic friction acts when sliding only. Of course there is an instant of time where your feet slide. But you can neglect that...

1
There are two separate questions here: why doesn't $F_f=\mu_k F_N$ describe the motion? (i.e. the velocity doesn't reverse at large $t$) is $F$ dependent on velocity? I can't improve on Floris' answer to (2), but it's worth making a few comments on (1). The problem is that (1) is not an equation of motion. For example both $\vec{F}_f$ and $\vec{F}_N$ are ...

1
If the applied force equals the max static friction then: $$\boldsymbol F_{applied}-\boldsymbol F_{\max \text{ friction}} = 0 = m\boldsymbol a.$$ From which it can be seen that $\boldsymbol a =0$, taking into account that the forces are opposite. Therefore the object does not move.

Only top voted, non community-wiki answers of a minimum length are eligible
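Several answers above reduce to the same no-slip condition, $\frac{mv^2}{r} < mg\mu$. A small sketch with illustrative numbers (not taken from any answer): the mass cancels, so the largest cornering speed on a flat curve is $v_{max} = \sqrt{\mu g r}$.

```python
# Maximum speed before skidding on a flat curve, from m*v^2/r < m*g*mu.
import math

g = 9.81  # m/s^2
for mu, r in [(0.9, 50.0), (0.9, 200.0), (0.3, 50.0)]:  # dry vs. low-grip, assumed values
    v_max = math.sqrt(mu * g * r)  # note: the vehicle's mass has cancelled out
    print(f"mu={mu}, r={r} m -> v_max = {v_max:.1f} m/s ({v_max * 3.6:.0f} km/h)")
```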
proofpile-shard-0030-353
{ "provenance": "003.jsonl.gz:354" }
# Trigonometric Equations Using Half Angle Formulas

As you've seen many times, the ability to find the values of trig functions for a variety of angles is a critical component of a course in Trigonometry. If you were given an angle as the argument of a trig function that was half of an angle you were familiar with, could you solve the trig function? For example, if you were asked to find $\sin 22.5^\circ$ would you be able to do it? Keep reading, and in this Concept you'll learn how to do this.

### Guidance

It is easy to remember the values of trigonometric functions for certain common values of $\theta$. However, sometimes there will be fractional values of known trig functions, such as wanting to know the sine of half of the angle that you are familiar with. In situations like that, a half angle identity can prove valuable to help compute the value of the trig function. In addition, half angle identities can be used to simplify problems to solve for certain angles that satisfy an expression. To do this, first remember the half angle identities for sine and cosine:

$\sin \frac{\alpha}{2} = \sqrt{\frac{1 - \cos \alpha}{2}}$ if $\frac{\alpha}{2}$ is located in either the first or second quadrant.

$\sin \frac{\alpha}{2} = - \sqrt{\frac{1 - \cos \alpha}{2}}$ if $\frac{\alpha}{2}$ is located in the third or fourth quadrant.

$\cos \frac{\alpha}{2} = \sqrt{\frac{1 + \cos \alpha}{2}}$ if $\frac{\alpha}{2}$ is located in either the first or fourth quadrant.

$\cos \frac{\alpha}{2} = - \sqrt{\frac{1 + \cos \alpha}{2}}$ if $\frac{\alpha}{2}$ is located in either the second or third quadrant.

When attempting to solve equations using a half angle identity, look for a place to substitute using one of the above identities. This can help simplify the equation to be solved.

#### Example A

Solve the trigonometric equation $\sin^2 \theta = 2 \sin^2 \frac{\theta}{2}$ over the interval $[0, 2\pi)$.

Solution:

$\sin^2 \theta & = 2 \sin^2 \frac{\theta}{2} \\\sin^2 \theta & = 2 \left (\frac{1 - \cos \theta}{2} \right ) && \text{Half angle identity} \\1 - \cos^2 \theta & = 1 - \cos \theta && \text{Pythagorean identity} \\\cos \theta - \cos^2 \theta & = 0 \\\cos \theta (1 - \cos \theta) & = 0$

Then $\cos \theta = 0$ or $1 - \cos \theta = 0$, which is $\cos \theta = 1$. On the interval $[0, 2\pi)$ this gives $\theta = 0, \frac{\pi}{2}, \text{or } \frac{3\pi}{2}$.

#### Example B

Solve $2 \cos^2 \frac{x}{2} = 1$ for $0 \le x < 2 \pi$

Solution: To solve $2 \cos^2 \frac{x}{2} = 1$, first we need to isolate cosine, then use the half angle formula.

$2 \cos^2 \frac{x}{2} & = 1 \\\cos^2 \frac{x}{2} & = \frac{1}{2} \\\frac{1 + \cos x}{2} & = \frac{1}{2} \\1 + \cos x & = 1 \\\cos x & = 0$

$\cos x = 0$ when $x = \frac{\pi}{2}, \frac{3 \pi}{2}$

#### Example C

Solve $\tan \frac{a}{2} = 4$ for $0^\circ \le a < 360^\circ$

Solution: To solve $\tan \frac{a}{2} = 4$, first isolate tangent, then use the half angle formula.
$\tan \frac{a}{2} & = 4 \\\sqrt{\frac{1 - \cos a}{1 + \cos a}} & = 4 \\\frac{1 - \cos a}{1 + \cos a} & = 16 \\16 + 16 \cos a & = 1 - \cos a \\17 \cos a & = - 15 \\\cos a & = - \frac{15}{17}$

Using your graphing calculator, $\cos a = - \frac{15}{17}$ when $a \approx 152^\circ, 208^\circ$

### Vocabulary

Half Angle Identity: A half angle identity relates a trigonometric function of one half of an argument to a set of trigonometric functions, each containing the original argument.

### Guided Practice

1. Find the exact value of $\cos 112.5^\circ$
2. Find the exact value of $\sin 105^\circ$
3. Find the exact value of $\tan \frac{7 \pi}{8}$

Solutions:

1. $\cos 112.5^\circ\\= \cos \frac{225^\circ}{2}\\= - \sqrt{\frac{1 + \cos 225^\circ}{2}} \\ = - \sqrt{\frac{1 - \frac{\sqrt{2}}{2}}{2}}\\= - \sqrt{\frac{\frac{2 - \sqrt{2}}{2}}{2}}\\= - \sqrt{\frac{2 - \sqrt{2}}{4}}\\= - \frac{\sqrt{2 - \sqrt{2}}}{2}$

2. $\sin 105^\circ\\= \sin \frac{210^\circ}{2}\\= \sqrt{\frac{1 - \cos 210^\circ}{2}} \\= \sqrt{\frac{1 + \frac{\sqrt{3}}{2}}{2}}\\= \sqrt{\frac{\frac{2 + \sqrt{3}}{2}}{2}}\\= \sqrt{\frac{2 + \sqrt{3}}{4}}\\= \frac{\sqrt{2 + \sqrt{3}}}{2}$

3. $\tan \frac{7 \pi}{8}\\= \tan \frac{1}{2} \cdot \frac{7 \pi}{4}\\= \frac{1 - \cos \frac{7 \pi}{4}}{\sin \frac{7 \pi}{4}} \\= \frac{1 - \frac{\sqrt{2}}{2}}{- \frac{\sqrt{2}}{2}}\\= \frac{\frac{2 - \sqrt{2}}{2}}{- \frac{\sqrt{2}}{2}}\\= - \frac{2 - \sqrt{2}}{\sqrt{2}}\\= \frac{- 2 \sqrt{2} + 2}{2}\\= - \sqrt{2} +1$

### Concept Problem Solution

Knowing the half angle formulas, you can compute $\sin 22.5^\circ$ easily:

$\sin 22.5^\circ = \sin \left( \frac{45^\circ}{2} \right)\\=\sqrt{\frac{1-\cos 45^\circ}{2}}\\=\sqrt{\frac{1-\frac{\sqrt{2}}{2}}{2}}\\=\sqrt{\frac{\frac{2-\sqrt{2}}{2}}{2}}\\=\sqrt{\frac{2-\sqrt{2}}{4}}\\=\frac{\sqrt{2-\sqrt{2}}}{2}\\$

### Practice

Use half angle identities to find the exact value of each expression.

1. $\tan 15^\circ$
2. $\tan 22.5^\circ$
3. $\cot 75^\circ$
4. $\tan 67.5^\circ$
5. $\tan 157.5^\circ$
6. $\tan 112.5^\circ$
7. $\cos 105^\circ$
8. $\sin 112.5^\circ$
9. $\sec 15^\circ$
10. $\csc 22.5^\circ$
11. $\csc 75^\circ$
12. $\sec 67.5^\circ$
13. $\cot 157.5^\circ$

Use half angle identities to help solve each of the following equations on the interval $[0,2\pi)$.

1. $3\cos^2(\frac{x}{2})=3$
2. $4\sin^2 x=8\sin^2(\frac{x}{2})$
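A numerical cross-check of the worked values above (a sketch; it only confirms the decimals, not the derivations):

```python
import math

# Concept problem and Guided Practice 1: sin 22.5 deg and cos 112.5 deg
print(math.sin(math.radians(22.5)), math.sqrt(2 - math.sqrt(2)) / 2)    # ~ 0.38268
print(math.cos(math.radians(112.5)), -math.sqrt(2 - math.sqrt(2)) / 2)  # ~ -0.38268

# Example B: the solutions x = pi/2 and 3*pi/2 really satisfy 2*cos^2(x/2) = 1
for x in (math.pi / 2, 3 * math.pi / 2):
    print(2 * math.cos(x / 2) ** 2)  # 1.0 in both cases
```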
proofpile-shard-0030-354
{ "provenance": "003.jsonl.gz:355" }
# Calculating Energy

I'm a little bit confused about the wording used for the functions `calculate_energy` and `calculate_min_energy_column`. What does `w` stand for? Assume we have an entry in the matrix with coordinates (i, j), where `i` is the column and `j` is the row. Then the position in the array is defined by `yx_index(j, i, width)`. Should I consider all entries with `i < w` or `yx_index(j, i, width) < w`?

Hi, it is very helpful to look at why we need this `w`. Intuitively, the `w` tells you how many columns have already been removed, and so how many black columns should be at the right boundary. Thus, `i < w` is the concrete answer to your question. If you would use `yx_index(j, i, width) < w` you would be ignoring rows of your image (or even only parts of a single row if `w` is chosen poorly).

I hope this helps you,
Lukas
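A minimal sketch of the difference between the two tests (it assumes a row-major helper `yx_index(y, x, width) = y * width + x`, which is only my reading of the thread, not code from the course):

```python
def yx_index(y, x, width):
    # assumed row-major flattening of the image
    return y * width + x

width, w = 6, 4  # image width 6; threshold w = 4, as in the reply above

# Column test: keep entries whose column index i satisfies i < w ...
by_column = [(j, i) for j in range(2) for i in range(width) if i < w]

# ... whereas testing the flat index against w mixes up rows:
by_flat_index = [(j, i) for j in range(2) for i in range(width)
                 if yx_index(j, i, width) < w]

print(by_column)      # the first 4 columns of every row
print(by_flat_index)  # only the first 4 cells of row 0, i.e. part of one row
```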
proofpile-shard-0030-355
{ "provenance": "003.jsonl.gz:356" }
StarkEx Perpetual Trading enables running a decentralized perpetuals exchange that provides its users with self-custody and settles transactions in a trustless manner. A perpetuals exchange is not intended to be a marketplace for NFTs. To create an NFT marketplace, use StarkEx Spot Trading.

A position includes a collateral asset and one or more synthetic assets. Unlike spot trading, a trader can hold a leveraged position, which enables trading in asset amounts that are greater than the amount of funds actually invested.

## Margin

When you create a StarkEx-powered perpetuals exchange, you define the requirements for an initial margin. The maintenance margin determines the level of leverage available to a trader, and when a position's value falls below the maintenance margin, it is not well-leveraged. The maintenance margin is similar to total risk.

## Total value and total risk

The total value of a position is the sum of the value of the position's collateral and synthetic assets, expressed in the collateral currency. The total risk is a measurement that includes the total value of all synthetic assets in a position, and also takes into account a predetermined risk factor for each synthetic asset. As the risk factor increases, so does the total risk. You, the operator, determine the risk factor according to your business logic, and include it in the general configuration for your StarkEx instance.

Total risk is related to the maintenance margin as follows:

• When total value is equal to its total risk, the position is exactly at the maintenance margin.
• When total value is less than its total risk, the position is below the maintenance margin.
• When a position's total value is greater than its total risk, it is above the maintenance margin.

In the following examples, consider that Alice holds the following position:

• Collateral: 500 USDC.
• Synthetic: 4 BTC. 1 BTC = 20,000 USDC. `Risk_factor` = 0.5
• Synthetic: 6 ETH. 1 ETH = 6,000 USDC. `Risk_factor` = 0.1

Example: Total value

The total value of Alice's position is 116,500 USDC, calculated as follows: `500 + (4 * 20,000) + (6 * 6,000) = 116,500`

Example: Total risk

The total risk of Alice's position is 43,600 USDC, calculated over the synthetic assets as follows: `(4 * 20,000 * 0.5) + (6 * 6,000 * 0.1) = 43,600`

## Requirements for maintaining a position

A position must be well-leveraged. That is, $\text{total\_value} \ge \text{total\_risk}$, which is another way of saying that the total value of a position must be above the maintenance margin.

If $0 < \text{total\_value} < \text{total\_risk}$, you can liquidate the position without the position owner's signature by matching the position to another trader's signed limit order that would result in the position becoming well-leveraged.

If $\text{total\_value} < 0$, you can deleverage the position by matching it to another position that has the opposite balance with respect to some assets, without signatures from either position owner.

## Factors that can affect a position's balance

The following factors can affect a position's balance, potentially causing a position to be liquidated or deleveraged:
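A quick recomputation of the worked example above (a sketch; it assumes, as the arithmetic in the example suggests, that total risk sums over the synthetic assets only):

```python
# Alice's position from the example above.
collateral = 500
synthetics = [
    # (amount, price in USDC, risk_factor)
    (4, 20_000, 0.5),  # BTC
    (6, 6_000, 0.1),   # ETH
]

total_value = collateral + sum(amount * price for amount, price, _ in synthetics)
total_risk = sum(abs(amount) * price * rf for amount, price, rf in synthetics)

print(total_value)  # 116500
print(total_risk)   # 43600.0
print("well-leveraged" if total_value >= total_risk else "below the maintenance margin")
```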
proofpile-shard-0030-356
{ "provenance": "003.jsonl.gz:357" }
# Mills' Constant

Mills' constant is defined as the smallest real number $\theta$ such that $\lfloor\theta^{3^n}\rfloor$ is always a prime number for all natural $n$. The expression $\lfloor\theta^{3^n}\rfloor$ is a prime-generating formula: there is a whole set of real numbers $\theta$ for which it yields a prime for every natural $n$ (whether the smallest such number is rational or irrational is not known), and Mills' constant is the smallest element in that set.

If the Riemann Hypothesis is true, Mills' constant is approximately $1.3063778838630806904686144926...$ and the primes it generates start as $2, 11, 1361, 2521008887, 16022236204009818131831320183,$ $4113101149215104800030529537915953170486139623539759933135949994882770404074832568499, ...$.
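Given only the truncated decimal above, the first few of these primes can be reproduced directly (a sketch; since only about 28 digits of $\theta$ are given, the result is only trustworthy for small $n$):

```python
from decimal import Decimal, getcontext

getcontext().prec = 60
theta = Decimal("1.3063778838630806904686144926")  # truncated value from the text

for n in range(1, 5):
    value = theta ** (3 ** n)
    print(n, int(value))  # floor for positive values: 2, 11, 1361, 2521008887
```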
proofpile-shard-0030-357
{ "provenance": "003.jsonl.gz:358" }
# Non normalized set difference algorithm

I'm trying to set up an algorithm in Python for getting all rows of one data set (DataSet1) less any instances of data in a second set (DataSet2).

Objective:

    DataSet1:        DataSet2:
       A  B  C          A  B  C  D
    1  6  5  1       1  4  4  3  2
    2  4  4  3       2  4  4  3  1
    3  4  4  3       3  6  5  3  1
    4  4  4  3       4  5  3  1  1
    5  3  2  3       5  3  2  3  1

    DataSet1 - DataSet2 = ResultSet

    ResultSet:
       A  B  C
    1  6  5  1
    2  4  4  3

Notice that the data has many repeat rows, and when the difference operation is applied, the number of duplicate instances in DataSet2 is subtracted from the number of duplicate instances in DataSet1.

The parameters of this exercise are as follows:

1. Extra columns in the subtrahend (DataSet2) must be ignored.
2. Instances of a record in DataSet1 that also exist in DataSet2 must be removed from DataSet1 until either there are no instances of the duplicate left in DataSet1 or there are no instances left in DataSet2.
3. In line with the above, if a certain record is duplicated 3 times in DataSet1 and once in DataSet2, then two of those duplicates should remain in DataSet1. Else, if it's the other way around, 1 - 3 = -2, so all duplicates of that record are removed from the returned DataSet.
4. We must assume that the names and number of columns, rows, and index positions are all unknown.

My Algorithm So Far:

    import pandas as pd
    import numpy as np
    import copy

    def __sub__(self, arg):
        """docstring"""
        #Create a variable that holds the column names of self. We
        # will use this filter and thus ignore any extra columns in arg
        lstOutputColumns = self.columns.tolist()

        #Group data into normalized sets we can use to break the data
        # apart. These groups are returned using pd.DataFrame.size() which
        # also gives me the count of times a record occurred in the
        # original data set (self & arg).
        dfGroupArg = arg.groupby(arg.columns.tolist(), as_index=False).size().reset_index()
        dfGroupSelf = self.groupby(lstOutputColumns, as_index=False).size().reset_index()

        #Merge the normalized data so as to get all the data in the
        # subtrahend set (DataSet2) that matches a record in DataSet1, and
        # we can forget about the rest.
        dfMergedArg = dfGroupArg.merge(dfGroupSelf, how="inner", on=lstOutputColumns)

        #Add a calculated column to the merged subtrahend set to get the
        # difference between column counts that our groupby.size() appended
        # to each row two steps ago. This is all done using iloc so as to
        # avoid naming columns since I can't guarantee any particular column
        # name isn't already in use.
        dfMergedArg = pd.concat([dfMergedArg,
                                 pd.Series(dfMergedArg.iloc[:, -1] - dfMergedArg.iloc[:, -2])],
                                axis=1)

        #The result of the last three steps is a DataFrame with only
        # rows that exist in both sets, with the count of the times each
        # particular row exists on the far left of the table along with the
        # difference between those counts. It should end up so that the
        # last three columns of the DataFrame are
        # (DataSet2ct),(DataSet1ct),(DataSet1ct-DataSet2ct)
        # Now we iterate through rows and construct a new data set based on
        # the difference in the last column.
        lstRows = []
        for index, row in dfMergedArg.iterrows():
            if row.iloc[-1] > 0:
                dictRow = {}
                dictRow.update(row)
                lstRows += [dictRow] * row[-1]

        #Create a new dataframe with the rows we created in the
        # lstRows variable.
        dfLessArg = pd.DataFrame(lstRows, columns=lstOutputColumns)

        #This next part is a simple left anti-join to get the rest of
        # data out of DataSet1 that is unaffected by DataSet2.
        dfMergedSelf = self.DataFrameIns.merge(dfGroupArg, how="left", on=lstOutputColumns)
        dfMergedSelf = dfMergedSelf[dfMergedSelf[0] == np.nan]

        #Now we put both datasets back together in a single DataFrame
        dfCombined = dfMergedSelf.append(dfLessArgs).reset_index()

        #Return the result
        return dfCombined[lstOutputColumns]

This works; the reason I've posted it here is that it's not very efficient. The creation of the multiple DataFrames during a run causes it to be a memory hog. Also, the use of iterrows() I feel is like a last resort that inevitably results in slow execution. I think the problem is interesting though because it's about dealing with really un-ideal data situations that (let's face it) occur all the time. Alright StackExchange - please rip me apart now!

• Any reason for naming it __sub__? or is it a method inside some class? – hjpotter92 Jul 31 '18 at 13:10
• It's named __sub__ because I overwrote the '-' operator in a class where I'm implementing it. The algorithm is what's important though. If the algorithm's good I could implement it anywhere. – Jamie Marshall Jul 31 '18 at 16:20

You can remove the concatenation and the manual iteration over iterrows using pandas.Index.repeat, which uses numpy.repeat under the hood. You can feed this function an int, and each index will be repeated that many times; or an array of ints, and each index will be repeated the number of times given by the corresponding entry in the array. Combine that with filtering negative values and accessing elements by index using pandas.DataFrame.loc and you can end up with:

    dfMergedArg = dfGroupArg.merge(dfGroupSelf, how='inner', on=lstOutputColumns)
    dfNeededRepetitions = dfMergedArg.iloc[:, -1] - dfMergedArg.iloc[:, -2]
    dfNeededRepetitions[dfNeededRepetitions < 0] = 0
    dfLessArg = dfMergedArg.loc[dfMergedArg.index.repeat(dfNeededRepetitions)][lstOutputColumns]

Now the rest of the code would benefit a bit from PEP8, naming style (lower_case_with_underscore for variable names) and by not prefixing variable names with their type (dfSomething, lstFoo…). Lastly, checking for NaNs should be done using np.isnan and not ==:

    def __sub__(self, args):
        columns = self.columns.tolist()
        group_self = self.groupby(columns, as_index=False).size().reset_index()
        group_args = args.groupby(columns, as_index=False).size().reset_index()
        duplicated = group_args.merge(group_self, how='inner', on=columns)
        repetitions = duplicated.iloc[:, -1] - duplicated.iloc[:, -2]
        repetitions[repetitions < 0] = 0
        duplicates_remaining = duplicated.loc[duplicated.index.repeat(repetitions)][columns]
        uniques = self.DataFrameIns.merge(group_args, how='left', on=columns)
        uniques = uniques[np.isnan(uniques.iloc[:, -1])][columns]
        return uniques.append(duplicates_remaining).reset_index()
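For readers who want to try the idea end to end, here is a hypothetical standalone harness (a sketch, not the answer's code verbatim: it renames things, sidesteps the leftover `self.DataFrameIns` reference, and uses the question's sample data):

```python
import pandas as pd

def multiset_difference(left, right):
    """Non-normalized set difference: subtract duplicate counts per record."""
    cols = left.columns.tolist()  # extra columns of `right` are ignored
    l = left.groupby(cols).size().rename("n_left").reset_index()
    r = right.groupby(cols).size().rename("n_right").reset_index()
    m = l.merge(r, how="left", on=cols).fillna({"n_right": 0})
    keep = (m["n_left"] - m["n_right"]).clip(lower=0).astype(int)
    return m.loc[m.index.repeat(keep), cols].reset_index(drop=True)

ds1 = pd.DataFrame({"A": [6, 4, 4, 4, 3], "B": [5, 4, 4, 4, 2], "C": [1, 3, 3, 3, 3]})
ds2 = pd.DataFrame({"A": [4, 4, 6, 5, 3], "B": [4, 4, 5, 3, 2],
                    "C": [3, 3, 3, 1, 3], "D": [2, 1, 1, 1, 1]})

print(multiset_difference(ds1, ds2))  # one (4, 4, 3) row and one (6, 5, 1) row
```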
proofpile-shard-0030-358
{ "provenance": "003.jsonl.gz:359" }
# Mismatched fonts in Math [closed]

Hi there,

I've been using LibreOffice for years to write scientific text, due to its fantastic formula editor. I have just installed LibreOffice 4.2.5.2 with EN and HE languages on my new PC (Win 8.1 operating in English but installed with Hebrew as well). The problem is that when I type a new formula I get irregular characters instead of the ones I type. For instance, if I type the equal sign I get a floppy-disk icon in the display. I thought maybe the default Liberation font was not installed, so I moved all fonts to Times New Roman; as a result I now get a clock instead of =, but the font reserved word also displays as a clock. I found that the clock icon replaces the inverted question mark, and if there is no error in my formula the equal sign is still replaced by the floppy-disk icon. Note that I opened a document written with a previous version and the same happens. Any idea what could be wrong?

p.s. I just tested LibreOffice 4.2.0.4 portable on the same system with the same documents; there everything is normal, so it seems the new version or installation is the cause.

Thanks,
OrenG

### Closed for the following reason: question is not relevant or outdated, by Alex Kemp, close date 2016-03-01 01:13:17.567228

Please make sure that the OpenSymbol font is installed. That is a task for the operating system. You can download the file open____.ttf from http://cgit.freedesktop.org/libreoffi... if it is missing. Make sure that you have not established a font substitution for that file, unless you know exactly what you are doing. Make sure that you have not disabled the font in the settings of your operating system.
proofpile-shard-0030-359
{ "provenance": "003.jsonl.gz:360" }
Volume 414 - 41st International Conference on High Energy Physics (ICHEP2022) - Astroparticle Physics and Cosmology

Ending inflation with a bang: Higgs vacuum decay in $R^2$ gravity

A. Mantziris

Full text: Not available
proofpile-shard-0030-360
{ "provenance": "003.jsonl.gz:361" }
### In the news...hopefully

UH Astronomer Uses Ultra-Sensitive Camera to Measure the Size of a Planet Orbiting a Distant Star

A team of astronomers led by John Johnson of the University of Hawaii's Institute for Astronomy has used a new technique to measure the precise size of a planet around a distant star. They used a camera so sensitive that it could detect the passage of a moth in front of a lit window from a distance of 1,000 miles.

The camera, mounted on the UH 2.2-meter telescope on Mauna Kea, measures the small decrease in brightness that occurs when a planet passes in front of its star along the line-of-sight from Earth. These "planet transits" allow researchers to measure the diameters of worlds outside our solar system. "While we know of more than 330 planets orbiting other stars in our Milky Way galaxy, we can measure the physical sizes of only the few that line up just right to transit," explains Johnson.

The team studied a planet called WASP-10b, which was thought to have an unusually large diameter. They were able to measure its diameter with much higher precision than before, leading to the finding that it is one of the densest planets known, rather than one of the most bloated. The planet orbits the star WASP-10, which is about 300 light-years from Earth.

IfA astronomer John Tonry designed the camera, known as OPTIC (Orthogonal Parallel Transfer Imaging Camera), and it was built at the IfA. It uses a new type of detector, an orthogonal transfer array, the same type used in the Pan-STARRS 1.4 Gigapixel Camera, the largest digital camera in the world. These detectors are similar to the CCDs (charge-coupled devices) commonly used in scientific and consumer digital cameras, but they are more stable and can collect more light, which leads to higher precision. "This new detector design is really going to change the way we study planets. It's the killer app for planet transits," said team member Joshua Winn of MIT.

The precision of the camera is high enough to detect transits of much smaller planets than previously possible. It measures light to a precision of one part in 2,000. For the first time, scientists are approaching the precision needed to measure transits of Earth-size planets. Bigger planets block more of the star's surface and cause a deeper brightness dip.

The diameter of WASP-10b is only 6 percent larger than that of Jupiter, even though WASP-10b is three times more massive. Correspondingly, its density is about three times higher than Jupiter's. Because their interiors become partially degenerate, Jovian planets have a nearly constant radius across a wide range of masses.

The photometric precision is three to four times higher than that of typical CCDs and two to three times higher than the best CCDs, and comparable to the most recent results from the Hubble Space Telescope for stars of the same brightness.

Johnson is a National Science Foundation astronomy and astrophysics postdoctoral fellow working at the IfA. Working with Johnson and Winn are MIT graduate student Joshua Carter and Nicole Cabrera, a student at the Georgia Institute of Technology who spent the summer working with Johnson as a participant in the Research Experiences for Undergraduates program at the IfA.

The scientific paper presenting this discovery will be published in the Astrophysical Journal Letters. A preprint is available on the Web at http://arxiv.org/abs/0812.0029.

Built in 1970, the UH 2.2-meter telescope continues to produce cutting-edge scientific results.
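A quick sanity check of the quoted numbers (a sketch, not from the release): density scales as mass over radius cubed, so a planet three times Jupiter's mass with a radius only 6 percent larger should indeed come out around two and a half to three times as dense.

```python
# Density ratio relative to Jupiter: rho ~ M / R^3.
mass_ratio = 3.0      # WASP-10b vs. Jupiter, from the text
radius_ratio = 1.06   # diameter only 6 percent larger

density_ratio = mass_ratio / radius_ratio ** 3
print(round(density_ratio, 2))  # ~ 2.52, consistent with "about three times higher"
```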
goooooood girl said… I like the part where you talk about the density of Jovian planets. That gives me hope. steph said… I think I saw that camera in the Best Buy ad this past Sunday. I'll get you one for Christmas. ### On the Height of J.J. Barea Dallas Mavericks point guard J.J. Barea standing between two very tall people (from: Picassa user photoasisphoto). Congrats to the Dallas Mavericks, who beat the Miami Heat tonight in game six to win the NBA championship. Okay, with that out of the way, just how tall is the busy-footed Maverick point guard J.J. Barea? He's listed as 6-foot on NBA.com, but no one, not even the sports casters, believes that he can possibly be that tall. He looks like a super-fast Hobbit out there. But could that just be relative scaling, with him standing next to a bunch of extremely tall people? People on Yahoo! Answers think so---I know because I've been Google searching "J.J. Barea Height" for the past 15 minutes. So I decided to find a photo and settle the issue once and for all. I then used the basketball as my metric. Wikipedia states that an NBA basketball is 29.5 inches in circumfe… ### Finding Blissful Clarity by Tuning Out It's been a minute since I've posted here. My last post was back in April, so it has actually been something like 193,000 minutes, but I like how the kids say "it's been a minute," so I'll stick with that. As I've said before, I use this space to work out the truths in my life. Writing is a valuable way of taking the non-linear jumble of thoughts in my head and linearizing them by putting them down on the page. In short, writing helps me figure things out. However, logical thinking is not the only way of knowing the world. Another way is to recognize, listen to, and trust one's emotions. Yes, emotions are important for figuring things out. Back in April, when I last posted here, my emotions were largely characterized by fear, sadness, anger, frustration, confusion and despair. I say largely, because this is what I was feeling on large scales; the world outside of my immediate influence. On smaller scales, where my wife, children and friends reside, I… ### The Force is strong with this one... Last night we were reviewing multiplication tables with Owen. The family fired off doublets of numbers and Owen confidently multiplied away. In the middle of the review Owen stopped and said, "I noticed something. 2 times 2 is 4. If you subtract 1 it's 3. That's equal to taking 2 and adding 1, and then taking 2 and subtracting 1, and multiplying. So 1 times 3 is 2 times 2 minus 1." I have to admit, that I didn't quite get it at first. I asked him to repeat with another number and he did with six: "6 times 6 is 36. 36 minus 1 is 35. That's the same as 6-1 times 6+1, which is 35." Ummmmm....wait. Huh? Lemme see...oh. OH! WOW! Owen figured out x^2 - 1 = (x - 1) (x +1) So $6 \times 8 = 7 \times 7 - 1 = (7-1) (7+1) = 48$. That's actually pretty handy! You can see it in the image above. Look at the elements perpendicular to the diagonal. There's 48 bracketing 49, 35 bracketing 36, etc... After a bit more thought we…
# OTBN Tracer

The tracer consists of a module (otbn_tracer.sv) and an interface (otbn_trace_if.sv). The interface is responsible for directly probing the design and implementing any basic tracking logic that is required. The module takes an instance of this interface and uses it to produce trace data.

Trace output is provided to the simulation environment by calling the accept_otbn_trace_string function, which is imported via DPI (the simulator environment provides its implementation). Each call to accept_otbn_trace_string provides a trace record and a cycle count. There is at most one call per cycle. Further details are below.

A typical setup would bind an instantiation of otbn_trace_if and otbn_tracer into otbn_core, passing the otbn_trace_if instance into the otbn_tracer instance. However, there is no need for otbn_tracer to be bound into otbn_core provided it is given an otbn_trace_if instance.

## Trace Format

Trace output is generated as a series of records. Every record has zero or more header lines, followed by zero or more body lines. There is no fixed ordering within the header lines or the body lines. The type of any line can be identified by its first character.

The possible types for header lines are as follows:

- S: Instruction stall. An instruction is stalled.
- E: Instruction execute. An instruction completed its execution.
- U: Wipe in progress. OTBN is in the middle of an internal wipe.
- V: Wipe complete. An internal wipe has completed.

The possible types for body lines are:

- <: Register read. A register has been read.
- >: Register write. A register has been written.
- R: Memory load. A value has been loaded from memory.
- W: Memory store. A value has been stored to memory.

See the sections below for details of what information is in the different lines.

A well-formed record has exactly one header line, but it's possible for the tracer to generate other records if something goes wrong in the design. It is not the tracer's responsibility to detect bugs; the simulation environment should flag these as errors in a suitable way.

An instruction execution will be represented by zero or more S records, followed by one E record that represents the retirement of the instruction. The secure wipe phase at the end of OTBN's operation will be represented by zero or more U records, followed by a final V record.

Whilst the tracer does not aim to detect bugs, there may be instances where the signals it traces do something unexpected that requires special behaviour. Where this happens, the string "ERR" will be placed somewhere in the line that contains information about the unexpected signals. See information on Memory Write (W) lines below for an example. ERR will not be present in trace output for any other reason.
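Because every line type is keyed to its first character, a simulation environment can classify trace output with very little code. The sketch below is an illustrative Python consumer, not part of the tracer itself; the function and dictionary names are my own.

```python
# Classify OTBN trace lines by their first character, following the
# header/body line types described above. Illustrative consumer-side code.

HEADER_TYPES = {'S': 'stall', 'E': 'execute',
                'U': 'wipe_in_progress', 'V': 'wipe_complete'}
BODY_TYPES = {'<': 'reg_read', '>': 'reg_write',
              'R': 'mem_load', 'W': 'mem_store'}

def classify_line(line):
    """Return ('header' | 'body', type_name) for a single trace line."""
    tag = line.lstrip()[:1]
    if tag in HEADER_TYPES:
        return ('header', HEADER_TYPES[tag])
    if tag in BODY_TYPES:
        return ('body', BODY_TYPES[tag])
    raise ValueError('unrecognised trace line: {!r}'.format(line))

def record_is_well_formed(record_lines):
    """A well-formed record has exactly one header line and no ERR marker."""
    headers = sum(1 for l in record_lines if classify_line(l)[0] == 'header')
    has_err = any('ERR' in l for l in record_lines)
    return headers == 1 and not has_err
```

Since the tracer deliberately does not judge correctness, a helper like record_is_well_formed is where a simulation environment might choose to flag malformed records as test failures.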
### Record examples

(The first line of each example illustrates the instruction being traced to aid the example and is not part of the record.)

Executing BN.SID x26++, 0(x25) at PC 0x00000158:

    E PC: 0x00000158, insn: 0x01acd08b
    < w20: 0x78fccc06_2228e9d6_89c9b54f_887cf14e_c79af825_69be57d4_fecd21a1_b9dd0141
    < x25: 0x00000020
    < x26: 0x00000014
    > x26: 0x00000015
    W [0x00000020]: 0x78fccc06_2228e9d6_89c9b54f_887cf14e_c79af825_69be57d4_fecd21a1_b9dd0141

Executing BN.ADD w3, w1, w2 at PC 0x000000e8:

    E PC: 0x000000e8, insn: 0x002081ab
    < w02: 0x99999999_99999999_99999999_99999999_99999999_99999999_99999999_99999999
    > w03: 0x1296659f_bbc28370_23634ee9_22168ae8_613491bf_0357f208_320054d4_ed103473
    > FLAGS0: {C: 1, M: 0, L: 1, Z: 0}

## Line formats

### Instruction Execute (E) and Stall (S) lines

These indicate that an instruction is executing or stalling. An 'E' line indicates the instruction completed in the trace record's cycle. An instruction that is stalled will first produce a record containing an 'S' line and will produce a matching 'E' line in a future record on the cycle it unstalls and finishes. The line provides the PC and the raw instruction bits.

The instruction at 0x0000014c is 0x01800d13 and is stalling (a future record will contain a matching 'E' line):

    S PC: 0x0000014c, insn: 0x01800d13

The instruction at 0x00000150, 0x01acc10b, is executing and will complete:

    E PC: 0x00000150, insn: 0x01acc10b

### Secure wipe (U and V) lines

These indicate that a secure wipe operation is in progress. There is no other information, so the line consists of a bare U or V character.

### Register Read (<) and Write (>) lines

These show data that has been read from or written to either register files or special purpose registers (such as ACC or the bignum side flags). The line provides the register name and the data read/written.

Register x26 was read and contained value 0x00000018:

    < x26: 0x00000018

Register w24 had value 0xcccccccc_bbbbbbbb_aaaaaaaa_facefeed_deadbeef_cafed00d_d0beb533_1234abcd written to it:

    > w24: 0xcccccccc_bbbbbbbb_aaaaaaaa_facefeed_deadbeef_cafed00d_d0beb533_1234abcd

Accumulator had value 0x00000000_00000000_00311bcb_5e157313_a2fd5453_c7eb58ce_1a1d070d_673963ce written to it:

    > ACC: 0x00000000_00000000_00311bcb_5e157313_a2fd5453_c7eb58ce_1a1d070d_673963ce

Flag group 0 had value {C: 1, M: 1, L: 1, Z: 0} written to it:

    > FLAGS0: {C: 1, M: 1, L: 1, Z: 0}

### Memory Read (R) and Write (W) lines

These indicate activity on the Dmem bus. The line provides the address and the data written/read. For a read, the data is always WLEN bits and the address is WLEN aligned (for an execution of LW only a 32-bit chunk of that data is required). For a write, the write mask is examined. Where the mask indicates a bignum side write (BN.SID), full WLEN-bit data is provided and the address is WLEN aligned. Where the mask indicates a base side write (SW), only 32-bit data is provided and the address is 32-bit aligned (giving the full address of the written chunk).
Address 0x00000040 was read and contained value 0xcccccccc_bbbbbbbb_aaaaaaaa_facefeed_deadbeef_cafed00d_baadf00d_1234abcd:

    R [0x00000040]: 0xcccccccc_bbbbbbbb_aaaaaaaa_facefeed_deadbeef_cafed00d_baadf00d_1234abcd

Address 0x00000004 had value 0xd0beb533 written to it:

    W [0x00000004]: 0xd0beb533

In the event of an OTBN bug that produces bad memory masks on writes (where the write is neither to a full 256 bits nor an aligned 32-bit chunk), an error line is produced giving the full mask and data:

    W [0x00000080]: Mask ERR
      Mask: 0xfffff800_0000ffff_ffffffff_00000000_00000000_00000000_00000000_00000000
      Data: 0xcccccccc_bbbbbbbb_aaaaaaaa_facefeed_deadbeef_cafed00d_baadf00d_1234abcd

## Using with dvsim

To use this code, depend on the core file. If you're using dvsim, you'll also need to include otbn_tracer_sim_opts.hjson in your simulator configuration and add "{tool}_otbn_tracer_build_opts" to the en_build_modes variable.
Implied density of the underlying

If we know call option prices for every strike (underlying and expiry date being the same for all of the options), and the option price is a twice differentiable function of the strike, then we can calculate the probability density of the underlying on the expiry date. This density is implied by the option prices. If $C(K)$ is the price of the option with strike $K$, then the implied density of the underlying on the expiry date is $C''(K)$. Remarkably, this does not rely on any particular model for the underlying process. This fact is well known, but I could not find a proof of it anywhere in the literature. To me, the following informal reasoning sounds pretty convincing.

Suppose we have a family of European call options on the same underlying $S$ and the same expiry date $T$, with strikes $K$. The option price is a function of the strike: $C(K)$. If $C$ is twice differentiable, then $C''(K)$ is the probability density function of $S(T)$ implied by the option prices. In other words, if we have a derivative whose pay-off is contingent only on $S(T)$, then the price of this derivative is uniquely derived from $C''(K)$. If $f$ is the derivative's pay-off function, then the derivative's price $V$ is given by

$$V = \int_0^\infty f(K)\, C''(K)\, dK.$$

Informal proof

Firstly, we will show that the value of a binary option that pays 1 if $S(T) > K$ and 0 otherwise is

$$B(K) = -C'(K).$$

Consider a contract $X$ that consists of a bought call option with strike $K$ and a sold call option with strike $K+\delta$, and take $1/\delta$ of $X$. The price of $\tfrac{1}{\delta}X$ is

$$\frac{C(K) - C(K+\delta)}{\delta},$$

and its pay-off is

$$\frac{1}{\delta}\Big(\max(S(T)-K,\,0) - \max(S(T)-K-\delta,\,0)\Big) =
\begin{cases}
0 & \text{if } S(T) \le K,\\
\big(S(T)-K\big)/\delta & \text{if } K < S(T) \le K+\delta,\\
1 & \text{if } S(T) > K+\delta.
\end{cases}$$

If $\delta \to 0$, then this pay-off tends to the pay-off of a binary option with strike $K$. At the same time the price of that contract tends to the negated derivative of $C(K)$:

$$\lim_{\delta \to 0} \frac{C(K) - C(K+\delta)}{\delta} = -C'(K),$$

by definition of the derivative. Therefore, the price of the binary option is $B(K) = -C'(K)$.

Next, we will consider a contract with pay-off $f(S(T))$ contingent on $S(T)$, and replicate this pay-off with binary options with different strikes. We choose a number $\varepsilon$ and strikes $K_i = i\varepsilon$ for $i = 0, 1, \dots, n$. Then we buy $f(K_i) - f(K_{i-1})$ of the binary call with strike $K_i$ (for each $i = 1, \dots, n$) and $f(0)$ of the binary call with strike $0$. The pay-off of this combination is a staircase approximation of $f$: it equals $f(K_j)$ whenever $K_j < S(T) \le K_{j+1}$.

The cost of our sum of binary options will be

$$f(0)\,B(0) + \sum_{i=1}^{n}\big(f(K_i) - f(K_{i-1})\big)\,B(K_i)
= -f(0)\,C'(0) - \sum_{i=1}^{n}\big(f(K_i) - f(K_{i-1})\big)\,C'(K_i).$$

When $\varepsilon \to 0$, the pay-off of the sum of binary options tends to $f(S(T))$. At the same time $f(K_i) - f(K_{i-1}) \approx f'(K_i)\,\varepsilon$, so the price of the sum of the binary options is approximately

$$-f(0)\,C'(0) - \sum_{i=1}^{n} f'(K_i)\,C'(K_i)\,\varepsilon.$$

We notice that this is a Riemann sum for

$$-f(0)\,C'(0) - \int_0^\infty f'(K)\,C'(K)\,dK,$$

and integrating by parts, using the fact that $C'(K) \to 0$ as $K \to \infty$ (the call price flattens out for very large strikes, so $f(K)\,C'(K) \to 0$ for any pay-off of moderate growth), turns this expression into

$$-f(0)\,C'(0) + f(0)\,C'(0) + \int_0^\infty f(K)\,C''(K)\,dK = \int_0^\infty f(K)\,C''(K)\,dK.$$

Thus we saw that this integral is the limit of the value of a portfolio of binary options whose pay-off tends to $f(S(T))$. This means that $C''$ represents the probability density of $S(T)$ implied by the option prices.

The above reasoning lacks mathematical rigor, but it shows how one can construct a portfolio that replicates a contingent claim with arbitrary pay-off $f(S(T))$.
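As a numerical sanity check on this argument, one can manufacture call prices from a known model, difference them twice in strike, and compare the result with the model's true density. The sketch below does this in Python with Black-Scholes prices used purely as a test model (nothing in the argument above depends on that choice); with the rate set to zero there is no discount factor between $C''(K)$ and the density.

```python
import numpy as np
from scipy.stats import norm

# Test model: Black-Scholes call prices with zero rate, so that
# C''(K) equals the density of S(T) with no discount factor.
S0, r, sigma, T = 100.0, 0.0, 0.2, 1.0

def bs_call(K):
    d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

K = np.linspace(40.0, 200.0, 1601)   # strike grid, spacing 0.1
dK = K[1] - K[0]
C = bs_call(K)

# Implied density: second central difference of the call price in strike.
implied = (C[2:] - 2.0 * C[1:-1] + C[:-2]) / dK**2

# True lognormal density of S(T) under the same model, for comparison.
x = K[1:-1]
z = (np.log(x / S0) - (r - 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
true_density = norm.pdf(z) / (x * sigma * np.sqrt(T))

print(np.max(np.abs(implied - true_density)))  # tiny: C''(K) recovers the density
```

The same finite-difference idea is what one would apply to a smoothed market-implied volatility surface, where the quality of the recovered density depends heavily on how the option quotes are interpolated.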
# Project 3: Scheme Interpreter

## Due Friday, Oct 26, 2018 at 8pm

In this project, you will implement an interpreter for a subset of the R5RS Scheme programming language. The main purpose of this exercise is to gain a deeper understanding of the foundational elements of a programming language and how a language operates under the hood. Secondary goals are to write a substantial piece of code in Python and to gain practice with functional language constructs such as recursion and higher-order functions.

The project is divided into multiple suggested phases. We recommend completing the project in the order of the phases below.

# Background

An interpreter follows a multistep procedure in processing code. In a program file, code is represented as raw character data, which isn't suitable for interpreting directly. The first step then is to read the code and construct a more suitable internal representation that is more amenable to interpretation. This first step can be further subdivided into a lexing step, which chops the input data into individual tokens, and parsing, which generates program fragments from tokens. The end result of this reading process is a structured representation of a code fragment.

Once an input code fragment has been read, the interpreter proceeds to evaluate the expression that the fragment represents [1]. This evaluation process happens within the context of an environment that maps names to entities. Evaluation is recursive: subexpressions are themselves evaluated by the interpreter in order to produce a value.

[1] An interpreter for an imperative language, such as Python, will execute the code fragment if it represents a statement. Scheme, however, only has expressions, so the interpreter only evaluates code.

Upon evaluating a code fragment, an interactive interpreter will proceed to print out a representation of the resulting value. It will then proceed to read the next code fragment from the input, repeating this process. This iterative combination of steps is often referred to as a read-eval-print loop, or REPL for short. Interactive interpreters often provide a REPL with a prompt to read in a new expression, evaluating it and printing the result.

In this project, we have provided you with most of the implementation for the read step, though you will fill in a few remaining details in Phase 0. Your primary task, however, will be to implement the functionality needed by the eval step of the interpreter. We have also provided you with an implementation of the print step.

## Internal Representations

The parser uses the following representations of Scheme entities:

| Scheme Data Type | Internal Representation |
| --- | --- |
| Numbers | Python's built-in int and float types. |
| Symbols | Python's built-in str type. |
| Strings | Python's built-in str type, where the first and last characters are the double quotes ". |
| Booleans | Python's built-in True and False. |
| Pairs | The Pair class defined in scheme_core.py. |
| Empty List | The Nil object defined in scheme_core.py. |

## Distribution Code

    $ wget \
        https://eecs490.github.io/project-scheme-interpreter/starter-files.tar.gz
    $ tar xzf starter-files.tar.gz

Start by looking over the distribution code, which consists of the following files:

### Lexer and Parser

buffer.py: Utility classes for processing input. You should not have to change this file, and you do not need to understand how it works, but you will have to work with Buffer objects and thus you should be familiar with the current() and pop() methods of the Buffer class.

scheme_tokens.py: Lexer for Scheme.
You should not have to change this file, and you do not need to understand how it works.

scheme_reader.py: Parser for Scheme. Most of the parser has been implemented for you, but there are a couple of small pieces you must complete, indicated by comments. In completing the parser, you will need to use the Pair class that is defined in scheme_core.py. Running python3 scheme_reader.py results in an interactive interface that reads in a Scheme expression and prints out a representation of the expression without evaluating it.

### Interpreter Core

scheme_core.py: The core data structures and logic for the Scheme interpreter. A few basic pieces have been provided for you; examine this code closely and make sure you understand what each function or class does. This file is where you will implement most of the project.

scheme_primitives.py: Primitive Scheme procedures that are defined in the global frame in Scheme. Many procedures have been implemented for you. Comments indicate where you will need to add or modify code.

Most of the code you write will be in one of these two files.

### Interpreter Driver

scheme.py: The top-level driver of the Scheme interpreter, including the read-eval-print loop. You should not have to change anything unless you choose to implement Phase 4. An input file can be provided at the command line, as in:

    $ python3 scheme.py tests.scm

### Test Files

- phase1_tests.scm: Basic tests for Phase 1.
- phase2_all_tests.scm: Various tests for Phase 2.
- phase2_and_or_tests.scm: Tests for the and and or special forms.
- phase2_begin_tests.scm: Tests for the begin special form.
- phase2_define_tests.scm: Tests for the define special form.
- phase2_error_tests.scm: Error tests for Phase 2.
- phase2_eval_tests.scm: Tests for the eval procedure.
- phase2_if_tests.scm: Tests for the if special form.
- phase2_lambda_tests.scm: Tests for the lambda special form.
- phase2_let_tests.scm: Tests for the let special form.
- phase2_letstar_tests.scm: Tests for the let* special form.
- phase2_quote_tests.scm: Tests for the quote special form.
- phase3_tests.scm: Basic tests for Phase 3.
- phase4_tests.scm: Basic tests for Phase 4.
- yinyang.scm: The yin-yang puzzle in Scheme, another test for Phase 4.

The provided tests can be run with the given Makefile. It contains the following targets:

- all: run all tests for Phases 0-3
- phase0, ..., phase4: run the tests for an individual phase
- phase2_all, phase2_and_or, ...: run an individual Phase 2 test, e.g. phase2_all_tests.scm, phase2_and_or_tests.scm, and so on

## Command-Line Interface

Start the Scheme interpreter with the following command:

    $ python3 scheme.py

This will initialize the interpreter and place you in interactive mode. You can exit with an EOF (Ctrl-D on Unix-based systems, Ctrl-Z on some other systems). If you pass a filename on the command line, the interpreter will take input from the file instead:

    $ python3 scheme.py tests.scm

You can use a keyboard interrupt (Ctrl-C) to exit while a file is being interpreted.
If you use the -load command-line argument followed by Scheme filenames, the interpreter will interpret the code in the files and then place you in interactive mode in the resulting environment:

    $ python3 scheme.py -load tests.scm

If you pass the -e or --fail-on-error command-line arguments, the interpreter will allow exceptions to propagate to the top level, producing a stack trace that can be useful for debugging:

    $ python3 scheme.py -e
    scm> ()
    Traceback (most recent call last):
      File "scheme.py", line 108, in <module>
        main()
      File "scheme.py", line 104, in main
      File "scheme.py", line 25, in read_eval_print_loop
        handle_eval_result(result, expression, quiet)
      File "scheme.py", line 46, in handle_eval_result
        str(expression))
    AssertionError: scheme_eval returned None: ()

## Error Detection

Your interpreter should detect erroneous Scheme code and report an error. The read-eval-print loop we provide you prints an error message when it encounters a Python exception, so it is sufficient to raise a Python exception when you detect an error. It is up to you what information to provide on an error, but we recommend providing a message that is useful for debugging.

## Known Departures from R5RS

For simplicity, we depart from the Scheme standard in many places. The following is an incomplete list of discrepancies between our implementation and R5RS:

- We do not support character literals or most of the number formats.
- We are more lenient than the Scheme specification when it comes to identifiers. For instance, tokens such as +a or 9c are treated as valid identifiers.
- The string-literal format implemented by the lexer, as well as the format in which strings are printed, follows the Python rather than the Scheme specification.
- We do not support vectors.
- There is a large set of standard procedures and forms that we do not implement.

Though it is not required by the R5RS spec, your implementation must evaluate arguments to a procedure call in left-to-right order.

# Phase 0: Parser

Fill in the missing pieces of the Scheme parser in scheme_reader.py.

- Modify scheme_read() to properly handle quotation markers. In particular, a single quote followed by an expression should result in a new expression that applies the quote special form to the following expression:

      '(1 2) --> (quote (1 2))

  Your parser must also properly handle quasiquotation and both types of unquoting, though your Scheme interpreter is not required to support them. Refer to the Scheme documentation for the syntax of quasiquotation and unquoting. You will also find some tests in the docstring for scheme_read() that illustrate the expected behavior of the function.

- Modify read_tail() to support dotted pairs. Again, refer to the Scheme documentation for their syntax. The argument to read_tail() is an instance of the Buffer class defined in buffer.py. You will need to use the current() method, which returns the current input token in the buffer, and pop(), which removes the current input token and returns it. (Note that the buffer discards whitespace, since whitespace is not considered an input token.) You should not have to use anything else in buffer.py.

  If more than one item appears after the dot, raise an exception as follows:

      raise SyntaxError('Expected one element after .')

  You can determine that there is only one item after the dot by reading the next expression and then making sure that the following item in the buffer is the closing parenthesis ')'. A sketch of this logic appears below.
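To make these two changes concrete, here is a minimal sketch of the relevant logic, assuming the Pair class and Nil object described under Internal Representations and the Buffer interface described above. It is illustrative only; the provided scheme_reader.py differs in its handling of atoms, end-of-input, and error cases.

```python
# Illustrative sketch; adapt to the structure of the provided scheme_reader.py.
from scheme_core import Pair, Nil

QUOTE_NAMES = {"'": 'quote', '`': 'quasiquote',
               ',': 'unquote', ',@': 'unquote-splicing'}

def scheme_read(src):
    """Read one expression from the Buffer src."""
    token = src.pop()
    if token in QUOTE_NAMES:
        # A quotation marker wraps the next expression: '(1 2) --> (quote (1 2))
        return Pair(QUOTE_NAMES[token], Pair(scheme_read(src), Nil))
    if token == '(':
        return read_tail(src)
    return token  # an atom: number, symbol, string, or boolean

def read_tail(src):
    """Read the rest of a list, up to and including the closing parenthesis."""
    if src.current() == ')':
        src.pop()
        return Nil
    if src.current() == '.':
        src.pop()                      # discard the dot
        last = scheme_read(src)        # exactly one expression may follow it
        if src.current() != ')':
            raise SyntaxError('Expected one element after .')
        src.pop()                      # discard the closing parenthesis
        return last
    return Pair(scheme_read(src), read_tail(src))
```

The dotted-pair case simply returns the single trailing expression as the tail of the enclosing Pair, which is exactly how an improper list is represented internally.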
There are several tests in the docstring for read_tail() that you can look at as examples.

When you are finished, execute the following from the command line to run the integrated doctests:

    $ python3 -m doctest -v scheme_reader.py

Alternatively, use the Makefile to run the doctests (this leaves out the -v flag):

    $ make phase0

This will run each of the tests in the docstrings for scheme_read() and read_tail() and compare the output to the expected output contained in the docstrings. You will also be able to start an interactive prompt where you can type in Scheme expressions to be parsed.

## Primitive Procedures

Next, modify the primitive() and add_primitives() functions in scheme_primitives.py as needed so that primitive procedures are added to the global frame when the Scheme interpreter is started.

The primitive() function is a higher-order function intended to be used as a decorator, as in the following:

    @primitive('boolean?')
    def scheme_booleanp(x):
        return x is True or x is False

This is largely equivalent to the following:

    def scheme_booleanp(x):
        return x is True or x is False
    scheme_booleanp = primitive('boolean?')(scheme_booleanp)

To make this work, primitive() takes in a sequence of names and returns a decorator function. The decorator function takes in a Python function, and for each name that was passed to primitive(), it needs to add an object representing a Scheme primitive with the given name and the given Python function as its implementation to the _PRIMITIVES list. In the example above, a Scheme primitive with the name boolean? and implementation scheme_booleanp() should be added to _PRIMITIVES.

In order for this to work, you will have to come up with a representation of Scheme primitives that keeps track of the name and implementation of a primitive procedure. We recommend packaging this into an object that is a subtype of the provided SchemeExpr, and to place this code in scheme_core.py. You will need to override the is_procedure() method to return true for objects that represent primitive procedures.

## Evaluating Symbols

The interpreter code we provide can evaluate primitive values (e.g. numbers and strings), as you can see by examining scheme_eval() in scheme_core.py. The scheme_eval() function is the evaluator of the interpreter. It takes in a Scheme expression, in the form produced by the parser, and an environment, evaluates the expression in the given environment, and returns the result. You can start the interpreter and type in primitive values, which evaluate to themselves:

    $ python3 scheme.py
    scm> 3
    3
    scm> "hello world"
    "hello world"
    scm> #t
    #t

Modify scheme_eval() to support evaluating symbols in the current environment. This will allow you to evaluate symbols that are bound to primitive functions:

    scm> =
    [primitive function =]

The interpreter printout is dependent on your implementation, and you can implement the special __str__() method for your representation of primitives to produce the output you want. You do not have to match the output above.

If a symbol is undefined in the environment when it is evaluated, raise a Python exception, using code such as the following:

    raise NameError('unknown identifier ' + name)

This will be caught by the Scheme read-eval-print loop, and its message will be reported to the user:

    scm> undefined
    Error: unknown identifier undefined

## Evaluating Call Expressions

Design and implement a process for evaluating Scheme expressions consisting of lists, enabling evaluation of procedure calls. Place this code in scheme_core.py.
Modify scheme_eval() as necessary to run this code when it encounters a list. A list is evaluated by evaluating the first operand. If the result is a Scheme procedure, then the remaining operands are evaluated in order from left to right, and the procedure is applied to the resulting argument values. Special forms have a different evaluation procedure, which we will see in subsequent phases. If a list is ill-formed (i.e. it does not end with the null list), or if the first operand does not evaluate to a Scheme procedure (or special form in the later phases), your interpreter should raise an exception.

You will need to implement support for applying a primitive procedure to its arguments. This should allow you to evaluate expressions such as:

    scm> (boolean? #t)
    #t
    scm> (not #t)
    #f
    scm> (+ 1 2 3)
    6

## Implementing Primitive Procedures

Implement the remaining primitive procedures (except eval) in scheme_primitives.py. Check the comments for what procedures need to be added, and what their behavior should be. See the implementation of similar procedures for hints on how to write them. Defer implementation of eval until Phase 2, since you will not be able to test it until you have implemented the quote special form.

For apply, make sure to raise an exception if the first argument does not evaluate to a Scheme procedure, or if the last argument is not a list.

## Tests

When this phase is complete, you will be able to run the provided tests for the phase:

    $ make phase1

Alternatively:

    $ python3 -m doctest scheme_core.py
    $ python3 scheme_test.py phase1_tests.scm

Make sure to write your own tests as well, as only a few tests are provided for you.

# Phase 2: Special Forms

Extend the evaluation procedure in your interpreter to handle special forms, and implement the special forms below. Except where noted, their behavior should match that in the Scheme specification. Your code to implement this phase should be placed in scheme_core.py.

You will need to come up with a representation for special forms that keeps track of the name of the form and its implementation in Python. This is analogous to the representation of primitive procedures in Phase 1. Specifically, we recommend the following:

- Implement a special form as a Python function.
- Define a class for special forms that keeps track of the name and the Python function that implements the form. Override is_special_form() to return true for an object of this class.
- Write a decorator for special forms that is similar to the primitive decorator in the starter code.
- Modify scheme_eval() such that if the first subexpression of a call expression evaluates to a special form, the Python function for handling that special form is called. You will need to pass both the remainder of the call expression and the current environment to this function.

In standard R5RS Scheme, symbols that represent special forms are not reserved. Thus, it is possible to define a variable with the name if, define, etc. Your interpreter should allow this behavior by defining special forms in the global frame and allowing their names to be redefined in both the global frame and in child frames. You will need to complete the add_special_forms() function that will install the special forms in the given environment.

## User-Defined Procedures

A user-defined procedure can be introduced with the lambda special form.
You will need a representation of a user-defined procedure that keeps track of the definition environment (since Scheme procedures are statically scoped), the parameter list, and the body of the procedure. We recommend defining a subclass of SchemeExpr that represents user-defined procedures. You will need to override the is_procedure() method to return true for an object representing a primitive or user-defined procedure. You only have to implement lambdas that take a fixed number of arguments (the first form mentioned in Section 4.1.4 of the Scheme spec).

Evaluating the lambda expression itself requires the following:

- Check the format of the expression to make sure it is free of errors. Refer to the R5RS spec for the required format and what constitutes an error.
- Create an object representing a user-defined procedure. Save a reference to the definition environment, the list of parameters, and the body of the lambda in this object.
- The resulting value of the lambda expression is the newly created procedure object.

You will also need to add support for applying a user-defined procedure to an argument list. More specifically, scheme_eval() will need to properly handle call expressions where the first subexpression evaluates to a user-defined procedure. The process for applying a user-defined procedure is as follows:

- Evaluate the argument expressions in order from left to right.
- Check that the number of arguments matches the number of parameters required by the procedure.
- Create a new environment that extends the definition environment by a single empty frame. Use the extend() method of an environment to do so.
- Bind the parameter names to the argument values within the context of the newly created frame.
- Evaluate the body in the context of the new environment.

Raise a Python exception if an error is detected in either defining or applying a user-defined procedure.

## Derived Forms

The following forms can be implemented by translating them to simpler forms. Do not repeat yourself! If a translation is possible, construct the translated expression and evaluate that rather than repeating code.

### Definitions

You are required to implement the first two forms for define listed in Section 5.2 of the R5RS spec.

- The first form binds a variable to the value of an expression. You will need to evaluate the expression in the current environment and bind the given name to the resulting value in the current frame. This form cannot be translated into a simpler form.
- The second form defines a function. You only have to handle a fixed number of parameters, so you need not consider the case where the formals contain a period. Make use of the equivalence mentioned in the Scheme spec. Construct the lambda expression by appropriately using the Pair class, evaluate it, and bind the variable to the result.

You do not have to check that define is at the top level or the beginning of a body. For this project, the define form should evaluate to the name that was just defined:

    scm> (define x 3)
    x
    scm> (define (foo x) (+ x 1))
    foo

### Binding Constructs

Implement the let form, described in Section 4.2.2 of the Scheme spec. Use the following translation to a lambda definition and application:

\begin{align*}
(\texttt{let}~ (&(name_1~ expr_1)\\
&...\\
&(name_k~ expr_k))\\
body&)\\
\Longrightarrow&\\
((\texttt{lam}&\texttt{bda}~ (name_1~ ...~ name_k)~ body)\\
expr&_1~ ...~ expr_k)
\end{align*}

You do not have to implement the "named let" form described in Section 4.2.4. A sketch of this translation appears below.
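To show how such a translation can be assembled from Pair objects, here is a hedged sketch of the let case. The attribute names first and second on Pair, and the helper names, are assumptions made for the example; match them to the actual Pair class in your starter code.

```python
# Illustrative sketch of translating (let ((n1 e1) ... (nk ek)) body)
# into ((lambda (n1 ... nk) body) e1 ... ek). Names are hypothetical.
from scheme_core import Pair, Nil

def split_bindings(bindings):
    """Return (names, exprs) as two Scheme lists from a let binding list."""
    if bindings is Nil:
        return Nil, Nil
    binding = bindings.first                  # one (name expr) pair
    names, exprs = split_bindings(bindings.second)
    return (Pair(binding.first, names),
            Pair(binding.second.first, exprs))

def translate_let(args):
    """args is the Pair list (bindings body...); returns the translated call."""
    bindings, body = args.first, args.second
    names, exprs = split_bindings(bindings)
    lam = Pair('lambda', Pair(names, body))   # (lambda (n1 ... nk) body...)
    return Pair(lam, exprs)                   # ((lambda ...) e1 ... ek)
```

Evaluating the returned expression with your scheme_eval(translate_let(args), env) then gives let its meaning without duplicating any of the lambda logic, which is exactly the "do not repeat yourself" point above.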
Also implement the let* form from Section 4.2.2. This can be translated to let using the following recursive rules:

- Base case: if there are no bindings or only one, then let* is equivalent to let. Thus:

\begin{align*}
(\texttt{let*}~ ()~ body)~~~ &\Longrightarrow~~~ (\texttt{let}~ ()~ body)\\
(\texttt{let*}~ ((name~ expr))~ body)~~~ &\Longrightarrow~~~ (\texttt{let}~ ((name~ expr))~ body)
\end{align*}

- Recursive case: if there are two or more bindings, then the first is moved to its own let, whose body becomes the let* minus its first binding:

\begin{align*}
(\texttt{let*}~ (&(name_1~ expr_1)\\
&(name_2~ expr_2)\\
&...\\
&(name_k~ expr_k))\\
body&)\\
\Longrightarrow&\\
(\texttt{let}~ ((name&_1~ expr_1))\\
(\texttt{let*}~ (&(name_2~ expr_2)\\
&...\\
&(name_k~ expr_k))\\
body&)\\
)~~~~~~~~~~~~~~~~~~~&\\
\end{align*}

## Other Forms

Implement the following standard forms. Refer to the R5RS spec for their semantics.

- begin: This does not introduce a new frame, so you cannot translate this to a lambda.
- if: If the test yields a false value and there is no alternate, then the conditional should evaluate to the predefined Okay object.
- and
- or

## Quotation

Implement the quote form, which merely returns its argument without evaluating it. You do not have to implement quasiquote, unquote, or unquote-splicing.

## eval Procedure

Implement the procedure eval in scheme_primitives.py:

    (eval expression environment)

where expression is a valid Scheme expression represented as data and environment is an environment object that resulted from a call to scheme-report-environment or null-environment. This should evaluate expression in the given environment. The following are some examples:

    scm> (eval '(+ 1 3) (scheme-report-environment 5))
    4
    scm> (define env (scheme-report-environment 5))
    env
    scm> (eval '(define x 3) env)
    x
    scm> (eval 'x env)
    3
    scm> (eval '(+ 1 x) env)
    4

Make sure to raise an exception if the second argument to eval is not an environment.

## Errors

We recommend translating special forms to more fundamental equivalents where possible, to simplify the tasks of error checking and of implementing continuations in the optional Phase 4. Your implementation of each special form must check for errors where appropriate and raise a Python exception if an error occurs. Examples of errors include a variable definition that is provided more than one expression, a definition with an improper name, a procedure with multiple parameters with the same name, an if with fewer than two arguments, and so on. Specific examples of these:

    (define x 3 4)
    (define 4 5)
    (lambda (x y x) 3)
    (if #t)

Refer to the Scheme documentation for what constitutes erroneous cases for each special form.

## Tests

When this phase is complete, you will be able to run the provided tests for the phase:

    $ make phase2

You can also run an individual test file for this phase, as in the following:

    $ make phase2_begin

Make sure to write your own tests as well.

# Phase 3: Tail-Call Optimization

Scheme implementations are required to be properly tail-recursive, and they must perform tail-call optimization where possible. We recommend you initially implement your interpreter without tail-call optimization. Once you have the core functionality implemented, you can then restructure your interpreter to support tail-call optimization. You must support it in all contexts required by the Scheme specification.
Proper tail recursion requires that your interpreter use a constant number of active Scheme frames for tail-recursive procedures, meaning that the sizes of the environments in scheme_eval() remain constant. In addition, your interpreter must use a constant amount of memory in Python -- it cannot recursively call scheme_eval() in tail contexts, since such a call creates an additional Python stack frame. Instead, you will need to iteratively evaluate tail expressions rather than recursively calling scheme_eval(). You will need to do the following to accomplish this:

- Define a class that encapsulates a tail expression with its environment.
- Instead of calling scheme_eval() from a tail context of a special form, return an object representing the tail expression and its environment.
- Modify scheme_eval() to handle tail expressions. Evaluation will need to be performed in a loop, and encountering a tail expression should repeat the loop with the expression and environment extracted from the tail-expression object. On the other hand, if the result of evaluation is not a tail expression, then scheme_eval() should return that result.

You will still need to recursively call scheme_eval() in non-tail contexts.

When this phase is complete, you will be able to run the provided tests for the phase:

    $ make phase3

Without tail-call optimization, your interpreter will encounter a RecursionError on this test due to the recursive calls to scheme_eval(). Once you've implemented tail-call optimization, the test should work correctly. Make sure to write your own tests to ensure that tail-call optimization is applied in all required contexts.

# Phase 4 (Optional): Continuations and call/cc

This phase will require you to modify scheme.py. As such, if you choose to implement it, we recommend making the required modifications in a separate copy of your project, such as a separate branch if you are using git.

A Scheme feature you may implement is continuations, along with the call-with-current-continuation special form. In addition, support the call/cc shorthand for call-with-current-continuation. For this phase, do not implement values, call-with-values, or dynamic-wind.

A continuation represents the entire intermediate state of a computation. When you encounter call/cc, your interpreter will need to record the current execution state. This will require backtracking through the execution stack and packaging up the state at each point in a format that will allow you to reconstruct the execution stack whenever the continuation is invoked. When you build a continuation, the actual call to call/cc needs to be replaced by a "hole" that can be filled in when the continuation is invoked. When a continuation is invoked, you should not repeat any computations that have been completed. Thus, the continuation for

    (begin (display 3) (+ 2 3) (+ 1 (call/cc foo)) (- 3 5))

should conceptually represent

    (begin (+ 1 <hole>) (- 3 5))

where the hole is filled in when the continuation is invoked.

After building a continuation, you should immediately resume the newly built continuation, with the hole filled in with a call to the target of the call/cc and the continuation object as its argument. In the example above:

    (begin (+ 1 (foo <continuation>)) (- 3 5))

A continuation object can be invoked an arbitrary number of times. It must take a single argument when it is invoked, such as:

    (<continuation> 2)

When a continuation is invoked, the interpreter must abandon the current execution state and resume the invoked continuation instead.
The argument of the continuation object fills the hole in the continuation:

    (begin (+ 1 2) (- 3 5))

Abandoning the current execution state requires unwinding the current computation until you reach the read-eval-print loop. (You should support an unbounded number of continuation invocations, so it is not acceptable to recursively call the read-eval-print loop.) Consider using a Python exception to facilitate abandoning the execution state. Once that is done, resume the computation represented by the invoked continuation.

When this phase is complete, you will be able to run the provided tests for the phase:

    $ make phase4

You will also be able to run the yin-yang puzzle as follows:

    $ python3 scheme.py yinyang.scm

Since this phase is optional, it will not be graded, and it will not be tested on the autograder. Regardless of whether you complete this phase, we recommend turning in a copy of your project that does not contain this phase.

# Grading

The autograded portion of this project will constitute approximately 90% of your total score, and the remaining 10% or so will be from hand grading. The latter will evaluate the comprehensiveness of your test cases as well as your programming practices, such as avoiding unnecessary repetition. In order to be eligible for hand grading, your project must achieve at least half the points on the autograded portion (i.e. about 45% of the project total).

You are required to adhere to the coding practices in the course style guide. We will use the automated tools listed there to evaluate your code. You can run the style checks yourself as described in the guide.

# Submission

All code that you write for the interpreter must be placed in scheme_reader.py, scheme_primitives.py, or scheme_core.py. We will test all three files together, so you are free to change interfaces that are internal to these files. You may not change any part of the interface that is used by scheme.py or scheme_test.py.
# Analysis and Partial Differential Equations

The aim of this seminar day is to bring together twice a year specialists, early career researchers and PhD students working in analysis, partial differential equations and related fields in Australia, in order to report on research, foster contacts and begin new research projects between the participants.

This seminar day is organised jointly with the related research groups of the Australian National University, Macquarie University, University of Newcastle, University of New South Wales, University of Sydney, University of Wollongong, and supported by the Australian Mathematical Sciences Institute (AMSI). In particular, this event has the intention to give PhD students and early career researchers the opportunity to present their research to a wider audience.

Every interested researcher is invited to attend and participate at this event. Please register if you would like to attend (free). AMSI supports students and early career researchers without access to a suitable research grant or other sources; see below for further information.

## Venue

University of Wollongong: McKinnon Building (Building 67), Room 302. See also the information on how to get there.

## Program

09:55–10:00 - Welcome
10:00–10:30 - Rodney Nillsen (Wollongong): Hilbert spaces of functions that are sums of finite differences
10:30–10:50 - Joshua Peate (Macquarie): Riesz transforms in the absence of preservation
10:50–11:10 - Morning Tea
11:10–11:30 - Christopher Thornett (Sydney): Periodic-parabolic eigenvalue problems with a large parameter
11:30–12:10 - Bishnu Lamichhane (Guest Speaker, Newcastle): A stabilized mixed finite element method based on nearly incompressible elasticity
12:10–13:40 - Lunch Break
13:40–14:10 - Xuan Duong (Macquarie): Besov and Triebel-Lizorkin spaces associated with Hermite operators
14:10–14:40 - Pierre Portal (ANU): Non-autonomous parabolic systems with rough coefficients
14:40–15:10 - Galina Levitina (UNSW): The spectral shift function and the Witten index
15:10–15:30 - Afternoon Tea
15:30–16:00 - Ian Doust (UNSW): A logarithmic Sobolev inequality for the invariant measure of the periodic KdV equation
16:00–16:30 - John Harrison (Newcastle): Asymptotic behaviour of random walks on certain matrix groups
16:30–16:50 - Sean Gomes (ANU): Quantum Ergodicity on the Mushroom Billiard

## Abstracts of Talks

### A logarithmic Sobolev inequality for the invariant measure of the periodic KdV equation

Ian Doust (University of New South Wales)

#### Abstract

Logarithmic Sobolev inequalities arose in quantum field theory and were introduced to describe smoothing properties of Markov semigroups. In 1975 L. Gross proved a log-Sobolev inequality for the Gaussian measure on $\mathbb{R}^n$, and the question immediately arose as to just which measures satisfied a similar inequality. The periodic KdV equation $u_t = u_{xxx} + \beta u u_x$ arises from a Hamiltonian system with an infinite-dimensional phase space $L^2(\mathbb{T})$. J. Bourgain showed that there is a Gibbs measure $\nu = \nu_r^\beta$ on each closed ball $B_r$ of radius $r$ in $L^2(\mathbb{T})$ such that the Cauchy problem for this PDE is well-posed on the support of $\nu$, and such that $\nu$ is invariant under the KdV flow. In this talk I shall discuss some joint work with Gordon Blower (Lancaster) in which we prove that these measures satisfy logarithmic Sobolev inequalities.
These log-Sobolev inequalities are then used to prove concentration inequalities for Lipschitz functions on the balls $B_r$.

Back to Program

### Besov and Triebel-Lizorkin spaces associated with Hermite operators

Xuan Duong (Macquarie University)

#### Abstract

Consider the Hermite operator $H = -\Delta + |x|^2$ on the Euclidean space $\mathbb{R}^n$. In this talk, we develop a theory of homogeneous and inhomogeneous Besov and Triebel-Lizorkin spaces associated with the Hermite operator. Applications include the boundedness of negative powers and spectral multipliers of the Hermite operators on some appropriate Besov and Triebel-Lizorkin spaces. This is joint work with The Anh Bui which appeared in the Journal of Fourier Analysis and Applications recently (doi:10.1007/s00041-014-9378-6).

Back to Program

### Asymptotic behaviour of random walks on certain matrix groups

John Harrison (University of Newcastle)

#### Abstract

Random walks have been used to model stochastic processes in many scientific fields. I will introduce invariant random walks on groups, where the transition probabilities are given by a probability measure. The Poisson boundary will also be discussed. It is a space associated with every group random walk that encapsulates the behaviour of the walks at infinity and gives a description of certain harmonic functions on the group in terms of the essentially bounded functions on the boundary. I will then discuss my attempts to describe the boundary for a certain family of upper-triangular matrix groups.

Back to Program

### A stabilized mixed finite element method based on nearly incompressible elasticity

Bishnu Lamichhane (University of Newcastle)

#### Abstract

We present a finite element method for nearly incompressible elasticity using a mixed formulation of linear elasticity in the displacement-pressure form. We combine the idea of stabilization of an equal order interpolation for the Stokes equations with the idea of biorthogonality to get rid of the bubble functions used in an earlier publication with a biorthogonal system. We work with a Petrov-Galerkin formulation for the pressure equation, where the trial and test spaces are different and form a $g$-biorthogonal system. This novel approach leads to a displacement-based low order finite element method for nearly incompressible elasticity for simplicial, quadrilateral and hexahedral meshes. Numerical results are provided to demonstrate the efficiency of the approach.

Back to Program

### The spectral shift function and the Witten index

Galina Levitina (University of New South Wales)

#### Abstract

The Witten index of an operator $T$ can be considered as a substitute for the Fredholm index of $T$ whenever the operator $T$ ceases to be Fredholm. The Witten index is closely related to the notion of the spectral shift function. In particular, if $A$ is a self-adjoint operator on a Hilbert space $H$, $B$ is a self-adjoint bounded operator on $H$ and $\theta$ is a parameter function on $\mathbb{R}$, then the Witten index of the operator $D_A = \frac{d}{dt}\otimes 1 + 1\otimes A + M_\theta\otimes B$ on the Hilbert space $L^2(\mathbb{R})\otimes H$ can be computed as the value of the spectral shift function for the pair $(A+B, A)$ at zero, where $M_\theta$ is the operator given by multiplication by the function $\theta$. However, the assumptions on the operators $A$ and $B$ rule out the classical differential operators even in low dimensions.
We generalize the earlier results and compute the actual value of the Witten index of the operator $D_A = \frac{d}{dt}\otimes 1 - 1\otimes i\frac{d}{dx} + M_\theta\otimes M_f$, $\operatorname{dom}(D_A) = W^{1,2}(\mathbb{R}^2)$, on the Hilbert space $L^2(\mathbb{R}^2) = L^2(\mathbb{R})\otimes L^2(\mathbb{R})$.

Back to Program

### Quantum Ergodicity on the Mushroom Billiard

Sean Gomes (Australian National University)

#### Abstract

In quantum chaos, we make use of tools from harmonic and microlocal analysis to examine how the dynamical features of a classical Hamiltonian system are reflected in the behaviour of the PDE that governs its quantisation. The quantum ergodicity theorem of Shnirelman, Colin de Verdière, and Zelditch is a cornerstone theorem in this field, which states that if a classical Hamiltonian system has ergodic flow with respect to the Liouville measure, then the quantum evolution satisfies an analogous equidistribution property. Much less well understood is the quantum evolution of mixed systems, whose phase spaces divide into multiple invariant subsets, only some of which are ergodic. In this talk I will present my recent result which establishes a longstanding conjecture of Percival for the simplest such system, the mushroom billiard.

Back to Program

### Hilbert spaces of functions that are sums of finite differences

Rodney Nillsen (University of Wollongong)

#### Abstract

Finite differences have long been used to approximate derivatives of functions. It may be less well known that on the first order Sobolev space of the circle group or the real line, the derivative of a function is equal to the sum of three first order differences, and that this number three is sharp – that is, the statement fails with two differences in place of three. Such results involve the behaviour of the Fourier transform of a function near the origin or, in the case of other differential operators, the zeros of the Fourier transform of a function on a given subset of the integers, or the behaviour of the Fourier transform near a subset of the real line. In this talk, Hilbert spaces arising from finite sums of differences and generalised differences on the circle group and the real line will be discussed, and some associated sharpness results described.

Back to Program

### Riesz transforms in the absence of preservation

Joshua Peate (Macquarie University)

#### Abstract

Riesz transforms have long been studied in mathematical analysis. A common condition for $L^p$ boundedness of Riesz transforms, $p>2$, is a preservation condition. Suppose now that the preservation condition does not hold: $e^{-tL}1 \neq 1$, and consider the boundedness of a generalised Riesz transform $\nabla L^{-1/2}$ defined on this space. In this talk I will discuss methods of proving $L^p$ boundedness, $p>2$, of such a transform on such a space. These methods are highly focused on heat kernel bounds and various analytical heat kernel properties. Applications will be to subsets of $\mathbb{R}^n$, and a particular reliance on types of Gaffney and Hardy operators will be discussed.

Back to Program

### Non-autonomous parabolic systems with rough coefficients

Pierre Portal (Australian National University)

#### Abstract

In the late 1950s, two schools obtained a series of results on non-autonomous linear parabolic equations with bounded measurable (in space and time) coefficients. Lions' school, on the one hand, used form methods.
They could handle systems, but had to work with $L_2$ data. Nash and Aronson, on the other hand, used heat kernel methods, and obtained regularity results that allowed them to handle $L_p$ data. Their method, however, could not be applied to systems. Ever since, results for systems and $L_p$ data have been limited to coefficients that are Hölder regular in time.

Pascal Auscher, Sylvie Monniaux, and I have developed, over the past four years, a new approach to the problem that lifts these limitations. For $p \ge 2$, we obtain existence and uniqueness results for systems with bounded measurable coefficients in space and time, and $L_p$ data. For a range of values of $p$ below $2$, we obtain existence and uniqueness results for $L_p$ data, and systems with coefficients that are $BV$ in time and bounded measurable in space. Our approach has its origin in elliptic boundary value problems on rough domains. We consider our evolution equation as a boundary value problem in a rough space-time domain, and extend singular integrals, functional calculus, and Hardy space methods originally designed for elliptic boundary value problems.

Back to Program

### Periodic-parabolic eigenvalue problems with a large parameter

Christopher Thornett (University of Sydney)

#### Abstract

We investigate a periodic-parabolic problem with a parameterised weight function. We are particularly interested in the behaviour of the principal eigenvalue and associated eigenfunction as the parameter goes to infinity. The principal eigenvalue has already been studied by Hess, Du and Peng, but rather than approach the problem directly we instead look at the evolution problem and associated evolution operator. This allows for greater generality and gives more information about the associated positive eigenfunction; in particular, if the weight function has regular enough support, the limiting problem is in some sense just the evolution problem on a restricted, non-cylindrical domain. This has applications to periodic-parabolic logistic-type population problems.

Back to Program

## Organisers

Daniel Daners (USyd), Ian Doust (UNSW), Xuan Duong (Macquarie), Daniel Hauer (USyd), Andrew Hassell (ANU), Jeff Hogan (Newcastle), Bishnu Lamichhane (Newcastle), Ji Li (Macquarie), James McCoy (Wollongong), Alan McIntosh (ANU), Pierre Portal (ANU), Adam Sikora (Macquarie), Glen Wheeler (Wollongong), Valentina-Mira Wheeler (Wollongong)

## Travel Support available from AMSI

This event is sponsored by the Australian Mathematical Sciences Institute (AMSI). AMSI allocates a travel allowance annually to each of its member universities. Students or early career researchers from AMSI member universities without access to a suitable research grant or other source of funding may apply to the Head of Mathematical Sciences for subsidy of travel and accommodation out of the departmental travel allowance. For applications for travel funding, please see research.amsi.org.au/travel-funding.
## Abstract

Though counseling is one commonly pursued intervention to improve college enrollment and completion for disadvantaged students, there is relatively little causal evidence on its efficacy. We use a regression discontinuity design to study the impact of intensive college counseling provided by a Massachusetts program to college-seeking, low-income students that admits applicants partly on the basis of a minimum grade point average requirement. Counseling shifts enrollment toward four-year colleges that are less expensive and have higher graduation rates than alternatives students would otherwise choose. Counseling also improves persistence through at least the second year of college, suggesting a potential to increase the degree completion rates of disadvantaged students.

## 1. Introduction

Although college enrollment among low-income students has increased steadily over the last decade, the share of students from the lowest-income families that enroll in college continues to lag considerably behind college entry rates among the highest-income students (Baum, Ma, and Payea 2013). Furthermore, gaps in college completion by family income have only widened over time; among students who graduated high school in the late 1990s and early 2000s, 54 percent of students from the highest-income quartile had earned a bachelor's degree by age 25 compared with only 9 percent of students from the lowest-income quartile (Bailey and Dynarski 2011).

Despite substantial economic returns associated with completing college—especially for low-income students—there are various financial and informational barriers that may prevent economically disadvantaged students from accessing higher education at all, or from selecting institutions that are well matched to their abilities and circumstances. Lower-income students and their families tend to overstate the net costs of going to college, may have difficulty identifying the full set of colleges and universities to which they would be academically admissible, and may not understand the variation in college quality or affordability among different higher education institutions (Horn, Chen, and Chapman 2003; Avery and Kane 2004; Grodsky and Jones 2007; Hoxby and Avery 2013; Hoxby and Turner 2013). Students may also be uncertain about where they can access professional assistance with college or financial aid applications, and as a result may forego completing these applications entirely or may miss out on key deadlines (Bettinger et al. 2012; Hoxby and Turner 2013; Castleman and Page 2016).

Policy interventions to ameliorate socioeconomic inequalities in college entry and success have historically focused on increasing college access among students from economically disadvantaged backgrounds. The earnings premia associated with college primarily accrue, however, not based on whether students have completed some college but rather based on whether they earn a degree (Baum, Ma, and Payea 2013). This relationship between earnings and degree attainment, combined with growing concerns about loan debt that students accumulate in order to pursue higher education, has prompted heightened focus on whether students are attending institutions where they are well positioned for success.
Recent research suggests that students who attend higher-quality institutions, as measured by institutional characteristics like six-year graduation rates, are more likely to persist in college and earn a degree (Hoxby and Turner 2013; Cohodes and Goodman 2014; Goodman, Hurwitz, and Smith 2017). At the same time, as many as half of low-income students neither apply to nor attend the quality of institution at which they appear admissible based on their academic credentials (Bowen, Chingos, and McPherson 2009; Hoxby and Avery 2013; Smith, Pender, and Howell 2013).

A more recent set of policy interventions has emerged to: (1) guide students to choose colleges where they have a good probability of earning a degree without incurring excessive debt, and (2) provide ongoing support to students once they have matriculated in college. One example of these policy interventions is to provide high-achieving, low-income students with customized information about their postsecondary options, which can result in students attending and persisting at higher-quality institutions (Hoxby and Turner 2013). Although this type of low-touch, informational intervention has received considerable attention and interest, many communities rely on more intensive college advising models to improve both overall college access and choice among low-income students. These interventions are typically run by community-based nonprofit organizations, and provide individualized guidance to students throughout the college search, application, and financial aid processes.

Though community-based college advising programs have existed for decades, there is relatively little causal evidence documenting their impact on important student outcomes, including the quality and affordability of the institution at which students enroll and whether they persist and succeed in college. Existing research evidence is mixed. Recent pilot experiments suggest that intensive college advising can substantially increase enrollment at four-year institutions, though these studies have not followed students longitudinally to investigate whether the advising contributes to improved persistence and success (Avery 2010, 2013). Similarly, providing students with intensive peer mentoring during the second half of their senior year can substantially increase the share of students who enroll and persist in college (Carrell and Sacerdote 2013). An experimental evaluation of the federally funded Upward Bound program failed, however, to find any improvement in students' postsecondary outcomes (Seftor, Mamun, and Schirm 2009). Hurwitz and Howell (2014) exploit maximum student–counselor ratios to generate regression discontinuity estimates showing that additional high school counselors increase four-year college enrollment rates, though their estimates are somewhat imprecise.1

Additional rigorous evidence on the efficacy of intensive college advising programs would be of considerable value to researchers and policy makers. Although these programs cost much more than low-cost informational interventions, they may be more effective at improving postsecondary pathways for a more academically mainstream population of students. And to the extent they contribute to meaningful increases in degree attainment, the long-term benefits may justify sizeable upfront expenditures. To address this gap in the literature, we evaluate the impact of an intensive college advising program—called Bottom Line—on low-income students' college enrollment and persistence.
Bottom Line, which operates programs in Boston and Worcester, Massachusetts, provides advising throughout the senior year of high school. Its advisors meet individually with students to develop lists of well-matched colleges and universities to which they can apply. Advisors help students complete their college and financial aid applications and, once students have received acceptances, assist students in choosing which college to attend. A somewhat unique feature of the Bottom Line model is its emphasis on encouraging students to apply to and attend a set of twenty or so target colleges and universities. Bottom Line has identified these schools as institutions where students have a similar probability of graduating as at other commonly attended institutions, while facing lower average net costs without incurring excessive loan debt. For instance, one of the target institutions, Framingham State University, a public four-year university, has a 51 percent six-year graduation rate and an average net price of $17,552. For students who enroll at one of the target institutions, Bottom Line continues to provide individualized, campus-based support for up to six years following high school. Bottom Line also discourages students from attending institutions where prior cohorts of students have either struggled to graduate or have accumulated substantial debt. An example of a discouraged institution is Curry College, a private four-year university where the graduation rate is 44 percent and the average net price is $30,561. Bottom Line thus strives to affect not only whether students enroll in college but where they enroll as well.

We exploit the fact that Bottom Line admits applicants partly on the basis of a minimum grade point average (GPA) requirement, a requirement that the organization does not extensively publicize and of which, our empirical evidence suggests, students are largely unaware. We implement a regression discontinuity design comparing students just above and below this threshold and find that counseling successfully shifts enrollment toward the four-year colleges encouraged by Bottom Line, which are largely public and substantially less expensive than alternatives students would otherwise choose. We also find evidence that counseling improves persistence through at least the second year of college, suggesting potential to increase the degree completion rates of disadvantaged students.

We organize the remainder of the paper as follows. In section 2, we discuss Bottom Line and its college counseling programs. In section 3, we describe our data and empirical strategy. In section 4, we present our results. In section 5, we conclude with a discussion of these findings and their implications for policy, practice, and further research.

## 2.  Bottom Line

Bottom Line was founded in Boston, Massachusetts, in 1997 and provides support to students who attend a variety of public and charter high schools in Boston and Worcester.2 It offers two types of services: an Access Program that helps students enroll in college and a Success Program that helps students persist in commonly attended regional colleges. Students apply to the Access Program during the second half of their junior year of high school. Bottom Line works extensively with schools and community-based organizations in each city to promote the program and to encourage students to apply.
Bottom Line collects a substantial amount of self-reported academic and demographic information from students, but admissions decisions to the Access program are based primarily on students' family income, first-generation college-going status, and cumulative GPA as of junior year in high school. Once students complete the initial Bottom Line application, Bottom Line staff review the applications and determine whether, based on students' self-reported information, they appear to meet the family income and GPA requirements for admission to the program. Bottom Line targets students whose family income is below 200 percent of the federal poverty guidelines and whose high school GPA is 2.5 or higher. The latter requirement is to ensure that students are academically ready for college-level work. Students who appear to meet these thresholds are invited to bring copies of their parents' tax returns and their high school transcripts to verify their income and GPA. Upon confirmation of eligibility, Bottom Line officially admits students to the program.

Bottom Line starts working with students admitted to the Access program between the end of their junior year and the start of their senior year of high school. Each student is assigned to a counselor employed full-time by Bottom Line and, by senior year, meets with that counselor for an hour every two to three weeks during the application season. These meetings take place outside of school at the Bottom Line offices. Bottom Line advisors do not directly collaborate with students' school counselors, but do interact with students' parents as needed—for example, around financial aid forms. The counselors help seniors navigate the college application process by assisting them with creating lists of potential schools, writing essays, completing applications, applying for financial aid, searching for scholarships, resolving any problems that arise and, finally, selecting a suitable college.

What differentiates Bottom Line from school-based college counseling and from other programs in the community is its intensive focus on college choice and affordability—helping students find affordable colleges where they can succeed, in part by encouraging students to consider colleges and universities where prior cohorts have been successful. Much of Bottom Line advisors' time with students during the fall semester is spent working on college list formation. Advisors actively work with students to identify schools where they appear to be a good academic match based on their high school record and where they are likely to face manageable costs net of financial aid they receive. In the spring semester Bottom Line advisors help students complete financial aid applications and actively work with students to interpret financial aid award letters they receive from colleges to which they have been admitted, with the goal of helping students make informed financial choices about where they choose to enroll.

At the end of senior year, students in the Access program are invited to continue into the Success program if they plan to attend one of the roughly twenty colleges and universities where Bottom Line provides ongoing campus-based support to students. Within a given cohort of Access seniors, approximately 70 percent choose to attend one of these "encouraged" colleges, and only a small percentage of these choose not to continue in the Success program.
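As a concrete illustration of the income and GPA screen described earlier in this section, the toy function below applies the two eligibility criteria. This is only a sketch: the poverty-guideline constants are the 2013 federal figures for the 48 contiguous states, included by us so the example runs, and are not values taken from Bottom Line's materials.

```python
# Toy encoding of the Access program's stated screen: family income below
# 200 percent of the federal poverty guideline and a junior-year GPA of at
# least 2.5. The guideline constants are the 2013 values for the 48
# contiguous states and are illustrative assumptions, not paper figures.

def poverty_guideline_2013(family_size: int) -> int:
    """Federal poverty guideline (2013) for a household of the given size."""
    return 11_490 + 4_020 * (family_size - 1)

def appears_eligible(family_income: float, family_size: int, gpa: float) -> bool:
    """Apply the income and GPA criteria described in the text."""
    income_ok = family_income < 2.0 * poverty_guideline_2013(family_size)
    gpa_ok = gpa >= 2.5
    return income_ok and gpa_ok

# A family of four earning $40,000 with a 2.8 GPA student passes the screen,
# since 200 percent of the guideline for a household of four is $47,100.
print(appears_eligible(40_000, 4, 2.8))  # True
```

In practice, of course, self-reported figures are only a first screen; as described above, Bottom Line verifies income and GPA against tax returns and transcripts before formally admitting students.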
Bottom Line selected these institutions as ones to encourage student enrollment based on where early participants in the Access program had the greatest track records of persistence and success without incurring substantial debt. As mentioned above, Bottom Line also discourages students from attending institutions where prior cohorts of students have struggled to succeed or where students had to assume substantial debt to fund the cost of attendance. Appendix table A.1 shows the list of encouraged and discouraged colleges. Data from the Integrated Postsecondary Education Data System (IPEDS) suggests that the average encouraged college's six-year graduation rate is 66 percent, compared with 40 percent for the discouraged colleges. Encouraged colleges are also substantially less expensive, charging an average tuition of $23,500 annually compared with $29,800 for the discouraged colleges. Encouraged colleges are split between public and private institutions and are relatively large, whereas discouraged colleges are all private and relatively small.

Through the Success Program, Bottom Line first provides transitional programming each summer for rising college students, discussing how to read a college syllabus or what to expect from life on a college campus, among other topics. College students are then advised and mentored on campus for up to six years by Bottom Line counselors to ensure that students have the support they need to earn a degree. First-year students meet with Bottom Line counselors approximately three to four times per semester, and older students meet with a counselor twice a semester on average. The support focuses on academic, financial, career, and personal challenges.

## 3.  Data and Empirical Strategy

### Data

Data for this analysis come from Bottom Line, from the Massachusetts Department of Elementary and Secondary Education (DESE), and from IPEDS. Bottom Line's data include all information it receives from students during their application process, as well as data it generates during its selection process. We know each applicant's full name, high school, and high school class, and a small number of other self-reported characteristics, including GPA and family size.3 Using each applicant's name, high school, and class, we merge Bottom Line's data to DESE's administrative data on all Massachusetts public school students. Our match rate exceeds 93 percent. Of the 7 percent of students who are unmatched, half are enrolled in private schools or are missing school information entirely. The other half either have common names, which prevents us from uniquely identifying them in DESE's data, or have names that do not match DESE's records, whether due to misspelling, use of nicknames, or other non-legal names. We verify both that unmatched applicants look demographically quite similar to the sample as a whole and that match rates are unrelated to treatment status. We are thus unconcerned that our subsequent results are confounded by the small number of unmatched applicants.

DESE's data contain demographic characteristics such as gender, race, and low-income status, as well as indicators for various educational designations, such as English as a second language, limited English proficiency, and special and vocational education status. DESE has also merged data on its high school students with National Student Clearinghouse (NSC) data that tracks college enrollment throughout the United States.
Research by Dynarski, Hemelt, and Hyman (2015) suggests that NSC coverage rates are around 95 percent for recent Massachusetts cohorts, implying that nearly all college enrollment of our applicants should be captured by these data. In particular, other than colleges that specialize in theology, art, music, or law, every public and private four-year college in Massachusetts that is listed in IPEDS also appears in the NSC during the time period we are studying. The NSC identifies which, if any, college a student is enrolled in at any moment in time. It also identifies whether colleges are four-year or two-year, and public or private. We supplement this with data from IPEDS that measure for each college institutional characteristics such as six-year graduation rates and average net prices paid by enrolled students.

We limit the analysis sample to students with valid self-reported GPAs between 1.0 and 4.0 in order to exclude a small number of cases with extreme values far from the eligibility threshold of 2.5. The resulting sample, shown in the first column of table 1, consists of the nearly 5,000 Bottom Line applicants from the high school classes of 2010 through 2014 who had GPAs between 1.0 and 4.0 and who were successfully merged to DESE's data. Sample characteristics are shown in panel A. Four-fifths are low-income students, as measured by receipt of subsidized lunch. Over two-thirds are black or Hispanic and a similar proportion are female. Over half speak a language at home other than English. We refer to such students as English as a second language (ESL) students. Panel B shows that the average GPA of a Bottom Line applicant is 3.08. During this period, Bottom Line accepted 55 percent of its applicants for counseling.

Table 1. Summary Statistics

| | (1) 2010–14 | (2) 2010–12 |
| --- | --- | --- |
| **Panel A: Demographics** | | |
| Low income | 0.80 | 0.78 |
| Black | 0.41 | 0.40 |
| Hispanic | 0.28 | 0.30 |
| Asian | 0.21 | 0.19 |
| White | 0.08 | 0.09 |
| Female | 0.68 | 0.69 |
| ESL | 0.54 | 0.51 |
| Boston site | 0.82 | 0.78 |
| **Panel B: Treatment Variables** | | |
| GPA | 3.08 | 3.06 |
| Counseled by Bottom Line | 0.55 | 0.58 |
| **Panel C: College Enrollment** | | |
| Four-year college | 0.66 | 0.64 |
| Two-year college | 0.11 | 0.10 |
| Encouraged college | 0.50 | 0.47 |
| Discouraged college | 0.04 | 0.04 |
| N | 4,992 | 2,730 |

Notes: Mean values of selected variables are shown for Bottom Line applicants whose GPA is between 1.0 and 4.0. Columns 1 and 2, respectively, contain the high school classes of 2010–14 and 2010–12. Panel C shows college enrollment outcomes in the fall following high school graduation.

College enrollment outcomes in the fall immediately following high school graduation are shown in panel C.4 Given their family backgrounds, these students have high rates of enrollment, with 66 percent enrolling in four-year colleges and another 11 percent enrolling in two-year colleges. Three-fourths of the students who enroll in a four-year college do so in one of the institutions encouraged by Bottom Line. Only 4 percent enroll in one of the colleges discouraged by the organization. Our initial analysis will focus on these immediate college enrollment outcomes for the five cohorts represented in our data.
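To make the outcome construction concrete, the sketch below builds enrollment and persistence indicators from NSC-style spell records, following the paper's convention that fall enrollment means a spell covering 1 October and spring enrollment a spell covering 1 March (see note 4). The column names and the tiny example records are hypothetical stand-ins for the actual data.

```python
import pandas as pd

# Hypothetical NSC-style spell records: one row per enrollment spell.
spells = pd.DataFrame({
    "student_id":   [1, 1, 2],
    "college_type": ["4yr", "4yr", "2yr"],
    "spell_start":  pd.to_datetime(["2010-09-01", "2011-01-15", "2010-09-05"]),
    "spell_end":    pd.to_datetime(["2010-12-20", "2011-05-10", "2010-10-15"]),
})

def enrolled_on(df: pd.DataFrame, date: str, college_type: str) -> pd.Series:
    """Per-student indicator for a spell of the given type covering `date`."""
    d = pd.Timestamp(date)
    covered = (df["college_type"].eq(college_type)
               & (df["spell_start"] <= d) & (df["spell_end"] >= d))
    return covered.groupby(df["student_id"]).any()

# Fall enrollment is a spell covering 1 October; spring, one covering 1 March.
fall_4yr   = enrolled_on(spells, "2010-10-01", "4yr")
spring_4yr = enrolled_on(spells, "2011-03-01", "4yr")

# Persistence-style measures follow directly: total semesters enrolled so far
# and an indicator for continuous enrollment through both semesters.
semesters  = fall_4yr.astype(int) + spring_4yr.astype(int)
continuous = fall_4yr & spring_4yr
print(semesters.to_dict(), continuous.to_dict())
```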
We later limit the sample to the first three cohorts, the high school classes of 2010 through 2012, for whom we can observe enrollment spells for at least three academic years following high school graduation. We use such observations to measure persistence in college for these earliest three cohorts. The second column of table 1 shows that those first three cohorts are quite similar to the sample as a whole.

Table 2. First Stage Impact of GPA Eligibility on Counseling

| | (1) Counseled by Bottom Line | (2) Only Access Program | (3) Success + Access Program | (4) Counseled, No Controls | (5) Counseled, Donut Hole | (6) Counseled, Bandwidth of 1.0 |
| --- | --- | --- | --- | --- | --- | --- |
| **Panel A: 2010–14 Cohorts** | | | | | | |
| Eligible | 0.249*** | 0.085*** | 0.163*** | 0.248*** | 0.291*** | 0.185*** |
| | (0.044) | (0.019) | (0.033) | (0.043) | (0.045) | (0.046) |
| Control mean | 0.22 | 0.12 | 0.09 | 0.22 | 0.22 | 0.22 |
| N | 4,992 | 4,992 | 4,992 | 4,992 | 4,546 | 3,780 |
| **Panel B: 2010–12 Cohorts** | | | | | | |
| Eligible | 0.301*** | 0.134*** | 0.166*** | 0.293*** | 0.372*** | 0.266*** |
| | (0.045) | (0.031) | (0.027) | (0.043) | (0.037) | (0.052) |
| Control mean | 0.21 | 0.18 | 0.04 | 0.21 | 0.21 | 0.21 |
| N | 2,730 | 2,730 | 2,730 | 2,730 | 2,459 | 2,100 |

Notes: Robust standard errors clustered by distance from the GPA threshold are in parentheses. Coefficients in columns 1–3 come from regressions of the listed outcome on an indicator for GPA eligibility, distance from the GPA threshold, the interaction of those two, a Worcester site indicator, high school class fixed effects and the set of demographic controls shown in table A.2, using a bandwidth of 1.5 GPA points. Panel A includes the high school classes of 2010–14, and panel B includes the classes of 2010–12. The outcome in column 1 is an indicator for being counseled by Bottom Line. Columns 2 and 3 separate this treatment status into Bottom Line's two programs, Access and Success. Also listed is the mean value of each outcome for students with GPAs between 2.3 and 2.5. Columns 4–6 replicate column 1, respectively, removing the demographic controls, excluding observations less than 0.1 GPA point from the threshold, and limiting the bandwidth to 1.0 GPA point. ***p < 0.01.

### Empirical Strategy

Whether Bottom Line is partly responsible for the high observed college enrollment rates is one key question of interest here. Evaluating the impact of college counseling is generally difficult because the quantity and quality of guidance available to a given student is correlated with numerous other determinants of enrollment and persistence, including school quality, parental involvement, and the student's own aspirations. We address this challenge by exploiting the fact that, as part of its selection process, Bottom Line uses a GPA threshold of 2.5 as one criterion for determining which students are eligible for its services. It uses this threshold to help identify students whose high school transcripts suggest they have the potential to succeed in a four-year college.
We use this GPA threshold to implement a regression discontinuity design (RD) that compares the college outcomes of students just above and below that threshold. Such students should be nearly identical in terms of academic skills, as measured by GPA, as well as other characteristics, a fact we verify empirically below. They should differ only in their access to the college counseling services provided by Bottom Line. We generate our estimates of the impact of intensive college counseling in the following way. The reduced form version of our baseline specification is a local linear regression of the form

$$College_{isc} = \beta_0 + \beta_1\,Eligible_{isc} + \beta_2\,GPA_{isc} + \beta_3\,(Eligible_{isc} \times GPA_{isc}) + \gamma_c + X_{isc}'\delta + \varepsilon_{isc}. \tag{1}$$

Here, College measures various college outcomes for student i in high school s and graduating class c. Eligible indicates whether that student is above the GPA eligibility threshold, GPA measures his distance from the threshold in GPA points, and Eligible × GPA is the interaction of those two variables. The two GPA variables model the relationship between GPA and college outcomes as linear, allowing that slope to vary on either side of the threshold. The coefficient on Eligible thus measures the difference in college outcomes between students just above and just below that threshold. Graduating class fixed effects control for year-specific differences in college outcomes that affect all students similarly. Student-level controls X include indicators for gender, race, low-income status, ESL status, limited English proficiency status, vocational education status, and special education status, as well as an indicator for whether the student is at Bottom Line's Boston or Worcester site.

Bottom Line rejected some students above the GPA threshold and accepted others below it. As a result, the coefficients from the reduced form specification generate intent-to-treat estimates of the impact of increased eligibility for counseling on college outcomes. We are, however, interested in the impact of counseling itself. We therefore present estimates from a fuzzy RD in which we instrument the probability of treatment with GPA eligibility. Our first stage regression has the form

$$Counseled_{isc} = \alpha_0 + \alpha_1\,Eligible_{isc} + \alpha_2\,GPA_{isc} + \alpha_3\,(Eligible_{isc} \times GPA_{isc}) + \gamma_c + X_{isc}'\delta + \nu_{isc}, \tag{2}$$

where Counseled indicates acceptance into the Bottom Line program.5 We then estimate treatment impacts by running regressions of the form

$$College_{isc} = \theta_0 + \theta_1\,\widehat{Counseled}_{isc} + \theta_2\,GPA_{isc} + \theta_3\,(Eligible_{isc} \times GPA_{isc}) + \gamma_c + X_{isc}'\delta + \epsilon_{isc}, \tag{3}$$

where students' engagement with counseling has been instrumented using the first stage equation above. The counseling coefficient thus estimates a local average treatment effect for students granted access to Bottom Line's program because of GPA eligibility. Following Lee and Card (2008), our baseline specification for these instrumental variables estimates clusters standard errors by distance from the GPA threshold because GPA is a fairly discrete variable, with well over half of students reporting values that are multiples of 0.1. Because of the relatively small sample size, we use as a default a bandwidth of 1.5 GPA points, including GPAs of 1.0 to 4.0, which captures all but the lowest and highest GPAs. We show later that our results are robust to using a smaller bandwidth of 1.0, which corresponds closely to the optimal bandwidth suggested by Imbens and Kalyanaraman (2012), though precision decreases given the sample size.
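To make these estimating equations concrete, the sketch below computes the first stage, the reduced form, and the implied fuzzy RD estimate on simulated data. The data-generating process, magnitudes, and the choice of statsmodels are all illustrative assumptions; the shortcut used here relies on the fact that, with a single instrument and a single endogenous variable (and the same exogenous controls in both regressions), the 2SLS coefficient equals the reduced-form jump divided by the first-stage jump.

```python
import numpy as np
import statsmodels.api as sm

# Simulated applicants: GPA distance from the 2.5 cutoff, imperfect
# compliance (eligibility raises counseling probability by ~25 pp), and a
# binary outcome whose true LATE is 0.20. All numbers are invented.
rng = np.random.default_rng(0)
n = 5_000
gpa = np.round(rng.uniform(1.0, 4.0, n), 1) - 2.5   # running variable
eligible = (gpa >= 0).astype(float)
counseled = (rng.uniform(size=n) < 0.20 + 0.25 * eligible).astype(float)
outcome = (rng.uniform(size=n) < 0.30 + 0.20 * counseled + 0.05 * gpa).astype(float)

# Local linear specification: an eligibility jump plus separate slopes on
# each side of the cutoff, with standard errors clustered by distance from
# the threshold in the spirit of Lee and Card (2008).
X = sm.add_constant(np.column_stack([eligible, gpa, eligible * gpa]))
clusters = {"groups": np.rint(gpa * 10).astype(int)}
first = sm.OLS(counseled, X).fit(cov_type="cluster", cov_kwds=clusters)
reduced = sm.OLS(outcome, X).fit(cov_type="cluster", cov_kwds=clusters)

late = reduced.params[1] / first.params[1]  # just-identified 2SLS estimate
print(f"first stage {first.params[1]:.3f}, "
      f"reduced form {reduced.params[1]:.3f}, fuzzy RD {late:.3f}")
```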
Validity of our RD estimates requires that students not systematically manipulate on which side of the GPA threshold they fall. Such a problem would arise if students, in order to participate in Bottom Line, inflated their GPAs because of knowledge of the GPA admissions threshold.6 Another potential problem would arise if knowledge of the threshold differentially affected the number or type of students choosing to apply to Bottom Line on either side of that threshold. Although conversations with the organization suggest that the GPA threshold was not widely publicized to students, we have little actual evidence about the extent of students' awareness of it. As such, we cannot rule out the potential for bias due to manipulation or selection.

We can, however, test whether the density of students just above the threshold looks similar to the density just below the threshold, as suggested by McCrary (2008). Such tests show no evidence that GPAs just above 2.5 are over-represented relative to GPAs just below 2.5, suggesting no obvious manipulation by students. Appendix figure A.1 shows the distribution of GPAs graphically. Low GPAs are less common than high ones and multiples of 0.25 are particularly common, but the distribution of GPAs looks no different around the eligibility threshold than around other multiples of 0.25. To rule out the possibility that students below the threshold report GPAs of 2.5 in order to qualify for counseling, we show that our results are robust to the exclusion of students within 0.1 GPA points of the threshold, a so-called "donut hole" regression discontinuity.

We also confirm that nearly all observable covariates are balanced across the threshold by running our reduced form specification using such covariates as outcomes. Table A.2 shows the results of these covariate balance tests, with panel A including all five cohorts and panel B including the earliest three cohorts. In each case, of the ten variables tested, nine show no clear imbalance across the threshold and the remaining one is likely due to chance. The magnitudes of any covariate imbalances are small enough that controlling for such covariates has nearly no effect on our estimated impacts, as we show later in our robustness checks. The balance of density and covariates at the threshold suggests that students on either side of the threshold are similar along both observable and unobservable dimensions. Our RD coefficients should therefore provide unbiased estimates of the impact of intensive counseling on college outcomes.
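As a rough illustration of the idea behind the density test (the formal McCrary procedure instead fits local linear densities on each side of the cutoff), the snippet below simply compares applicant counts in narrow GPA bins on either side of 2.5. The simulated GPAs are, again, invented for the example.

```python
import numpy as np
from scipy import stats

def density_check(gpa: np.ndarray, cutoff: float = 2.5, width: float = 0.1):
    """Compare counts just below and just above the cutoff.

    Absent manipulation (and with a locally smooth density), the two
    counts should be roughly equal; a binomial test quantifies this.
    This is only a crude stand-in for the formal McCrary (2008) test.
    """
    below = int(np.sum((gpa >= cutoff - width) & (gpa < cutoff)))
    above = int(np.sum((gpa >= cutoff) & (gpa < cutoff + width)))
    pval = stats.binomtest(above, above + below, 0.5).pvalue
    return below, above, pval

rng = np.random.default_rng(1)
gpa = np.round(rng.uniform(1.0, 4.0, 5_000), 2)  # no bunching by construction
print(density_check(gpa))
```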
## 4.  College Enrollment and Persistence

### First Stage Results

The GPA threshold provides a substantial source of exogenous variation in the probability that a given student is counseled by Bottom Line. Figure 1, which graphs the relationship between treatment probability and GPA, shows a clear discontinuity at the threshold. Table 2 presents regression-based estimates of that first stage relationship. For the full five cohorts, students just above the threshold are 25 percentage points more likely to receive counseling from Bottom Line than students just below the threshold. This represents roughly a doubling in treatment probability across the threshold. The F-statistic associated with that coefficient exceeds 30, well above the value of 10 suggested by Staiger and Stock (1997) to rule out a weak instrument. For the earliest three cohorts, GPA eligibility also provides a strong instrument, raising treatment probability by 30 percentage points.

Figure 1. First Stage Relationship between GPA and Intensive College Counseling.

The coefficients in column 1 will serve as our first-stage estimates for subsequent instrumental variables analyses. We show in columns 2 and 3 that the increase in treatment probability comes from an increase in both the Access program, which focuses on the application and initial enrollment process, and the Success program, which continues to counsel students after they enroll at Bottom Line's encouraged colleges. Program choice within Bottom Line is itself likely endogenous to the initial counseling process because only students who enroll at encouraged colleges are eligible for the Success program. Nonetheless, we show these estimates to highlight that the counseling treatment studied here is really the combination of two programs, one of which emphasizes initial enrollment and the other of which emphasizes persistence. The last three columns of table 2 show that the magnitude of our first stage estimates is unchanged by exclusion of demographic controls, grows slightly in the donut hole specification excluding students immediately on the threshold, and shrinks slightly but remains a strong instrument if the bandwidth is reduced to 1.0 GPA point. Our source of exogenous variation in the probability of receiving intensive college counseling is thus relatively insensitive to empirical specification.

### Initial Enrollment Impacts

Figure 2 shows the reduced-form relationship between GPA and enrollment in one of Bottom Line's encouraged colleges. The visually apparent discontinuity implies that Bottom Line is inducing substantial numbers of students to enroll in such colleges. We confirm this in table 3, which shows instrumental variable estimates of the impact of Bottom Line's counseling treatment on various college enrollment outcomes as measured in the fall immediately following high school graduation. Below each coefficient is the control complier mean, computed as suggested by Abadie, Angrist, and Imbens (2002) and Abadie (2003). This measures the expected value of the outcome variable for untreated compliers, those who would have received counseling if not for being disqualified by their GPA.
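Concretely, and under the simplifying assumption of no covariates, the control complier mean can be computed with Abadie-style kappa weights, as in the sketch below. The simulated data and variable names are illustrative only.

```python
import numpy as np

def control_complier_mean(y, d, z):
    """E[Y(0) | complier] via kappa_0 = (1 - D)(p - Z) / (p(1 - p)),
    where p = P(Z = 1). Assumes instrument validity and monotonicity."""
    p = z.mean()
    kappa0 = (1 - d) * (p - z) / (p * (1 - p))
    return float(np.sum(kappa0 * y) / np.sum(kappa0))

# Simulated example: Z is GPA eligibility, D is receiving counseling.
rng = np.random.default_rng(2)
n = 10_000
z = rng.integers(0, 2, n).astype(float)
complier = rng.uniform(size=n) < 0.3       # 30% compliers, rest never-takers
d = (complier & (z == 1)).astype(float)
y = 0.2 + 0.3 * complier + 0.4 * d + rng.normal(0, 0.1, n)
print(control_complier_mean(y, d, z))      # ~0.5, the untreated complier mean
```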
Table 3. Impact of Counseling on Initial College Choice

| | (1) Bandwidth = 1.5, With Controls | (2) Bandwidth = 1.5, No Controls | (3) Bandwidth = 1.5, Donut Hole | (4) Bandwidth = 1.0, With Controls |
| --- | --- | --- | --- | --- |
| Encouraged college | 0.515*** | 0.506*** | 0.689*** | 0.463** |
| | (0.133) | (0.136) | (0.131) | (0.215) |
| CCM | 0.22 | 0.22 | 0.07 | 0.27 |
| Discouraged college | −0.226*** | −0.224*** | −0.262*** | −0.315*** |
| | (0.068) | (0.068) | (0.074) | (0.111) |
| CCM | 0.26 | 0.26 | 0.31 | 0.36 |
| Two-year college | −0.259** | −0.246* | −0.198* | −0.170 |
| | (0.122) | (0.131) | (0.114) | (0.212) |
| CCM | 0.32 | 0.31 | 0.30 | 0.24 |
| Four-year college | 0.202 | 0.181 | 0.214* | 0.138 |
| | (0.128) | (0.133) | (0.124) | (0.222) |
| CCM | 0.55 | 0.56 | 0.51 | 0.63 |
| Any college | −0.057 | −0.066 | 0.016 | −0.033 |
| | (0.157) | (0.152) | (0.152) | (0.238) |
| CCM | 0.87 | 0.87 | 0.80 | 0.88 |
| Lower cost four-year college | 0.388*** | 0.374*** | 0.360*** | 0.318* |
| | (0.112) | (0.114) | (0.105) | (0.177) |
| CCM | 0.23 | 0.23 | 0.24 | 0.30 |
| High graduation rate four-year college | 0.209* | 0.187 | 0.218* | 0.216 |
| | (0.121) | (0.127) | (0.119) | (0.200) |
| CCM | 0.27 | 0.28 | 0.19 | 0.29 |
| N | 4,992 | 4,992 | 4,546 | 3,780 |

Notes: Robust standard errors clustered by distance from the GPA threshold are in parentheses. Coefficients come from regressions of the listed outcome on an indicator for Bottom Line counseling, where counseling has been instrumented with GPA eligibility as described in the text. The sample includes the high school classes of 2010–14. Outcomes are indicators for college enrollment in the fall immediately following high school graduation. Lower-cost colleges are those that the 2013 IPEDS lists as having average net price for aid-receiving students below $25,000. High-graduation rate colleges are those that the 2013 IPEDS lists as having six-year degree completion rates of at least 50 percent. Column 1 uses a bandwidth of 1.5 GPA points and includes the demographic controls listed in table A.2. Columns 2–4 replicate column 1, respectively removing the demographic controls, excluding observations less than 0.1 GPA point from the threshold, and limiting the bandwidth to 1.0 GPA point. Also listed is the control complier mean (CCM). *p < 0.10; **p < 0.05; ***p < 0.01.

Figure 2. Enrollment at Encouraged College.

Column 1 shows that treated students are 52 percentage points more likely to enroll in one of Bottom Line's encouraged colleges, relative to an enrollment rate at these colleges of 22 percent for compliers just below the GPA threshold.
Treatment also lowers the probability of enrolling in one of the discouraged colleges by 23 percentage points from a 26 percent baseline, suggesting that Bottom Line successfully discourages nearly all of its participants from choosing one of those colleges. Both of these estimates are statistically significant across all tested specifications, suggesting that Bottom Line is successful at directing students' enrollment behavior in the way it intends. Counseling also lowers the probability of enrolling in a two-year college by a statistically significant 26 percentage points. Enrollment in four-year colleges rises by 20 percentage points, though this estimate is statistically significant, and only marginally so, in just one of our four specifications. There is thus suggestive, though not conclusive, evidence that counseling causes some students to change the type of college they attend, from two-year to four-year. The two-year and four-year estimates offset each other, so that counseling appears to have little impact on the overall probability of college enrollment. Overall, we see clear evidence that Bottom Line's intensive college guidance effectively shifts students' enrollment away from two-year, or discouraged four-year, colleges, and toward four-year colleges the organization believes will be more successful at graduating those students.

One result of such shifting is that, conditional on enrolling in a four-year college, counseled students choose colleges with lower average net prices and perhaps higher graduation rates. In the penultimate row of table 3, we define a "lower cost" indicator for whether immediately after high school graduation a student enrolls in a four-year college with an average net price for aid-receiving students of under $25,000, thus avoiding some of the colleges where students are most likely to incur large amounts of debt. In the last row of table 3, we define a "high graduation rate" indicator for whether immediately after high school graduation a student enrolls in a four-year college with a six-year bachelor's degree completion rate of at least 50 percent. Counseling substantially and clearly increases the probability that a student enrolls in a lower cost four-year college, the reduced form graphical version of which is shown in figure 3. We also see suggestive evidence that counseling makes students more likely to choose high-graduation-rate colleges. Counseling thus appears to guide students toward colleges where they will likely incur less debt and will be more likely to graduate, as Bottom Line intends.

Figure 3. Enrollment in Lower Cost Four-Year Colleges.

### Persistence Impacts

Initial college enrollment is not the only outcome of interest, particularly given that many students who enroll in college do not persist and thus fail to complete their degrees. Bottom Line's Success program, which supports students throughout their time at encouraged colleges, is designed specifically to improve persistence. We can measure persistence through three years of college for the first three cohorts of students we observe, the high school classes of 2010–12.7 To estimate persistence effects, we measure for these three cohorts the impact of counseling on four-year college enrollment in the fall of the first year, the spring of the second year, and the spring of the third year after high school graduation.
We also measure the total number of fall and spring semesters in which a student has been enrolled in a four-year college by the spring of his third year, as well as the probability that he has enrolled continuously in all such semesters to date.8 Table 4 shows the result of such analysis. The point estimates in the first row suggest that, for these earliest three cohorts, counseling increases four-year college enrollment rates in the fall immediately following high school graduation by a statistically insignificant 10 percentage points or so. By the spring of their second year, counseled students are a statistically significant 26 percentage points more likely to be enrolled in a four-year college. They have enrolled in about 0.7 more semesters of four-year college by that time and are 27 percentage points more likely to have been enrolled for all four semesters to date. The latter result represents roughly a doubling of the baseline probability of such continuous enrollment. A graphical version of that result is shown in figure 4.

Table 4. Impact of Counseling on Persistence in Four-Year Colleges

| | (1) Bandwidth = 1.5, With Controls | (2) Bandwidth = 1.5, No Controls | (3) Bandwidth = 1.5, Donut Hole | (4) Bandwidth = 1.0, With Controls |
| --- | --- | --- | --- | --- |
| Enrolled in four-year college, fall year 1 | 0.126 | 0.093 | 0.136 | 0.083 |
| | (0.138) | (0.150) | (0.119) | (0.193) |
| CCM | 0.56 | 0.58 | 0.53 | 0.63 |
| Enrolled in four-year college as of spring year 2 | 0.256** | 0.223* | 0.206** | 0.266* |
| | (0.103) | (0.114) | (0.089) | (0.147) |
| CCM | 0.33 | 0.34 | 0.33 | 0.36 |
| Total semesters enrolled through spring year 2 | 0.771* | 0.646 | 0.705* | 0.656 |
| | (0.453) | (0.503) | (0.385) | (0.657) |
| CCM | 1.72 | 1.78 | 1.68 | 1.94 |
| Enrolled in all semesters through spring year 2 | 0.274** | 0.239** | 0.231** | 0.290* |
| | (0.110) | (0.114) | (0.093) | (0.163) |
| CCM | 0.25 | 0.26 | 0.26 | 0.26 |
| Enrolled in four-year college as of spring year 3 | 0.186 | 0.163 | 0.194 | 0.197 |
| | (0.142) | (0.137) | (0.140) | (0.188) |
| CCM | 0.29 | 0.29 | 0.26 | 0.33 |
| Total semesters enrolled through spring year 3 | 1.165* | 0.990 | 1.061* | 1.083 |
| | (0.651) | (0.699) | (0.582) | (0.946) |
| CCM | 2.32 | 2.39 | 2.28 | 2.60 |
| Enrolled in all semesters through spring year 3 | 0.181 | 0.157 | 0.185* | 0.231 |
| | (0.127) | (0.128) | (0.106) | (0.183) |
| CCM | 0.24 | 0.24 | 0.21 | 0.26 |
| N | 2,730 | 2,730 | 2,459 | 2,100 |

Notes: Robust standard errors clustered by distance from the GPA threshold are in parentheses. Coefficients come from regressions of the listed outcome on an indicator for Bottom Line counseling, where counseling has been instrumented with GPA eligibility as described in the text.
The sample includes the high school classes of 2010–12. The first row's outcome is an indicator for four-year college enrollment in the fall immediately after high school graduation. The second row's outcome is an indicator for four-year college enrollment in the spring two years after high school graduation. The third row measures the total number of semesters enrolled in four-year colleges by that second spring, and the fourth row uses an indicator for four semesters of such enrollment by that second spring. The remaining three rows repeat those outcomes but for the spring of the third year following high school graduation. Column 1 uses a bandwidth of 1.5 GPA points and includes the demographic controls listed in table A.2. Columns 2–4 replicate column 1, respectively removing the demographic controls, excluding observations less than 0.1 GPA point from the threshold, and limiting the bandwidth to 1.0 GPA point. Also listed is the control complier mean (CCM). *p < 0.10; **p < 0.05.

Figure 4. Persistence at Four-Year Colleges.

Nearly all of these persistence results by the spring of the second year after high school graduation are marginally or statistically significant. By the spring of students' third year, we still observe positive impacts on these various persistence measures, although only a few are marginally statistically significant. These results provide clear evidence that counseling improves overall persistence in four-year colleges through the end of students' second year, as well as suggestive evidence of improvement through the third year.

## 5.  Discussion and Conclusion

Improving college access and success for economically disadvantaged students has emerged as a top policy priority at the federal level. Much attention has been devoted to low-cost, easily scaled strategies to improve college entry and success for lower-income students. These informational and behavioral strategies have generated positive impacts for high-achieving students and for students who have already completed several key stages in the application process. It is an open question, however, whether these low-touch interventions would be similarly effective for students lower in the academic distribution or for students who are not as far along in the college process. In the absence of this evidence, many communities still provide intensive college advising to help high school juniors and seniors through the college and financial aid application process. Bottom Line is one such model, with a particular focus on guiding students to enroll at colleges and universities where the program believes students are well-positioned to graduate without incurring substantial debt.

Our results show clear evidence that such intensive college counseling influences students' college choices, with counseled students substantially more likely to enroll in colleges encouraged by the program. Counseling thus shifts students toward four-year colleges that are substantially less costly than ones they otherwise would have chosen. By helping students enroll and persist at institutions where they are equally likely to succeed but at substantially lower average cost, Bottom Line may reduce the financial burden students incur in pursuing a college degree. Given substantial policy attention to rising loan default rates and the negative impacts that loan repayments can have on asset accumulation and other outcomes, this is an encouraging finding.
It also suggests that other college access programs may want to focus not only on increasing enrollment rates but also on shifting students toward colleges with more favorable characteristics, such as lower costs and higher graduation rates. It may be easier to change the college choices of students on the intensive margin (choosing which college to attend) than the extensive margin (choosing whether to attend). There is also suggestive, though not conclusive, evidence that counseling shifts some students out of the two-year sector and into the four-year sector. Importantly, we also see suggestive evidence of increased persistence in four-year colleges after three years. This suggests that intensive counseling alters not only initial college enrollment but also subsequent longer-run outcomes critical to evaluating the efficacy of such programs.

There is a growing body of evidence that suggests the choice of where to enroll for lower-income or first-generation students can have important consequences for their longer-term success. Recent research demonstrates that students who are just above an admissions threshold for a public four-year university have substantially higher bachelor's degree completion rates than students just below the threshold who have access to community colleges instead (Goodman, Hurwitz, and Smith 2017). There has been a dramatic increase in loan default rates among borrowers at for-profit institutions and community colleges, which enroll a substantial share of the lower-income and first-generation student populations (Dynarski 2014). Employers appear to assign less value to degrees from for-profit institutions than they do to degrees from less-selective public institutions, and labor market returns to a for-profit degree are also lower than for degrees from other institutions (Cellini and Turner 2016; Deming et al. 2016). Thus, Bottom Line's impacts on low-income students' college choices have the potential to generate lasting benefits further down the road.

Moreover, Bottom Line's ability to encourage students, through advising, to attend institutions of comparable quality but lower net cost can also inform ongoing state and federal efforts, such as the recently updated College Scorecard, to promote more informed consumer choice about higher education. One potential conclusion is that the Department of Education should invest in resources to proactively reach out to students and families about the information contained in the Scorecard, especially with newly available information about indebtedness and earnings, to help them make informed choices about where they apply and attend.

One question that our research cannot definitively address is the channel through which intensive college counseling affects college persistence. Counseling during high school affects college choice and affordability, which may be sufficient to explain the observed persistence results. Many of the treated students continue, however, to receive counseling while enrolled at college, a key feature of Bottom Line's model. We cannot identify whether increased college affordability or continuing support while on campus, or some combination thereof, explains increased persistence.

An open question is whether Bottom Line's impacts on where students enroll and whether they persist are sufficient to justify the program's cost of approximately $5,000 per student served, given that the Access program costs about $1,400 per student and the Success program costs about $1,000 for each year a student is in college.
Our estimates suggest increased four-year college enrollment and persistence rates on the order of 20 percentage points, which in turn suggests a cost of roughly $25,000 ($5,000 ÷ 0.20) per additional college enrollee or persister. Cohodes and Goodman (2014) estimate that, in Massachusetts, the net present value of a college degree relative to some college without a degree is roughly $1,000,000. If Bottom Line's persistence impacts translate into completion impacts, then this treatment raises students' lifetime earnings by roughly $200,000 (0.20 × $1,000,000), far outweighing the costs of the intervention. Nonetheless, it is important to ask whether the financial resources currently allocated to programs like Bottom Line could be more effectively allocated to other policy strategies for improving college access and success. Dynarski, Hyman, and Schanzenbach (2013) note that class size reductions in the Tennessee STAR experiment cost $400,000 per additional college enrollee, while Upward Bound's bundle of treatments costs over $90,000 per additional enrollee. Bottom Line's costs look quite favorable relative to these interventions. Conversely, the Free Application for Federal Student Aid (FAFSA) completion assistance program studied in Bettinger et al. (2012) costs only $1,100 per additional college enrollee, and the peer mentoring programs studied in Carrell and Sacerdote (2013) cost only $2,400 per additional enrollee. The intensive counseling provided by Bottom Line falls somewhere in the middle of the range of costs spanned by these interventions.

It is also important to ask how scalable programs like Bottom Line are to other communities, both because of the costs per student served and because of the leadership and expertise required to advise students as comprehensively as Bottom Line does. One potentially interesting area for further inquiry is whether intensive advising programs like Bottom Line can provide similar one-on-one guidance to students remotely, via interactive technologies like video chat, screen sharing, and document collaboration. This type of remote advising, if successful, could allow for economies of scale to lower per-student costs. The CollegePoint initiative, supported by Bloomberg Philanthropies, is currently investigating the efficacy of this remote advising approach.

As a result of these considerations, we have begun collaborating with Bottom Line on the design of a long-term randomized controlled trial to more thoroughly evaluate the program's impact on students' college trajectories. Starting with the graduating class of 2015, we are implementing a multi-cohort experiment across Bottom Line's Massachusetts and New York sites. This experiment will provide sufficient power to estimate overall enrollment and persistence effects more precisely, and the randomized design will allow us to investigate average treatment effects across the population of students who are eligible for the program. In addition, we are conducting surveys with students both while they are still in high school and once they have matriculated in college, in order both to investigate the mechanisms through which Bottom Line affects students' decisions and college outcomes, and to capture a more holistic set of measures of how Bottom Line shapes students' college choices and postsecondary experiences. We will follow students for six to eight years following high school graduation in order to investigate the program's effect on degree completion.
The results of this experiment will provide better evidence on whether intensive advising programs like Bottom Line, which clearly affect the type of institution at which students enroll, justify the greater upfront resource investment.

## Acknowledgments

We thank Greg Johnson and Andrew MacKenzie of Bottom Line for explaining how their counseling program works and for sharing data on applicants to their program. We thank Carrie Conaway of the Massachusetts Department of Elementary and Secondary Education for sharing state data on student outcomes. Napat Jatusripitak and Carlos Paez provided excellent research assistance. Joshua Goodman gratefully acknowledges support from the Taubman Center for State and Local Government and the Rappaport Institute for Greater Boston. All errors are our own.

## REFERENCES

Abadie, Alberto. 2003. Semiparametric instrumental variable estimation of treatment response models. Journal of Econometrics 113(2): 231–263. doi:10.1016/S0304-4076(02)00201-4.

Abadie, Alberto, Joseph Angrist, and Guido Imbens. 2002. Instrumental variables estimates of the effect of subsidized training on the quantiles of trainee earnings. Econometrica 70(1): 91–117. doi:10.1111/1468-0262.00270.

Avery, Christopher. 2010. The effects of college counseling on high-achieving, low-income students. NBER Working Paper No. 16359.

Avery, Christopher. 2013. Evaluation of the College Possible program: Results from a randomized controlled trial. NBER Working Paper No. 19562.

Avery, Christopher, and Thomas J. Kane. 2004. Student perceptions of college opportunities: The Boston COACH program. In College choices: The economics of where to go, when to go, and how to pay for it, edited by Caroline Hoxby, pp. 355–394. Chicago: University of Chicago Press. doi:10.7208/chicago/9780226355375.003.0009.

Bailey, Martha J., and Susan M. Dynarski. 2011. Inequality in postsecondary attainment. In Whither opportunity: Rising inequality, schools, and children's life chances, edited by Greg Duncan and Richard Murnane, pp. 117–132. New York: Russell Sage Foundation.

Baum, Sandy, Jennifer Ma, and Kathleen Payea. 2013. Education pays 2013: The benefits of higher education for individuals and society. New York: The College Board.

Bettinger, Eric P., Bridget T. Long, Philip Oreopoulos, and Lisa Sanbonmatsu. 2012. The role of application assistance and information in college decisions: Results from the H&R Block FAFSA experiment. Quarterly Journal of Economics 127(3): 1205–1242. doi:10.1093/qje/qjs017.

Bowen, William G., Matthew M. Chingos, and Michael S. McPherson. 2009. Crossing the finish line. Princeton, NJ: Princeton University Press.

Carrell, Scott E., and Mark Hoekstra. 2014. Are school counselors a cost-effective education input? Economics Letters 125(1): 66–69. doi:10.1016/j.econlet.2014.07.020.

Carrell, Scott, and Bruce Sacerdote. 2013. Late interventions matter too: The case of college coaching in New Hampshire. NBER Working Paper No. 19031.

Castleman, Benjamin L., and Lindsay C. Page. 2016. Freshman year financial aid nudges: An experiment to increase FAFSA renewal and college persistence. Journal of Human Resources 51(2): 389–415. doi:10.3368/jhr.51.2.0614-6458R.

Cellini, Stephanie Riegg, and Nicholas Turner. 2016. Gainfully employed? Assessing the employment and earnings of for-profit college students using administrative data. NBER Working Paper No. 22287.
Cohodes, Sarah, and Joshua Goodman. 2014. Merit aid, college quality and college completion: Massachusetts' Adams Scholarship as an in-kind subsidy. American Economic Journal: Applied Economics 6(4): 251–285. doi:10.1257/app.6.4.251.

Deming, David, Noam Yuchtman, Amira Abulafi, Claudia Goldin, and Lawrence Katz. 2016. The value of postsecondary credentials in the labor market: An experimental study. American Economic Review 106(3): 778–806. doi:10.1257/aer.20141757.

Dynarski, Susan M. 2014. An economist's perspective on student loans in the United States. ES Working Paper Series, Brookings Institution.

Dynarski, Susan M., Steven W. Hemelt, and Joshua M. Hyman. 2015. The missing manual: Using National Student Clearinghouse data to track postsecondary outcomes. Educational Evaluation and Policy Analysis 37(1S): 53S–79S. doi:10.3102/0162373715576078.

Dynarski, Susan, Joshua Hyman, and Diane Whitmore Schanzenbach. 2013. Experimental evidence on the effect of childhood investments on postsecondary attainment and degree completion. Journal of Policy Analysis and Management 32(4): 692–717. doi:10.1002/pam.21715.

Goodman, Joshua, Michael Hurwitz, and Jonathan Smith. 2017. Access to 4-year public colleges and degree completion. Journal of Labor Economics 35(3): 829–867.

Grodsky, Eric, and Melanie T. Jones. 2007. Real and imagined barriers to college entry: Perceptions of cost. Social Science Research 36(2): 745–766. doi:10.1016/j.ssresearch.2006.05.001.

Horn, Laura, Xianglei Chen, and Chris Chapman. 2003. Getting ready to pay for college: What students and their parents know about the cost of college tuition and what they are doing to find out. Washington, DC: National Center for Education Statistics, Institute of Education Sciences, U.S. Department of Education.

Hoxby, Caroline, and Christopher Avery. 2013. The missing "one-offs": The hidden supply of high-achieving, low-income students. Available online; accessed 1 March 2017.

Hoxby, Caroline, and Sarah Turner. 2013. Expanding college opportunities for high-achieving, low-income students. Stanford, CA: SIEPR Discussion Paper No. 12–014.

Hurwitz, Michael, and Jessica Howell. 2014. Estimating causal impacts of school counselors with regression discontinuity designs. Journal of Counseling and Development 92(3): 316–327. doi:10.1002/j.1556-6676.2014.00159.x.

Imbens, Guido, and Karthik Kalyanaraman. 2012. Optimal bandwidth choice for the regression discontinuity estimator. Review of Economic Studies 79(3): 933–959. doi:10.1093/restud/rdr043.

Lee, David S., and David Card. 2008. Regression discontinuity inference with specification error. Journal of Econometrics 142(2): 655–674. doi:10.1016/j.jeconom.2007.05.003.

McCrary, Justin. 2008. Manipulation of the running variable in the regression discontinuity design: A density test. Journal of Econometrics 142(2): 698–714. doi:10.1016/j.jeconom.2007.05.005.

Reback, Randall. 2010. Noninstructional spending improves noncognitive outcomes: Discontinuity evidence from a unique elementary school counselor financing system. Education Finance and Policy 5(2): 105–137. doi:10.1162/edfp.2010.5.2.5201.

Seftor, Neil, Arif Mamun, and Allen Schirm. 2009. The impacts of regular Upward Bound on postsecondary outcomes 7–9 years after scheduled high school graduation: Final report. Princeton, NJ: Mathematica Policy Research.

Smith, Jonathan, Matea Pender, and Jessica Howell. 2013. The full extent of academic undermatch. Economics of Education Review 32: 247–261. doi:10.1016/j.econedurev.2012.11.001.
Staiger, Douglas, and James H. Stock. 1997. Instrumental variables regression with weak instruments. Econometrica 65(3): 557–586. doi:10.2307/2171753.

Figure A.1. GPA Distribution.

Table A.1. Characteristics of Bottom Line's Encouraged and Discouraged Colleges

| College | Control | Level | Enrollment | Net price ($) | Tuition ($) | Six-year grad. rate | Pell share |
| --- | --- | --- | --- | --- | --- | --- | --- |
| **Panel A: Encouraged Colleges** | | | | | | | |
| Bentley University | Private nonprofit | BA | 4,172 | 29,886 | 41,110 | 0.87 | 0.16 |
| Boston College | Private nonprofit | BA | 9,465 | 33,070 | 45,622 | 0.91 | 0.14 |
| Boston University | Private nonprofit | BA | 16,460 | 34,603 | 44,880 | 0.84 | 0.15 |
| Bridgewater State University | Public | BA | 9,489 | 17,477 | 8,053 | 0.58 | 0.24 |
| Clark University | Private nonprofit | BA | 2,312 | 23,415 | 39,550 | 0.81 | 0.21 |
| Fitchburg State University | Public | BA | 4,148 | 12,849 | 8,985 | 0.50 | 0.34 |
| Framingham State University | Public | BA | 4,255 | 17,552 | 8,080 | 0.51 | 0.28 |
| College of the Holy Cross | Private nonprofit | BA | 2,878 | 32,118 | 44,272 | 0.91 | 0.16 |
| University of Massachusetts-Lowell | Public | BA | 11,830 | 16,351 | 12,097 | 0.54 | 0.30 |
| University of Massachusetts-Amherst | Public | BA | 21,672 | 19,087 | 13,443 | 0.73 | 0.25 |
| University of Massachusetts-Boston | Public | BA | 11,786 | 11,741 | 11,966 | 0.44 | 0.38 |
| MCPHS University | Private nonprofit | BA | 3,808 | 34,345 | 28,470 | 0.71 | 0.30 |
| Massachusetts College of Liberal Arts | Public | BA | 1,483 | 14,837 | 8,525 | 0.57 | 0.45 |
Notes: Panel A lists the colleges to which Bottom Line encourages students to apply. Panel B lists those colleges to which Bottom Line discourages students from applying. College characteristics are taken from the 2013 version of the IPEDS. Beneath each panel is the unweighted average of each characteristic across the given set of colleges. Averages weighted by undergraduate enrollment are very similar.

Table A.2. Covariate Balance Test

| | (1) Female | (2) Low income | (3) Hispanic | (4) Asian | (5) White | (6) Other | (7) ESL | (8) LEP | (9) Special education | (10) Vocational education |
|---|---|---|---|---|---|---|---|---|---|---|
| **Panel A: 2010–14 Cohorts** | | | | | | | | | | |
| Eligible (BW = 1.5) | 0.027 | −0.010 | −0.074** | 0.031 | −0.017 | −0.016 | 0.035 | 0.026 | −0.005 | 0.030 |
| | (0.043) | (0.032) | (0.033) | (0.021) | (0.019) | (0.014) | (0.037) | (0.021) | (0.023) | (0.020) |
| N = 4,992 | | | | | | | | | | |
| Eligible (BW = 1.0) | 0.025 | −0.015 | −0.047 | 0.014 | −0.030 | −0.021 | 0.023 | 0.005 | −0.004 | 0.033 |
| | (0.052) | (0.040) | (0.041) | (0.023) | (0.020) | (0.018) | (0.049) | (0.025) | (0.026) | (0.023) |
| N = 3,780 | | | | | | | | | | |
| Control mean | 0.69 | 0.75 | 0.37 | 0.05 | 0.08 | 0.05 | 0.39 | 0.07 | 0.07 | 0.17 |
| **Panel B: 2010–12 Cohorts** | | | | | | | | | | |
| Eligible (BW = 1.5) | 0.016 | −0.038 | −0.077 | 0.025 | 0.013 | −0.022 | −0.031 | 0.033 | −0.014 | 0.079** |
| | (0.050) | (0.039) | (0.047) | (0.023) | (0.030) | (0.026) | (0.039) | (0.025) | (0.021) | (0.035) |
| N = 2,730 | | | | | | | | | | |
| Eligible (BW = 1.0) | 0.054 | −0.056 | −0.075 | 0.014 | 0.005 | −0.028 | −0.051 | 0.018 | −0.001 | 0.093** |
| | (0.054) | (0.048) | (0.057) | (0.028) | (0.029) | (0.031) | (0.047) | (0.028) | (0.021) | (0.041) |
| N = 2,100 | | | | | | | | | | |
| Control mean | 0.67 | 0.72 | 0.40 | 0.02 | 0.09 | 0.12 | 0.40 | 0.05 | 0.11 | 0.16 |
Notes: Robust standard errors clustered by distance from the GPA threshold are in parentheses. Coefficients come from regressions of the listed covariate on an indicator for GPA eligibility, distance from the GPA threshold, the interaction of those two, and high school class fixed effects, using a bandwidth (BW) of 1.5 and 1.0 GPA points. Panel A includes the high school classes of 2010–14, and panel B includes the classes of 2010–12. Covariates tested are all indicators, including in columns 7–10 English as a second language, limited English proficiency, special education, and vocational education status. Also listed is the mean value of each covariate for students with GPAs between 2.3 and 2.5.

**p < 0.05.

## Notes

1. School counselors may also have impacts prior to high school. Carrell and Hoekstra (2014), for example, find that the random addition of a graduate student counselor intern in elementary schools improves boys' test scores and behavior. Reback (2010) finds that additional elementary school counselors improve behavior but not test scores.

2. Bottom Line has also begun more recent operations in New York City and Chicago.

3. Bottom Line also attempts to verify some of the self-reported characteristics, including income and GPA. The chance that Bottom Line does this for a given student is related to their initial self-reports and is thus endogenous to the selection process itself. As such, we focus on the versions reported initially by all students on their applications. Though our main specifications include all students, our central results are not affected by excluding the 300 or so students who are ineligible due to income, first-generation status, or other reasons not related to GPA.

4. We define fall enrollment as having an enrollment spell that includes 1 October and, for later measures of persistence, spring enrollment as a spell that includes 1 March.

5. Nearly every student accepted into Bottom Line's program receives at least some counseling, so that we do not distinguish acceptance from counseling itself.

6. Students may have been aware that Bottom Line would eventually request transcripts in part to verify their GPAs, which might discourage students from such inflation.

7. Although we can observe the earliest two cohorts through four years of college, three cohorts is the minimum we need in order to generate estimates with sufficient precision to be of interest.

8. By construction, this variable can range from zero (no four-year college enrollment at any time) to six (four-year college enrollment in all semesters observable by the end of the third year).
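The balance regression described in the notes to table A.2 is straightforward to sketch in code. The snippet below is illustrative only, with simulated data and hypothetical variable names rather than the paper's data: it regresses a covariate on the eligibility indicator, distance from the GPA threshold, their interaction, and class fixed effects, clustering by distance from the threshold.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 4992
df = pd.DataFrame({
    # GPA minus the 2.5 threshold, in 0.1-point bins (the clustering unit)
    "gpa_dist": rng.choice(np.round(np.arange(-1.5, 1.6, 0.1), 1), n),
    "hs_class": rng.choice([2010, 2011, 2012, 2013, 2014], n),
    "female": rng.binomial(1, 0.69, n),   # covariate being tested for balance
})
df["eligible"] = (df["gpa_dist"] >= 0).astype(int)

fit = smf.ols(
    "female ~ eligible + gpa_dist + eligible:gpa_dist + C(hs_class)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["gpa_dist"]})

# With simulated data the coefficient should be near zero, mirroring balance.
print(fit.params["eligible"], fit.bse["eligible"])
```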
proofpile-shard-0030-366
{ "provenance": "003.jsonl.gz:367" }
# Sum of all

Find the value of
$$\sum_{x=1}^{2015} \gcd(x, 2015),$$
where $\gcd(a,b)$ is the greatest common divisor function.
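One way to check the value by brute force (this snippet is an addition, not part of the original problem page): the function $g(n)=\sum_{x=1}^{n}\gcd(x,n)$ is multiplicative, and for a prime $p$ we have $g(p)=(p-1)\cdot 1+p=2p-1$; since $2015=5\cdot13\cdot31$, the answer should equal $9\cdot25\cdot61$.

```python
from math import gcd

# Brute force: sum gcd(x, 2015) over x = 1..2015.
total = sum(gcd(x, 2015) for x in range(1, 2016))
print(total)  # 13725

# Cross-check via multiplicativity: g(p) = 2p - 1 for prime p,
# and 2015 = 5 * 13 * 31, so g(2015) = 9 * 25 * 61.
print(9 * 25 * 61)  # 13725
```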
proofpile-shard-0030-367
{ "provenance": "003.jsonl.gz:368" }
# Multivariable calculus

## Homework Statement

The height of a hill is given by
$$h(x,y) = \frac{40}{4+x^{2}+3y^{2}}.$$
A stream passes through the point (1,1,5), which lies on the surface of h. The stream follows the path of steepest descent. Find the equation of the stream.

## Homework Equations

I take it this is relevant to the tangent hyperplane of a surface. The tangent plane at a point a is $f(a)+\nabla f(a)\cdot(x-a)$.

## The Attempt at a Solution

I think the path of the stream is the intersection of the surface h and a plane orthogonal to the tangent plane at the point (1,1,5), but I'm not sure.

**Dick (Homework Helper):** The tangent plane has many different orthogonal planes. You are thinking in too many dimensions. Just regard h as a function of x and y. Then the gradient of h will point in the direction of most rapid change of h.

**OP:** So I take the gradient of h at the point (1,1), which gives me $\nabla h(1,1) = (-5/4,\, -15/4)$. But the path of the stream isn't going to be a straight line, is it? How do I get the equation of the stream using only the gradient and the point at which the gradient is taken?

**Dick:** No, it isn't going to be a straight line. Can you see how to use the gradient vector to compute the slope of the steepest-descent direction at an arbitrary point (x,y)? Then parametrize the solution curve as (x(t), y(t)). The slope of this solution curve is y'(t)/x'(t) = dy/dx. Equate the two slopes and solve the differential equation.

**OP:** Forgive me, I'm not used to differential equations. Here's what I've got: the gradient at a point (x,y) on the surface is
$$\left(\frac{-80x}{(4+x^{2}+3y^{2})^{2}},\; \frac{-240y}{(4+x^{2}+3y^{2})^{2}}\right).$$
So the slope at (x,y) is $\frac{-240y}{-80x} = \frac{3y}{x}$, and this should be equal to dy/dx. Separating variables gives dy/y = 3 dx/x; integrating both sides, ln(y) = ln(x) + C. Am I on the right track? Thanks.

**Dick:** You are very much on the right track. But what happened to the '3'?

**OP:** Oops, that was a typo. It's supposed to be ln(y) = 3 ln(x) + C. However, I haven't used the fact that the stream passes through the point (1,1,5) yet. Can I just plug the point into the equation: ln(1) = 3 ln(1) + C, so C = 0? Therefore, is the equation of the stream ln(y) = 3 ln(x)?

**Dick:** Yes and yes. If you exponentiate both sides you can get a simpler expression.

**OP:** Great! Thanks a lot, man.
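As a cross-check (my addition, not part of the thread), the same ODE and initial condition can be handed to SymPy, which recovers the simpler exponentiated form directly:

```python
import sympy as sp

x = sp.symbols("x", positive=True)
y = sp.Function("y")

# Steepest descent gives the slope dy/dx = 3y/x, with y(1) = 1 at the start.
ode = sp.Eq(y(x).diff(x), 3 * y(x) / x)
sol = sp.dsolve(ode, y(x), ics={y(1): 1})
print(sol)  # Eq(y(x), x**3): the stream lies over the curve y = x^3
```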
proofpile-shard-0030-368
{ "provenance": "003.jsonl.gz:369" }
Changes between Version 37 and Version 38 of ESGFNodeInstallation

Timestamp: Nov 13, 2013 12:50:28 PM

= Data Publishing =

Now, we are going to publish the data from a project called cordex.

First, add the project name to the esgcet_models_table.txt file:
{{{
#!sh
echo "cordex | WRF331G-v02 | http://meteo.unican.es | UNICAN WRF3.3.1 Model version, 2.0" > /esg/config/esgcet/esgcet_models_table.txt
}}}

Second, add the project information to the esg.ini file:
{{{
#!sh
/esg/config/esgcet/esg.ini
}}}

[http://devel.esgf.org/wiki/ESGF_Data_Publishing]
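One caveat worth noting: the echo above overwrites the models table. If the file already lists other models, appending may be what is wanted. A minimal sketch of that follows; it is my own illustration, not from the wiki, and the pipe-separated field layout (project | model | institute URL | description) is inferred from the example line above.

```python
# Illustrative only: append the entry instead of overwriting the table.
entry = "cordex | WRF331G-v02 | http://meteo.unican.es | UNICAN WRF3.3.1 Model version, 2.0"

with open("/esg/config/esgcet/esgcet_models_table.txt", "a") as f:
    f.write(entry + "\n")
```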
proofpile-shard-0030-369
{ "provenance": "003.jsonl.gz:370" }
# How could the baseline of atmospheric neutrinos be as much as $10^7$ meters while the exosphere (the outer part of the atmosphere) is at most $8\times10^5$ meters?

Atmospheric neutrinos are the neutrinos produced by the interaction of cosmic rays with the Earth's atmosphere. The Earth's atmosphere extends at most 800 km $= 8\times10^5$ meters. So how can the "baseline" (the distance of flight) of atmospheric neutrinos be as much as $10^7$ meters, as stated in the table of properties of atmospheric neutrinos in the Particle Data Group review (the most cited reference in particle physics): http://pdg.lbl.gov/2019/reviews/rpp2019-rev-neutrino-mixing.pdf

And a bonus question: how can solar neutrinos have a baseline stated as $10^{10}$ meters, higher than 150 million km, which is of order $10^8$ meters? That looks impossible.

Neutrinos produced in Earth's atmosphere have two opportunities to interact with detectors on Earth's surface: once on their way down from the sky, and again when they emerge unscathed on the other side of the planet. So the baseline is more like the diameter of Earth, which is about $12.7\times10^6$ meters.

Re your bonus question: "million kilometers" is a stupid unit, and everyone makes off-by-thousand exponent-counting errors when using it. An astronomical unit is $1.5\times10^{11}$ meters.
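A quick arithmetic check of the two length scales (added here for illustration):

```python
R_EARTH = 6.371e6    # mean Earth radius in meters
print(2 * R_EARTH)   # ~1.27e7 m: the through-the-Earth baseline

AU_KM = 1.496e8      # one astronomical unit, about 150 million km
print(AU_KM * 1e3)   # ~1.5e11 m, i.e. 10^11 m rather than 10^8 m
```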
proofpile-shard-0030-370
{ "provenance": "003.jsonl.gz:371" }
# Asymptotics of a recursion

Suppose we have the following two sequences:
$$\alpha_k = (k-1)\left(1-\frac{1}{1+(k+1)\ell}\right), \quad k \geq 2,$$
$$\beta_k = (k-1)\left(1+\frac{1}{1+(k-1)\ell}\right), \quad k \geq 2,$$
where $\ell$ is a positive constant, and define the sequence $c_k$ recursively by
$$c_2 = -1/\beta_2, \qquad c_3 = 0, \qquad c_{k+1} = \frac{\alpha_{k-1}}{\beta_{k+1}}\,c_{k-1}, \quad k \geq 3.$$
It is not hard to see that this gives
$$c_2 = -1/\beta_2, \qquad c_{2k} = -\frac{\alpha_2}{\beta_2}\cdot\frac{\alpha_4}{\beta_4}\cdots\frac{\alpha_{2k-2}}{\beta_{2k-2}}\cdot\frac{1}{\beta_{2k}}, \quad k \geq 2, \qquad c_{2k+1} = 0, \quad k \geq 1.$$
Apparently we must have $d_k = c_{2k} \sim k^{-(1+1/\ell)}$, but I have no idea how to show this. Can anyone shed some light on this?

Also a similar question for the recursion
$$c_2 = -1/\beta_2, \qquad c_3 = \frac{\gamma_2}{\beta_3}c_2, \qquad c_{k+1} = \frac{\alpha_{k-1}}{\beta_{k+1}}c_{k-1} + \frac{\gamma_k}{\beta_{k+1}}c_k, \quad k \geq 3,$$
where
$$\gamma_k = \sigma\,\frac{\ell(k^3-k)}{1+k\ell}, \quad k \geq 2,$$
$\sigma$ being also a positive constant. How would the asymptotics look in this case?

For the first sequence,
$$d_k = \frac{\alpha_{2k-2}}{\beta_{2k}}\,d_{k-1} = \frac{k-3/2}{k-1/2+1/\ell}\,d_{k-1},$$
so one has
$$d_k = C\,\frac{\Gamma(k-1/2)}{\Gamma(k+1/2+1/\ell)},$$
the constant $C$ being determined by the initial condition $d_1$, namely
$$C = d_1\,\frac{\Gamma(3/2+1/\ell)}{\Gamma(1/2)}.$$
Recall that $\Gamma(x+a) = x^a\,\Gamma(x)\,(1+o(1))$ as $x\to+\infty$, so
$$d_k = C\,k^{-1-1/\ell}\,(1+o(1)).$$

For the second sequence,
$$c_{k+1} = \frac{\sigma(k-1)(k+1)}{k+2/\ell}\,c_k + \frac{k-2}{k+2/\ell}\,c_{k-1},$$
which implies $c_{k-1}/c_k = O(1/k)$; if we plug this into the recursion again we have
$$c_{k+1} = \frac{\sigma(k-1)(k+1)}{k+2/\ell}\,c_k\,\bigl(1+O(1/k^2)\bigr),$$
whence
$$c_k = A\,\sigma^k\,\frac{\Gamma(k+1)\Gamma(k-1)}{\Gamma(k+2/\ell)}\,(1+o(1)),$$
because the infinite product of the $1+O(1/k^2)$ factors is convergent. By the Stirling formula,
$$c_k = B\,\sigma^k\,k^{k-1/2-2/\ell}\,e^{-k}\,(1+o(1))$$
for a certain constant $B$.

• Is it just me, or is the sign in the power of $k$ wrong in your answer for the first question? Jan 1, 2016 at 0:17

The first factor telescopes completely, and the second becomes easier by combining $\alpha_{2\kappa}$ with $\beta_{2\kappa+2}$, so
$$c_{2k} = \frac{1+\ell}{(2k-1)(2+\ell)}\prod_{\kappa=1}^k \left(1-\frac{2}{2+(2\kappa+1)\ell}\right).$$
Now take logarithms, and apply Euler–Maclaurin summation.

• If $f$ is a smooth function, then $\int f(t)\,dt$ is the first approximation to $\sum f(n)$. Euler–Maclaurin gives a sequence of correction terms, involving higher and higher derivatives of $f$. When applied to a sum of length $N$, using $k$-th derivatives usually gives an error of size $k!^C N^{-k}$, so taking $k$ too big does not pay off, but most of the time one or two terms suffice. Jan 1, 2016 at 11:09
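A quick numerical sanity check of the claimed exponent (my addition, with $\ell = 2$ chosen arbitrarily): if $d_k \sim C\,k^{-(1+1/\ell)}$, then $d_k\,k^{1+1/\ell}$ should level off at a constant.

```python
l = 2.0  # an arbitrary positive constant for the check

def alpha(k):
    return (k - 1) * (1 - 1 / (1 + (k + 1) * l))

def beta(k):
    return (k - 1) * (1 + 1 / (1 + (k - 1) * l))

# d_k = c_{2k}: d_1 = -1/beta(2), and d_k = alpha(2k-2)/beta(2k) * d_{k-1}.
d = -1 / beta(2)
for k in range(2, 2001):
    d *= alpha(2 * k - 2) / beta(2 * k)
    if k in (10, 100, 1000, 2000):
        # This rescaled value should approach the constant C.
        print(k, d * k ** (1 + 1 / l))
```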
proofpile-shard-0030-371
{ "provenance": "003.jsonl.gz:372" }
What is heat of vaporization?

The heat of vaporization, ΔHvap, also called the enthalpy of vaporization or the latent heat of vaporization, is the amount of thermal energy required to convert a quantity of liquid into vapor at its boiling point, with no change in temperature. The energy goes into breaking the intermolecular attractive forces that hold the liquid together, not into raising the temperature, which is why the heat is called latent (hidden). Vaporization is endothermic; the reverse process, condensation, releases exactly the same amount of heat and is exothermic.

Every substance has its own heat of vaporization, which can be reported per unit mass (J/g or kJ/kg) or per mole. The molar heat of vaporization is the heat absorbed by one mole of a substance as it is converted from liquid to gas, usually expressed in kJ/mol. The heat needed to vaporize a mass m of liquid at its boiling point is simply q = m × ΔHvap.
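A quick worked example using numbers that appear later in this article: how many grams of water at 100 °C can be converted to steam by 226,000 J of energy, given water's latent heat of vaporization of 2,260 J/g?

```python
H_VAPOR_WATER = 2260.0   # J per gram of water at 100 degrees C

energy = 226_000.0       # J supplied
mass = energy / H_VAPOR_WATER
print(mass)              # 100.0 g of water converted to steam
```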
Water boils at 100 °C (212 °F) at atmospheric pressure, at which point it converts to the vapor we know as steam. Its latent heat of vaporization is about 539–540 cal/g, equivalent to roughly 2,260 J/g, or about 40.7 kJ/mol: if 40.7 kJ of heat is supplied to one mole of water at 373 K, it is converted entirely to vapor at that same 373 K. This value is among the highest of any common liquid, a consequence of the strong hydrogen bonds between water molecules; the same bonding gives water its relatively high boiling point. Because each escaping gram carries away so much energy, water is very good at evaporative cooling, which is what happens when we sweat: the liquid left behind must supply the heat of vaporization to the molecules that evaporate, and so it cools. The enthalpy of vaporization of water is tabulated as a function of temperature in standard references.

The heat of vaporization also depends on pressure. While water is heated in an open vessel, its temperature keeps rising until it reaches the boiling point, where it remains until the heater is turned off or the liquid is gone. At 16 MPa, the pressure inside a reactor pressurizer, the latent heat of vaporization of water is much smaller than at atmospheric pressure. And if the vessel is sealed, the steam cannot escape, the pressure over the water surface rises, and the remaining water molecules can no longer escape as readily.
Latent heat is the heat absorbed or released when a substance changes phase at constant temperature and pressure; it contrasts with sensible heat, the energy transfer that shows up as a change of temperature without a change of phase. During a phase change there is no temperature change, so there is no change in the average kinetic energy of the particles; the energy instead goes into, or comes back out of, the potential energy stored in the bonds between particles. The latent heat of fusion applies when a solid melts or a liquid freezes; the latent heat of vaporization applies when a liquid boils or a vapor condenses; the latent heat of sublimation applies when a solid converts directly to vapor. The latent heat is written L and given in J/kg in mks units, and the heat exchanged by a mass m in a phase change is q = mL.

Some representative values. For water: a latent heat of fusion of about 80 cal/g (334 kJ/kg), a latent heat of vaporization of about 540 cal/g (2,260 kJ/kg), and a specific heat of 1 cal/g·°C (4.19 kJ/kg·K) in between. For liquid nitrogen, vaporization takes about 199 kJ/kg. Element data pages list comparable quantities for other substances, for example a specific heat of 14.304 J/g·K for hydrogen, 1.04 J/g·K for nitrogen, and 1.02 J/g·K for magnesium, together with each element's latent heats of fusion and vaporization; historically, Marignac found the latent heat of vaporization of mercury to be 103 to 106.

Because heat energy can be "spent" on only one job at a time, problems that carry a sample across several phases combine sensible-heat steps (q = mcΔT) with latent-heat steps (q = mHf or q = mHv). On a heating curve for water this appears as alternating segments: a temperature rise in the solid, a flat segment while the latent heat of fusion is absorbed, a rise in temperature as the liquid water absorbs heat, a flat segment at the boiling point while the latent heat of vaporization is absorbed, and so on.
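Here is that bookkeeping in code, a small illustration of my own using the round values above: heating 50 g of ice at 0 °C all the way to steam at 100 °C.

```python
H_FUSION = 334.0    # J/g, latent heat of fusion of water
H_VAPOR = 2260.0    # J/g, latent heat of vaporization of water
C_WATER = 4.19      # J/(g*K), specific heat of liquid water

m = 50.0                                # grams of ice at 0 degrees C
q_melt = m * H_FUSION                   # melt the ice at 0 C
q_warm = m * C_WATER * (100.0 - 0.0)    # warm the liquid from 0 to 100 C
q_boil = m * H_VAPOR                    # vaporize at 100 C

total = q_melt + q_warm + q_boil
print(total / 1000)                     # about 151 kJ in all
```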
The heat of vaporization is not a fixed constant of a substance. It diminishes with increasing temperature and vanishes completely at the critical temperature, because above the critical point the liquid and vapor phases no longer coexist.

Vaporization itself comes in two forms. When a sample of a liquid is introduced into a container, the liquid tends to evaporate: as the temperature rises, the kinetic energy of the molecules increases, the force of attraction between them is overcome by the fastest molecules at the surface, and those molecules escape into the surrounding space as vapor. Evaporation occurs at the surface and at temperatures below the boiling point, and its rate increases as the temperature increases; this is why the heat of vaporization can apply at temperatures much lower than the normal boiling point. The concentration of the resulting gas is given by its vapor pressure. Boiling, by contrast, occurs throughout the bulk: the temperature at which the bulk of a liquid at a given pressure converts to vapor, forming bubbles within the liquid itself, is the boiling point. Turning up the heat makes a liquid boil faster (an increased rate of vaporization), but the temperature of a boiling liquid remains constant until all of the liquid has been converted to gas. The reverse transition releases latent heat as well: when a system of dry air and water vapor is cooled to the dew point and the vapor condenses, the enthalpy released heats the air-vapor-liquid system, reducing or even eliminating the rate of temperature reduction.
Because vapor pressure rises steeply with temperature, vapor-pressure measurements at two or more temperatures can be used to extract the heat of vaporization through the Clausius–Clapeyron equation,
$$\ln\frac{P_2}{P_1} = -\frac{\Delta H_{vap}}{R}\left(\frac{1}{T_2}-\frac{1}{T_1}\right),$$
where R is the gas constant. The heat of vaporization of a liquid is a useful thermodynamic quantity precisely because it allows this kind of calculation in both directions: from vapor pressures to ΔHvap, or from a known ΔHvap to the vapor pressure (or boiling point) at some other temperature. Typical exercises run both ways. The vapor pressure of benzaldehyde is about 50 torr at 373 K, 111 torr at 393 K, 230 torr at 413 K, and 442 torr at 433 K; estimate ΔHvap from these data, then extrapolate the vapor pressure down to room temperature (23 °C). Or: a certain substance has a heat of vaporization of 66 kJ/mol; at what Kelvin temperature will its vapor pressure be 7.00 times higher than it was at 329 K? Useful reference values for such problems: the normal boiling point of benzene is 80.1 °C and its molar heat of vaporization is 30.7 kJ/mol, and the vapor pressure of acetone at 20 °C is 185.5 mm Hg.
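The benzaldehyde estimate can be done numerically; the sketch below is my own (the data points are the ones quoted above). It fits ln P against 1/T, whose slope is −ΔHvap/R.

```python
import numpy as np

R = 8.314                                   # J/(mol*K)
T = np.array([373.0, 393.0, 413.0, 433.0])  # K
P = np.array([50.0, 111.0, 230.0, 442.0])   # torr

slope, intercept = np.polyfit(1.0 / T, np.log(P), 1)
dH_vap = -slope * R
print(dH_vap / 1000)                        # roughly 49 kJ/mol

# Extrapolate to room temperature, 23 C = 296.15 K:
print(np.exp(intercept + slope / 296.15))   # vapor pressure in torr, ~1 torr
```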
The heat of vaporization of an individual compound is approximately related to its boiling point through an empirical value called Trouton's constant: the ratio of ΔHvap to the boiling temperature is roughly the same across many liquids. The constant is actually the entropy change for the vaporization process, and it is most often read as a measure of the entropy of the liquid state. Molecular properties matter as well: the heat of vaporization is affected by the type and strength of the intermolecular forces and by the molecular weight or size of the molecule, with larger molecules and stronger forces (hydrogen bonding above all) giving higher values. Because the heat capacity of the gas phase is usually smaller than that of the liquid phase, vaporization enthalpies increase with decreasing temperature; values at 298.15 K differ from those at the boiling or melting temperature by the difference in liquid- and gas-phase heat capacities, both of which can be modeled by group additivity, and simple linear correlations with appropriate constants can likewise be derived for vaporization enthalpies at the melting temperature. (In sorption studies a related quantity appears: the integral isosteric heat of sorption is the net isosteric heat plus the latent heat of vaporization of free water.) Experimental heat capacities for many liquids at 298.15 K are available, so there is a wealth of vaporization-enthalpy information in the literature covering a broad range of temperatures.

Two classroom measurements show how ΔHvap is obtained in practice. The first is a reduced-pressure boiling experiment, composed of a round-bottom boiling flask, a distillation condenser (or multiple condensers), a heat source (a burner or a heating mantle), a vacuum gauge (an open mercury manometer, though a Bourdon-type gauge will work and eliminates the mercury hazard), and an aspirator or trapped vacuum pump. Measuring the boiling temperature at several pressures gives a set of (T, P) points for a Clausius–Clapeyron fit; no expensive or special equipment is required in order to obtain relatively accurate results, and the procedure has been performed in freshman chemistry laboratories for many years with excellent results. The second is calorimetric: an experiment to measure the latent heat of vaporization of liquid nitrogen and the average constant-pressure heat capacities of several materials in the temperature range 77–295 K. A weighed amount of liquid nitrogen is added to water of known mass and temperature, and the temperature drop of the water once the nitrogen has boiled away gives nitrogen's heat of vaporization; along the way students learn the safe handling of cryogens that are routinely used in low-temperature physics.
Applications

Distillation is one of the most practical methods for the separation and purification of chemical compounds, and the heat of vaporization is the fundamental quantity that determines the experimental conditions at which an industrial or laboratory-scale distillation should be run. This is also why control of the heat input matters in a still: it sets the rate at which vapors travel up the column or into the still head.

In pressure-relief (fire) sizing, the fire heat absorption Q and the latent heat of vaporization Hvap give the relief load as W = Q/Hvap. One caveat raised in relief-device discussions: Hvap should not simply be taken as a fixed value at standard conditions, because the liquid composition changes as the lighter components boil off during the relief event, so the appropriate differential latent heat changes with it, and a once-through steady-state number read from a process simulator can give a much different result.

In engines, a fuel's heat of vaporization bears on knock. Ethanol and other high heat of vaporization (HoV) fuels result in substantial cooling of the fresh charge, especially in direct-injection (DI) engines, because of the large amount of heat absorbed by the vaporizing ethanol; combined with ethanol's inherently high chemical octane, this charge cooling makes it a very knock-resistant fuel, letting the engine run with less spark-timing retard and fuel enrichment.

The same physics underlies air-conditioning equipment and heat pumps, whose refrigerant tables list the heat of vaporization at the boiling point (BTU/lb) alongside the liquid specific heat (BTU/lb·°F), as well as humidification in economizer-type HVAC systems, where the enthalpy of moist air is the heat content of the air plus the latent heat of vaporization carried by its water vapor. In water systems the operating pressure must stay above the vaporization pressure of the liquid. In ocean and atmosphere studies, the latent heat flux of vaporization is one term in the net surface heat flux, the arithmetic sum of short-wave radiant flux, net long-wave radiant flux, sensible heat flux, vaporization latent heat flux, and advective heat flux. In medicine, vaporization can also mean the physical destruction of tissue by intense heat: photoselective vaporization of the prostate is widely used to treat benign prostatic hyperplasia and has pronounced advantages compared to traditional transurethral resection. And in herb vaporizers, the oven chamber heats the material enough to release an inhalable vapor without combustion; each vaporizer generates heat differently, most show slight temperature fluctuations during use, and readers regularly ask about the boiling points of CBD and THC, for instance at what temperature cannabis should be heated to decarboxylate CBDA into CBD (80 °C? 120 °C?).
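For concreteness, here is the relief-load arithmetic with illustrative numbers of my own:

```python
def relief_load(q_fire: float, h_vap: float) -> float:
    """Relief load W = Q / Hvap; with Q in W and Hvap in J/kg, W is in kg/s."""
    return q_fire / h_vap

# Example: 2 MW of fire heat input, latent heat of 300 kJ/kg.
print(relief_load(2.0e6, 3.0e5))   # about 6.7 kg/s of vapor to relieve
```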
Common questions

Is vaporization endothermic or exothermic? Endothermic: energy must be absorbed to overcome the intermolecular attractions. Melting is endothermic for the same reason, while condensation and freezing, which give that stored potential energy back (the heat of freezing is the thermal energy given off as a liquid freezes), are exothermic.

Will the temperature of a liquid rise above its boiling point? No; once the boiling point is reached, added energy transforms liquid into gas rather than raising the temperature.

How is a molar heat of vaporization computed from laboratory data? Divide the heat absorbed, in kilojoules, by the number of moles vaporized; the units of the answer indicate the process (for example, 3,425 kJ absorbed by 2.68 mol of a liquid at its boiling point gives 3,425 ÷ 2.68 kJ/mol). Per gram, the same bookkeeping is q = m × Hv, and multi-step problems use the heat of fusion (Hf = 334 J/g for water), the specific heat, and the heat of vaporization (Hvap) in combination with one another, as in the staged-heating example earlier in this article.
Calculate the amount of heat needed to melt 35. com/science/latent-heatLatent heat, energy absorbed or released by a substance during a change in its physical state (phase) that occurs without changing its temperature. Think of converting water (a liquid) of to steam (water vapor). Above the critical point, the liquid and vapor phases are indistinguishable, and the substance is called a supercritical fluid. heat of vaporisation. The vaporization of a substance below its normal boiling-point can also be effected by blowing in steam or some other vapour; this operation is termed "distillation with steam. Heat of vaporization is the energy needed for one gram of a liquid to vaporize (boil) without a change in pressure. htmlThe heat of vaporization of each individual compound is approximately related to the boiling point by an empirical value called Trouton's constant. Either it will cause a change in temperature or change of state. Transpiration is a form of evaporation vital to trees and other plants. Trouton's constant is the ratio of the enthalpy (heat) of vaporization of a substance to its boiling point (in K). What is the heat of vaporization (kJ/mol) if it takes 3,452 J of heat to completely vaporize 2. Which means when 40. Specific heat of Nitrogen is 1. Some of these early data for ammonia and steam, most notably the heat capacity and heat of vaporization data, still are …Dave Cushman in England says: “Some are concerned about Oxalic acid vaporisation producing a great deal of toxic vapour, but the point of the treatment is that the oxalic acid re-condenses within the hive very rapidly and coats everything in sight. Molar Heat (or Enthalpy) of Vaporization Calculator. 15 percent. specific heat b. Two practical applications of heats of vaporization are distillations and vapor pressure: Distillation is one of the most practical methods for separation and purification of chemical compounds. The same amount of heat is liberated when the vapor condenses into a liquid. If a process involves changing temperature and changing phases, the heats required for each process must be added together to get the total heat required. Taylor Ravi PrasherThe latent heat of vaporization is the water vapor specific enthalpy minus the liquid water specific enthalpy. This is the heat per kilogram needed to make the change between the liquid and gas phases, as when water boils or when steam condenses into water. , Enthalpies of Vaporization of Organic Compounds: A Critical , The heat capacities of ethyl and hexyl alcohols from 16°K to 298°K The enthalpy of vaporization, (symbol ∆Hvap) also known as the (latent) heat of vaporization or heat of evaporation, is the amount of energy (enthalpy) that must be added to a liquid substance, to transform a quantity of that substance into a gas. Taylor Ravi PrasherThe latent heat of vaporization is the amount of energy that is required to transfer a kg of liquid from the liquid state to the gaseous state. Vaporization • Vaporization of a liquid, through boiling or evaporation, cools the environment around the liquid as heat flows from the surroundings to the liquid. oC is changed into steam, the heat added (the latent heat of vaporization) is 540 calories for every gram of water. THEORY When heat is added slowly to a chunk of ice that is initially at a temperature below the freezing point (0 °C), it is found that the ice does not change to liquid water instantaneously when the temperature reaches the freezing point. 
Latent heat is the heat absorbed or released as the result of a phase change. In this video I will explain the concept of latent heats of fusion and vaporization and work out some problems involving heat of fusion and heat of vaporization of water. Latent Heat of Fusion of Nitrogen is 0. A simple, quick, and easy demonstration in which students measure the amount of energy gained by one sample of water as another sample is vaporized. The (latent) heat of vaporization (∆Hvap) also known as the enthalpy of vaporization or evaporation, is the amount of energy (enthalpy) that must be added to a liquid substance, to transform a given quantity of the substance into a gas. Properties of Water: 4. Faraday Soc. Figure $$\PageIndex{1}$$: Heat imparts energy into the system to overcome the intermolecular interactions that hold the liquid together to generate vapor. , 1963, 59, 1544. The data represent a small sub list of all available data in the Dortmund Data Bank. In the case of the latent heat of fusion it is the heat required to change a substance from a solid (ice) to a liquid (water) or vice versa while the latent heat of vaporization from a liquid (water) to a The heat applied to effect a change of state at the boiling point is the latent heat of vaporization. The latent heat of vaporization ΔH corresponds to the amount of energy that must be supplied to the system to convert a unit amount of substance from the liquid to the vapor phase under conditions of equilibrium between the two phases. heat of vaporization - heat absorbed by a unit mass of a material at its boiling point in order to convert the material into a gas at the same temperature heat of vaporisation heat of transformation , latent heat - heat absorbed or radiated during a change of phase at a constant temperature and pressure Latent heat of vaporization of fluids - alcohol, ether, nitrogen, water and more Sponsored Links The input energy required to change the state from liquid to vapor at constant temperature is called the latent heat of vaporization . If we measure the temperature of the substance which is initially solid as we heat it we produce a graph like Figure 1. Difference Between Latent Heat and Sensible Heat. The molar heat of vaporization for water is 40. Define "heat of vaporization". One important difference between evaporation and boiling is that boiling occurs below the surface in the bulk of the liquid and evaporation occurs at the surface of the …Vaporization: CBD & THC Boiling Points. Purpose of the experiment: Here is a simple experiment that can be performed in almost any laboratory. The heat of condensation is defined as the heat released when one mole of the This is a preview of subscription content, log in to check access. Oct 20, 2012 · Enthalpy of vaporization is the heat , calories/gram, absorbed during phase change from liquid to gas. The heat of vaporization is the amount of energy that is required to convert a substance from liquid to gaseous state without changing its temperature. is the heat required to change a gram of substance from a liquid to a gas . e. The heat of vaporization diminishes with increasing pressure, while the boiling point increases. Answer in units of atm Solution: 1) Let us use the Clausius-Clapeyron Equation: Heat and Enthalpy. htmVaporization is the process of a liquid being converted into a gas (or vapor). So the thing is, water starts melting at 0 0 C but to break these hydrogen bonds, it needs more heat. 
Beforepassingontothemore complete detailsof the experiments and the results, the method of computation ofthelatent heat ofvaporization from the observed Useful Constants: 1 calorie = 4. The latent heat of vaporization is what is commonly referred to as boiling. What is Vaporization? Vaporization is a spontaneous process which occurs at the surface of a liquid. The liquid has a bigger heat capacity. So, I want to say that the first way I did it is correct. h lg = 931 kJ/kg. When a liquid is converted to a gas, the process is called evaporation or boiling; when a solid is converted to a gas, the process is called sublimation. Many animals and plants release a thin layer of water (perspiration in mammals, transpiration on leaves) onto their surface, Latent Heat of Vaporization. 3 x 10 6 J kg-1 M = 1 kg. In case of liquid to gas phase change, this amount of energy is known as the enthalpy of vaporization, (symbol ∆Hvap; unit: J) also known as the (latent) heat of vaporization or heat of evaporation. 805 torr at 453k. The experimental data shown in these pages are freely available and have been published already in the DDB Explorer Edition. kasandbox. what is heat of vaporization Just as it takes a lot of heat to increase the temperature of liquid water, it also takes an unusual amount of heat to vaporize a given amount of water, because hydrogen bonds must be broken in order for the molecules to fly off as gas. show more Use the data to calculate the heat of vaporization deltaHvap of benzaldehyde. To get the most out of your vaporization experience, be sure to grind your material using an herb grinder to break your material down to create more surface area for the heat …The boiling point and the latent heat of vaporization are not the same at all. The variation of heat of vaporization with volatIlity indicated by the curve of figure 3 is so small that within the accuracy with which the correlation with volatility is valid, the average heat of vaporization for each class of gasolines may be taken as the heat of vaporization of any gasoline in that class. Heat of fusion is measure in the unit calories/gram or Joules/mole. 6 cal/gram. On continued heating, the water molecules gain enough energy to break the hydrogen bonds of the H 2 O which gives ice its rigidity. Instead, the ice melts slowly. The latent heat associated with melting a solid or freezing a liquid is called the heat of fusion; that associated with vaporizing a liquid or a solid or condensing a vapour is called the heat of vaporization. DE = energy to heat water to boiling point + energy to change state + energy to raise temperature of steamof the heat of vaporization of water, the results of which are sum­ marized in table 1. Heat of Vaporization. 186 J Latent heat of vaporization of water = 539 cal/g = 2256 kJ/kg Latent heat of fusion of water = 79. If the heat of vaporization was read directly from the properties of the process simulator, a much different result would be obtained because that built-in property is not a differential heat of vaporization but rather a steady-state calculation for a once-through vaporization process as the calculation below shows: Enthalpy of vaporization is the heat , calories/gram, absorbed during phase change from liquid to gas. com/propanevaporization. 6 calories, specific heat 0. the heat absorbed per unit mass of a given material at its boiling point that completely converts the material to a gas at the same temperature: equal to the heat of condensation. 
[Last Updated: 2/22/2007] Citing this page. Latent Heat of Vaporization. 451 The boiling points of a number of the non-metals are known, but in many cases the molecular formula at the boiling point is Heat of vaporization is related to enthalpy change, while dew point is related to free energy change, i. If the container is closed, this conversion will appear to stop when equilibrium is achieved. Even on a day when the temperature is well blow freezing, laundry will dry on a clothesline because ice evaporates (well, sublimes). evaporation is the change of liquid water to gaseous water at normal temperatures. A simple example would be a puddle of water being heated by the sun, would seem to disappear as the The quality of your herbs and the type of vaporizer you're using have a big effect on what temp setting you should use. southeastern. The phase change absorbs energy/ heat (endothermic) when going from solid to liquid or liquid to gas. Latent heat can be understood as heat energy in hidden form which is supplied or extracted to change the state of a substance without changing its temperature. It uses the wiki concept, so that anyone can make a contribution. In that case, it is referred to as the heat of vaporization, the term 'molar' being eliminated. It is also often referred to as the latent heat of vaporization ( LHv or Lv) The heat of vaporization can apply at much lower temperatures than the BP of water at atmospheric pressure. Specific heat of steam : Quantity of heat necessary to increase the temperature of one Celsius degree on a unit of mass of 1 kg of steam. The Reaction 1/2 N 2 + 1/2 O 2 = NO from Spectroscopic Data. Heat of Vaporization of Acetone The experimental data shown in these pages are freely available and have been published already in the DDB Explorer Edition . 6) Heat of vaporization D. Hypernyms ("heat of vaporization" is a kind of): heat of transformation; latent heat (heat absorbed or radiated during a change of phase at a constant temperature and pressure)The heat that is added to 1 g of a substance at the melting point to break the required bonds to complete the change of state from solid to liquid is the latent heat of melting. As you increase the vaporization temperature past certain thresholds the various terpenoids found in cannabis are released. Enthalpy of vaporization is an important property of any liquid, which states that when heat or enthalpy is given to any liquid substance at a certain pressure and temperature, the liquid changes into a gaseous form. vaporization Contemporary Examples of vaporization But after Bernie (A. Calculate Heat of Vaporization Using Clausius Clapeyron Equation. This correction at the highest temperature (285°K) of the measurements amounted to 0. Latent Heat of Fusion of Hydrogen is 0. In this case, the constant that is used is the heat of vaporization : which also has units of J/g. The potential energy stored in the interatomics forces between molecules needs to be overcome by the kinetic energy the motion of the particles before the substance can change phase. Sometimes the unit J/g is used
proofpile-shard-0030-372
{ "provenance": "003.jsonl.gz:373" }
# What are the various ways to remove chlorine/chloramine from tap water? OK, I actually know how to remove chlorine, but I'd like to have the pros and cons of each method spelled out. I will post an answer and mark it as a community wiki. Please edit it with your input. • anyone know if boiling water til comes to boil removes chlorine? Thanks very much – user8370 Oct 26 '14 at 17:39 • @mdma: The questions aren't ordered, first question can be the last, accordingly with its rank. – Luciano Feb 27 '15 at 19:32 • Do you have to leave it uncovered when you leave the drinking water out for 24 hours to remove the chlorine? – user12046 Apr 3 '15 at 3:28 • I have researched this extensively and am trying to stay away from using Camden tablets, not because they are poisonous, but because I want to be as organic as possible with my beers. I am still curious as to the micron filter size of the charcoal filter necessary. I am using a 5 micron charcoal flow through filter attached to my outside hose, but I am curious about the chloramine and if I need to reduce that size of micron filtration? – user12644 Sep 9 '15 at 17:51 There are several ways that you can remove chlorination from your tap water before you brew with it. This topic should help you to choose which one is right for you. ### Off-gassing If you water contains only chlorine and not chloramine, you can let it sit for 24 hours and the chlorine will dissipate into the environment. Pros: • Free Cons: • Takes a long time • Will not remove chloramine ### Boiling If you water contains only chlorine and not chloramine, you can drive the chlorine off by boiling the water for 15 minutes. Pros: • Faster than waiting for it to off-gas at room temperature Cons: • Requires a lot of energy and significant time to boil all of your water before you even start brewing. • Will not remove chloramine ### Filtration A charcoal filter is designed to strip your tap water of chlorine and chloramine, block carbon filters are necessary for effective removal. Pros: • Fast, nearly as fast as your free-running tap • Removes both chlorine and chloramine Cons: • Filters last roughly 2-6 months depending on water usage and cost between $5 and$30+ dollars to replace depending on the system. • Some charcoal filters need to have water running through them for about 5-10 minutes before being used when replaced. This clears out any charcoal dust that may have been generated during shipment. Chlorine and chloramine can be removed from your water by dissolving potassium metabisulfite into it. One campden tablet is enough to dechlorinate 20 gallons of tap water. Pros: • Very fast - as soon as the K-meta is dissolved in the water and stirred, the water is dechlorinated. • Removes both chlorine and chloramine Cons: • Powdered potassium metabisulfite smells harsh. If you catch a whiff of the powder when measuring it out, it stings the nostrils not unlike sex panther (Anchorman pop culture reference). • Excellent answer! – Denny Conn Feb 19 '11 at 16:54 • Are you quite sure you need to boil the water for 15mins? Im sure if you just boil your tap water in the normal way, for tea etc, then a rapid 3min boil is more than sufficient to rid the water completely of Chlorine! – user3475 May 24 '13 at 16:57 • Another con for K-meta is that some people have sulfite allergies. Residual sulfite will remain in your beer. -- That said, it's still my preferred method! 
– Tarah May 25 '13 at 13:52 • Other thing about sulfite is that (when used in wine) it gives head hache and a kind of tiredness feeling. I wonder if residuals in water for beer are enough to give the same result. – Paolo May 27 '13 at 14:40 • I heated the water (about 1 gallon) and measured its chlorine for each 10C (20F), and when reached about 60C (140F) the chlorine disappeared, before boiling. Just an empirical result. – Luciano Dec 20 '14 at 21:33 According to the New York City water report (page 20), all you need to do is transfer the water between two vessels 10 times to remove chlorine. I have been using this method for all of my homebrews by filling a 12 quart pot with tap water and transferring back-and-forth between a second 12 quart pot, lifting the pot as I pour as high above my head as I can to maximize splashing (to comfortably tilt the heavy pot as you empty it, lift with the heel of your palms underneath the handles, rather than grasping the handles from above with your fingers curled underneath). Place the receiving pot in the sink in case you miss. I then store the water in gallon jugs and repeat the process until I have the desired amount of strike and sparge water collected for my batch. • This has the advantage of being a good show for any on-lookers! Like some spectacular voodoo blessing at the beginning of your beer conjuring efforts. ...always leave them guessing! – Henry Taylor Apr 3 '15 at 12:04 Bought a zero water filter pitcher for about \$30 or so.works very well, just takes a long time to fill. I was told to run the water through my Britta pitcher 2 times and let it sit out. I also have a water purifier for all water and then use water from refrigerator which is also purified. Then I let it sit out for 24 hours. I hope this is sufficient. My personal experience is that the best way to de-chlorinate water is by stripping with fresh air .In the proposed arrangement, water shall flow from the top of the stripper vessel equipped with 06 sieve trays while pressurized air would be blown from the bottom. At the top a vent should be available to vent Cl2 and air.By this method we can de-chlorinate as well as oxygenate our portable and for fish and other use. We have been using the zero water pitcher for 2 years. It came with a 'water quality tester' which tells you the total dissolved solids (inorganic materials and substances..ick) which are commonly found in drinking water. Chlorine would be included, but not specified by itself. My problem is de-chlorinating water for my garden/plants. • This answer doesn't really address the topic asked about - specifically, how to remove chlorine. – BrianV Apr 8 '14 at 15:23 I dissolve a few grains of vitamin C powder in my bath water and that works. Not so easy for washing up though as problem with rinsing! My municipal water supply is heavily chlorinated. Some mornings, the smell is exceedingly prominent after flushing the toilet. I purchased a tablet splitter from a local pharmacy, and use a quarter of a Camden tablet for each 5 gallon brew. The result is no sign of either chlorine or chloramine contamination.
proofpile-shard-0030-373
{ "provenance": "003.jsonl.gz:374" }
# Difference in spectrum between the damped harmonic oscillator and a HO in thermal equilibrium with a bath? I'm considering a damped, NOT driven harmonic oscillator, more specific an exponentially decaying oscillation. I would like to know the power spectral density of the signal. What I did so far was taking the signal x(t)= $\exp(-\Gamma^{-1}|t|)\cos \omega t$ and taking the fourier transform of this signal. This gives me the following spectrum (i.e. the signal x(t) displayed in frequency space): two lorentzian peaks of width $\Gamma$, one centered at $\omega$ and one at $-\omega$. Two problems with this: 1. Maybe it is helpful to know what spectrum I expected to get: a lorentzian ONLY around $-\omega$. 2. If I would do this thought experiment the other way, i.e. see a spectrum conisting of two lorentzian peaks of width $\Gamma$, one centered at $\omega$ and one at $-\omega$, I would interpretate this as a harmonic oscillator that is in thermal equilibrium and is certainly NOT damped (i.e. having an amplitude decaying to zero on a timescale relating to $\Gamma^{-1}$). I would interprete the lorentzian around $-\omega$ as dissipation of energy, BUT I would interpret the lorentzian around $\omega$ as the reverse process, namely (re)absorption of energy: since the spectrum is symmetric for $\omega \to - \omega$, I would say indeed there are fluctuations, i.e. heat or energy is exchanged between the oscillator and the bath BUT in a noisy way: the signal decays a bit (dissipates energy to the bath), then regains amplitude/energy (absorbs energy from the bath) and this goes on in a fluctuating way. 3. The signal x(t)= $\exp(-\Gamma^{-1}|t|)\cos \omega t$ is as you may have noticed not the exact description of an exponentially decaying oscillation. They only correspond for times t>0. I used this description because I do not know how to take the fourier transform of $x(t)= \exp(-\Gamma^{-1}t)\cos \omega t$. Not sure if this is relevant or not. P.S.: I stumbled upon this problem while reading into quantum optics (or rather cavity optomechanics), where these spectra are ubiquitous but never really justified, and -as it seems to me- they appear in contradictory contexts. I have in mind specifically the review paper 'Cavity Optomechanics' by Aspelmeyer (2014).
proofpile-shard-0030-374
{ "provenance": "003.jsonl.gz:375" }
# Traveling to other worlds 1. Nov 12, 2007 ### TomMac321 you will have to excuse my spelliing im 14 and am dislecsic but frome what i understand so was Einstein. ' i have been doing indipendent reserch in the phisics area. and my phisics teacher hasent been much help he is a real text book guy i no there are lots of theres on traviling to far away galicsys. and i am wondering if enyone knows if you could travil faser that the speed of light or even faster than that to make it posible to travil bitween planits i have already been looking at the wormhole there bt even antmatter couldent perduse anuf negitive energy to keep it stable and thats just if it exsists i the tere of binding space looks interesting but some pinsilpushing @ my schoolse sci department told me that would be inposible if anyone has any other theres or anything that could help shed light on this that would be grate 2. Nov 12, 2007 ### wysard There is no way to travel faster than the speed of light and stay in this time-space continuum. Period. That said, starting I believe with Dirac there have been a number of theories on how to warp space time, or to sequester a ship outside space time in it's own little bubble for instance. But while the math may work a quick look at the numbers shows that you get nothing for free. In fact the energy requirements for most of them are simply monstrous. And even if we could create and contain that much energy, we still have no idea how to engineer the machinery to channel and control it. Sorry to burst your bubble, but unless you are talking just about the theoretical or science fiction realms the only way we actually know to get to any other solar system, let alone galaxy would be by building generation ships. And it's likely to stay that way until some smart cookie comes up with a better method of propulsion than chucking your fuel overboard. 3. Nov 12, 2007 ### TomMac321 ok so as far ass travil at the spped of light gose thats a no and even if you could i have been thinking and your mass would prob increse resolting in painful and comfusing death but say the worhole there was true you wouldent actualy any faster just the distance you have to travi would change 1 do worholes exsist / oveusly an apinon question 2 is there a way to keep it stable 3 is there any other gusesis as to how to travil among the stars 4. Nov 12, 2007 ### TomMac321 space travil say the worhole there was true you wouldent actualy any faster just the distance you have to travi would change 1 do worholes exsist / oveusly an apinon question 2 is there a way to keep it stable 3 is there any other gusesis as to how to travil among the stars 5. Nov 12, 2007 ### ranger 6. Nov 12, 2007 ### TomMac321 wurmholes ok so as far as travil at the speed of light gose thats a no and even if you could i have been thinking and your mass would prob increse resolting in painful and comfusing death but say the worhole there was true you wouldent actualy any faster just the distance you have to travi would change 1 do worholes exsist / oveusly an apinon question 2 is there a way to keep it stable 3 is there any other gusesis as to how to travil among the stars 7. Nov 12, 2007 ### pixel01 First, you should check the spelling. 8. Nov 12, 2007 ### TomMac321 im 14 and dislecsic i cant spell vary good 9. Nov 12, 2007 ### cristo Staff Emeritus Welcome to the forums, Tom. Whilst it is possible to make out what you are trying to say, it is very difficult to read. 
I understand that spelling is not everyone's strong point, but you should get into the habit of trying your hardest to be comprehensible. Perhaps you could download the web-browser Mozilla Firefox. It's free, and contains a built in spell checker. As for your questions, they are rather speculative, especially the last one, which is pretty impossible to answer! Wormholes are theoretical objects that do crop up in some spacetimes of general relativity, but one cannot say whether they exist in nature or not. 10. Nov 12, 2007 ### jdogg0075 Hey thats cool and all. My chemistry teacher is dyslexic. Firefox does help i has a built in feature like some one said i forgot who. As far as my ideas on the topic. In my heart and the back of my brain i constantly want to beak the limit of light speed. But so far i will have to accept the theory because i do not have a better one to input. But i will tell you one thing. Don't ever accept a theory to be true. To many people assume theory's to be true and this will be the downfall of physics. I'm not saying oh newton is totally wrong or any other theory. But I encourage you to explore these ideas even more, test them try them. Theory's are just that theories. You have to test them constantly for them to be true. Sorry for the blabbering. Any ways my ideas on these topics. 1. Achieving speed of light is possible to me. No mass is pure energy. Mass is energy. Keeping energy in an intelligent life form is the problem. Solve that and you win. But cool thing about this idea is. Not only time would stop. But the world would be a totally new one where velocity doesent matter. No time no velocity. 2. Antimatter exits. It is constantly being made in the vacuum and then being destroyed. This is because you may not notice it but it always pops in and out of pairs. Always comes with one anti particle and one particle. Then before you realize it its gone. But where did all the antimatter go that corresponds with all the matter in the universe? I leave on that quote. I hope you didn't mind me talking. -Justin 11. Nov 12, 2007 ### jdogg0075 Hi Chris nice to meet you I'm Justin, By people i mean physicists and by physicists i mean ANY physicists, i think Chris you can agree absolutely that people are wrong. Theory's have been accepted that are wrong. Throughout history. I think it is narrow minded for someone to accept someone else's idea with out testing it. And by the "downfall of physics" i should be more specific. Im talking about the new theory's. Ones above and beyond classical mathematics. This of course is my personal opinion but im sure you knew that. Yes i will especially for number 1. But i have to disagree with my number two. If you search antimatter particle pair. Or Casimir effect. To say number two is "unsupported by theoretical argument or experimental evidence. " thats wrong. I understand this, i seriously thought i was talking about physics for most of my post. Hmm maybe not. But there are also forums around the world that can teach dyslexic people how to improve their habits, no offese to you Tom. But i will stick to physics MORE in the future. Thanks. Just trying to elaborate on what i said for ya' Chris. Last edited: Nov 12, 2007 12. Nov 12, 2007 ### jdogg0075 I fully understand. Im being a little misunderstood. I simply was saying antimatter exists. I had a major headache from practice when i wrote this. I probably messed up. And sarcasm is hard to understand over the internet. So i hope Chris didn't take it the wrong way. 13. 
Nov 12, 2007 ### DaveC426913 I like that phrase. A sobering counterpoint to the "anything is possible don't believe scientists" philosophy that often pervades woo-woo-ist speculation.* * not meant to reflect upon this threead at all, just, I like that comment. 14. Nov 12, 2007
proofpile-shard-0030-375
{ "provenance": "003.jsonl.gz:376" }
RUS  ENG JOURNALS   PEOPLE   ORGANISATIONS   CONFERENCES   SEMINARS   VIDEO LIBRARY   PACKAGE AMSBIB General information Latest issue Archive Impact factor Search papers Search references RSS Latest issue Current issues Archive issues What is RSS Regul. Chaotic Dyn.: Year: Volume: Issue: Page: Find Regul. Chaotic Dyn., 2016, Volume 21, Issue 1, Pages 1–17 (Mi rcd64) Topological Analysis Corresponding to the Borisov–Mamaev–Sokolov Integrable System on the Lie Algebra $so(4)$ Department of Fundamental Sciences, Azarbaijan Shahid Madani University, 35 Km Tabriz-Maragheh Road, Tabriz, Iran Abstract: In 2001, A. V. Borisov, I. S. Mamaev, and V. V. Sokolov discovered a new integrable case on the Lie algebra $so(4)$. This is a Hamiltonian system with two degrees of freedom, where both the Hamiltonian and the additional integral are homogenous polynomials of degrees 2 and 4, respectively. In this paper, the topology of isoenergy surfaces for the integrable case under consideration on the Lie algebra $so(4)$ and the critical points of the Hamiltonian under consideration for different values of parameters are described and the bifurcation values of the Hamiltonian are constructed. Also, a description of bifurcation complexes and typical forms of the bifurcation diagram of the system are presented. Keywords: topology, integrable Hamiltonian systems, isoenergy surfaces, critical set, bifurcation diagram, bifurcation complex, periodic trajectory DOI: https://doi.org/10.1134/S1560354716010019 References: PDF file   HTML file Bibliographic databases: MSC: 37Jxx, 70H06, 70E50, 70G40, 70H14 Accepted:20.12.2015 Language: Citation: Rasoul Akbarzadeh, “Topological Analysis Corresponding to the Borisov–Mamaev–Sokolov Integrable System on the Lie Algebra $so(4)$”, Regul. Chaotic Dyn., 21:1 (2016), 1–17 Citation in format AMSBIB \Bibitem{Akb16} \by Rasoul Akbarzadeh \paper Topological Analysis Corresponding to the Borisov–Mamaev–Sokolov Integrable System on the Lie Algebra $so(4)$ \jour Regul. Chaotic Dyn. \yr 2016 \vol 21 \issue 1 \pages 1--17 \mathnet{http://mi.mathnet.ru/rcd64} \crossref{https://doi.org/10.1134/S1560354716010019} \mathscinet{http://www.ams.org/mathscinet-getitem?mr=3457073} \zmath{https://zbmath.org/?q=an:06580139} \isi{http://gateway.isiknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&DestLinkType=FullRecord&DestApp=ALL_WOS&KeyUT=000373028300001} \scopus{http://www.scopus.com/record/display.url?origin=inward&eid=2-s2.0-84957586219} • http://mi.mathnet.ru/eng/rcd64 • http://mi.mathnet.ru/eng/rcd/v21/i1/p1 SHARE: Citing articles on Google Scholar: Russian citations, English citations Related articles on Google Scholar: Russian articles, English articles This publication is cited in the following articles: 1. Pavel E. Ryabov, Andrej A. Oshemkov, Sergei V. Sokolov, “The Integrable Case of Adler – van Moerbeke. Discriminant Set and Bifurcation Diagram”, Regul. Chaotic Dyn., 21:5 (2016), 581–592 2. A. A. Oshemkov, P. E. Ryabov, S. V. Sokolov, “Explicit determination of certain periodic motions of a generalized two-field gyrostat”, Russ. J. Math. Phys., 24:4 (2017), 517–525 3. P. E. Ryabov, “Explicit integration of the system of invariant relations for the case of M. Adler and P. van Moerbeke”, Dokl. Math., 95:1 (2017), 17–20 4. R. Akbarzadeh, “The topology of isoenergetic surfaces for the Borisov–Mamaev–Sokolov integrable case on the Lie algebra $so(3,1)$”, Theoret. and Math. Phys., 197:3 (2018), 1727–1736 • Number of views: This page: 166 References: 20
proofpile-shard-0030-376
{ "provenance": "003.jsonl.gz:377" }
# Preprocessing

Written by Luke Chang

Being able to study brain activity associated with cognitive processes in humans is an amazing achievement. However, as we have noted throughout this course, there is an extraordinary amount of noise and a very low level of signal, which makes it difficult to make inferences about the function of the brain using BOLD imaging. A critical step before we can perform any analyses is to do our best to remove as much of the noise as possible. The series of steps to remove noise comprise our neuroimaging data preprocessing pipeline.

See slides on our preprocessing lecture here.

In this lab, we will go over the basics of preprocessing fMRI data using the fmriprep preprocessing pipeline. We will cover:

• Image transformations
• Head motion correction
• Spatial Normalization
• Spatial Smoothing

There are other preprocessing steps that are also common but not necessarily performed by all labs, such as slice timing and distortion correction. We will not be discussing these in depth outside of the videos.

Let's start with watching a short video by Martin Lindquist to get a general overview of the main steps of preprocessing and the basics of how to transform images and register them to other images.

from IPython.display import YouTubeVideo

YouTubeVideo('Qc3rRaJWOc4')

## Image Transformations

Ok, now let's dive deeper into how we can transform images into different spaces using linear transformations. Recall from our introduction to neuroimaging data lab that neuroimaging data is typically stored in a nifti container, which contains a 3D or 4D matrix of the voxel intensities and also an affine matrix, which provides instructions for how to transform the matrix into another space. Let's create an interactive plot using ipywidgets so that we can get an intuition for how these affine matrices can be used to transform a 3D image. We can move the sliders to play with applying rigid body transforms to a 3D cube. A rigid body transformation has 6 parameters: translation in x, y, & z, and rotation around each of these axes. The key thing to remember is that a rigid body transform doesn't allow the image to be fundamentally changed. A full 12 parameter affine transformation adds an additional 3 parameters each for scaling and shearing, which can change the shape of the cube.

Try moving some of the sliders around. Note that the viewer is a little slow. Each time you move a slider it is applying an affine transformation to the matrix and re-plotting. Translation moves the cube in x, y, and z dimensions. We can also rotate the cube around the x, y, and z axes, where the origin is the center point. Continuing to rotate around the point will definitely lead to the cube leaving the current field of view, but it will come back if you keep rotating it. You'll notice that every time we change a slider and apply a new affine transformation, the cube gets a little distorted with aliasing. Often we need to interpolate the image after applying a transformation to fill in the gaps. It is important to keep in mind that every time we apply an affine transformation to our images, the result is not a perfect representation of the original data. Additional steps like reslicing, interpolation, and spatial smoothing can help with this.
%matplotlib inline

from mpl_toolkits import mplot3d
import numpy as np
import matplotlib.pyplot as plt
from nibabel.affines import apply_affine, from_matvec, to_matvec
from scipy.ndimage import affine_transform, map_coordinates
import nibabel as nib
from ipywidgets import interact, FloatSlider

def plot_rigid_body_transformation(trans_x=0, trans_y=0, trans_z=0, rot_x=0, rot_y=0, rot_z=0):
    '''This function creates an interactive demo to illustrate the parameters of a rigid body transformation'''

    fov = 30
    radius = 10
    x, y, z = np.indices((fov, fov, fov))
    cube = ((x > fov//2 - radius//2) & (x < fov//2 + radius//2)) & ((y > fov//2 - radius//2) & (y < fov//2 + radius//2)) & ((z > fov//2 - radius//2) & (z < fov//2 + radius//2))
    cube = cube.astype(int)

    vec = np.array([trans_x, trans_y, trans_z])

    rot_x = np.radians(rot_x)
    rot_y = np.radians(rot_y)
    rot_z = np.radians(rot_z)

    rot_axis1 = np.array([[1, 0, 0],
                          [0, np.cos(rot_x), -np.sin(rot_x)],
                          [0, np.sin(rot_x), np.cos(rot_x)]])

    rot_axis2 = np.array([[np.cos(rot_y), 0, np.sin(rot_y)],
                          [0, 1, 0],
                          [-np.sin(rot_y), 0, np.cos(rot_y)]])

    rot_axis3 = np.array([[np.cos(rot_z), -np.sin(rot_z), 0],
                          [np.sin(rot_z), np.cos(rot_z), 0],
                          [0, 0, 1]])

    rotation = rot_axis1 @ rot_axis2 @ rot_axis3
    affine = from_matvec(rotation, vec)

    i_coords, j_coords, k_coords = np.meshgrid(range(cube.shape[0]), range(cube.shape[1]), range(cube.shape[2]), indexing='ij')
    coordinate_grid = np.array([i_coords, j_coords, k_coords])
    coords_last = coordinate_grid.transpose(1, 2, 3, 0)
    transformed = apply_affine(affine, coords_last)
    coords_first = transformed.transpose(3, 0, 1, 2)

    fig = plt.figure(figsize=(15, 12))
    ax = plt.axes(projection='3d')
    ax.voxels(map_coordinates(cube, coords_first))
    ax.set_xlabel('x', fontsize=16)
    ax.set_ylabel('y', fontsize=16)
    ax.set_zlabel('z', fontsize=16)

interact(plot_rigid_body_transformation,
         trans_x=FloatSlider(value=0, min=-10, max=10, step=1),
         trans_y=FloatSlider(value=0, min=-10, max=10, step=1),
         trans_z=FloatSlider(value=0, min=-10, max=10, step=1),
         rot_x=FloatSlider(value=0, min=0, max=360, step=15),
         rot_y=FloatSlider(value=0, min=0, max=360, step=15),
         rot_z=FloatSlider(value=0, min=0, max=360, step=15))

Ok, so what's going on behind the sliders? Let's borrow some of the material available in the nibabel documentation to understand how these transformations work.

The affine matrix is a way to transform images between spaces. In general, we have some voxel space coordinate $$(i, j, k)$$, and we want to figure out how to remap this into a reference space coordinate $$(x, y, z)$$.

It can be useful to think of this as a coordinate transform function $$f$$ that accepts a voxel coordinate in the original space as an input and returns a coordinate in the output reference space:

$(x, y, z) = f(i, j, k)$

In theory $$f$$ could be a complicated non-linear function, but in practice we typically assume that the relationship between $$(i, j, k)$$ and $$(x, y, z)$$ is linear (or affine), and can be encoded with linear affine transformations comprising translations, rotations, and zooms.

Scaling (zooming) in three dimensions can be represented by a diagonal 3 by 3 matrix.
Here's how to zoom the first dimension by $$p$$, the second by $$q$$ and the third by $$r$$ units:

$\begin{split} \begin{bmatrix} x\\ y\\ z \end{bmatrix} \quad = \quad \begin{bmatrix} p\,i\\ q\,j\\ r\,k \end{bmatrix} \quad = \quad \begin{bmatrix} p & 0 & 0 \\ 0 & q & 0 \\ 0 & 0 & r \end{bmatrix} \quad \begin{bmatrix} i\\ j\\ k \end{bmatrix} \end{split}$

A rotation in three dimensions can be represented as a 3 by 3 rotation matrix (see the wikipedia rotation matrix page). For example, here is a rotation by $$\theta$$ radians around the third array axis:

$\begin{split} \begin{bmatrix} x \\ y \\ z \end{bmatrix} \quad = \quad \begin{bmatrix} \cos(\theta) & -\sin(\theta) & 0 \\ \sin(\theta) & \cos(\theta) & 0 \\ 0 & 0 & 1 \\ \end{bmatrix} \quad \begin{bmatrix} i \\ j \\ k \end{bmatrix} \end{split}$

This is a rotation by $$\phi$$ radians around the second array axis:

$\begin{split} \begin{bmatrix} x \\ y \\ z \\ \end{bmatrix} \quad = \quad \begin{bmatrix} \cos(\phi) & 0 & \sin(\phi) \\ 0 & 1 & 0 \\ -\sin(\phi) & 0 & \cos(\phi) \\ \end{bmatrix} \quad \begin{bmatrix} i \\ j \\ k \end{bmatrix} \end{split}$

A rotation of $$\gamma$$ radians around the first array axis:

$\begin{split} \begin{bmatrix} x\\ y\\ z \end{bmatrix} \quad = \quad \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos(\gamma) & -\sin(\gamma) \\ 0 & \sin(\gamma) & \cos(\gamma) \\ \end{bmatrix} \quad \begin{bmatrix} i \\ j \\ k \end{bmatrix} \end{split}$

Zoom and rotation matrices can be combined by matrix multiplication. Here's a scaling of $$p, q, r$$ units followed by a rotation of $$\theta$$ radians around the third axis followed by a rotation of $$\phi$$ radians around the second axis:

$\begin{split} \begin{bmatrix} x \\ y \\ z \end{bmatrix} \quad = \quad \begin{bmatrix} \cos(\phi) & 0 & \sin(\phi) \\ 0 & 1 & 0 \\ -\sin(\phi) & 0 & \cos(\phi) \\ \end{bmatrix} \quad \begin{bmatrix} \cos(\theta) & -\sin(\theta) & 0 \\ \sin(\theta) & \cos(\theta) & 0 \\ 0 & 0 & 1 \\ \end{bmatrix} \quad \begin{bmatrix} p & 0 & 0 \\ 0 & q & 0 \\ 0 & 0 & r \\ \end{bmatrix} \quad \begin{bmatrix} i\\ j\\ k\\ \end{bmatrix} \end{split}$

This can also be written:

$\begin{split} M \quad = \quad \begin{bmatrix} \cos(\phi) & 0 & \sin(\phi) \\ 0 & 1 & 0 \\ -\sin(\phi) & 0 & \cos(\phi) \\ \end{bmatrix} \quad \begin{bmatrix} \cos(\theta) & -\sin(\theta) & 0 \\ \sin(\theta) & \cos(\theta) & 0 \\ 0 & 0 & 1 \\ \end{bmatrix} \quad \begin{bmatrix} p & 0 & 0 \\ 0 & q & 0 \\ 0 & 0 & r \\ \end{bmatrix} \end{split}$

$\begin{split} \begin{bmatrix} x \\ y \\ z \end{bmatrix} \quad = \quad M \quad \begin{bmatrix} i \\ j \\ k \end{bmatrix} \end{split}$

This might be obvious because the matrix multiplication is the result of applying each transformation in turn on the coordinates output from the previous transformation. Combining the transformations into a single matrix $$M$$ works because matrix multiplication is associative – $$ABCD = (ABC)D$$.

A translation in three dimensions can be represented as a length 3 vector to be added to the length 3 coordinate.
For example, a translation of $$a$$ units on the first axis, $$b$$ on the second and $$c$$ on the third might be written as:

$\begin{split} \begin{bmatrix} x \\ y \\ z \end{bmatrix} \quad = \quad \begin{bmatrix} i \\ j \\ k \end{bmatrix} \quad + \quad \begin{bmatrix} a \\ b \\ c \end{bmatrix} \end{split}$

We can write our function $$f$$ as a combination of matrix multiplication by some 3 by 3 rotation / zoom matrix $$M$$ followed by addition of a 3 by 1 translation vector $$(a, b, c)$$:

$\begin{split} \begin{bmatrix} x \\ y \\ z \end{bmatrix} \quad = \quad M \quad \begin{bmatrix} i \\ j \\ k \end{bmatrix} \quad + \quad \begin{bmatrix} a \\ b \\ c \end{bmatrix} \end{split}$

We could record the parameters necessary for $$f$$ as the 3 by 3 matrix $$M$$ and the 3 by 1 vector $$(a, b, c)$$. In fact, the 4 by 4 image affine array includes this exact information. If $$m_{i,j}$$ is the value in row $$i$$ column $$j$$ of matrix $$M$$, then the image affine matrix $$A$$ is:

$\begin{split} A \quad = \quad \begin{bmatrix} m_{1,1} & m_{1,2} & m_{1,3} & a \\ m_{2,1} & m_{2,2} & m_{2,3} & b \\ m_{3,1} & m_{3,2} & m_{3,3} & c \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} \end{split}$

Why the extra row of $$[0, 0, 0, 1]$$? We need this row because we have rephrased the combination of rotations / zooms and translations as a transformation in homogeneous coordinates (see wikipedia homogeneous coordinates). This is a trick that allows us to put the translation part into the same matrix as the rotations / zooms, so that both translations and rotations / zooms can be applied by matrix multiplication. In order to make this work, we have to add an extra 1 to our input and output coordinate vectors:

$\begin{split} \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} \quad = \quad \begin{bmatrix} m_{1,1} & m_{1,2} & m_{1,3} & a \\ m_{2,1} & m_{2,2} & m_{2,3} & b \\ m_{3,1} & m_{3,2} & m_{3,3} & c \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} \quad \begin{bmatrix} i \\ j \\ k \\ 1 \end{bmatrix} \end{split}$

This results in the same transformation as applying $$M$$ and $$(a, b, c)$$ separately. One advantage of encoding transformations this way is that we can combine two sets of rotations, zooms, and translations by matrix multiplication of the two corresponding affine matrices.

In practice, although it is common to combine 3D transformations using 4 x 4 affine matrices, we usually apply the transformations by breaking up the affine matrix into its component $$M$$ matrix and $$(a, b, c)$$ vector and doing:

$\begin{split} \begin{bmatrix} x \\ y \\ z \end{bmatrix} \quad = \quad M \quad \begin{bmatrix} i \\ j \\ k \end{bmatrix} \quad + \quad \begin{bmatrix} a \\ b \\ c \end{bmatrix} \end{split}$

As long as the last row of the 4 by 4 is $$[0, 0, 0, 1]$$, applying the transformations in this way is mathematically the same as using the full 4 by 4 form, without the inconvenience of adding the extra 1 to our input and output vectors.

You can think of the image affine as a combination of a series of transformations to go from voxel coordinates to mm coordinates in terms of the magnet isocenter. In the interactive version of this notebook, a figure shows the EPI affine broken down into this series of transformations, with the results displayed on the localizer image. Applying different affine transformations allows us to rotate, reflect, scale, and shear the image.
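To make the homogeneous coordinates trick concrete, here is a minimal sketch that builds a 4 by 4 affine from a rotation matrix and a translation vector using nibabel's helper functions (already imported above), and verifies that three ways of applying it give the same answer. The particular angle, translation, and voxel coordinate values are arbitrary choices for illustration.

theta = np.radians(30)  # rotation around the third (z) axis
M = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta), np.cos(theta), 0],
              [0, 0, 1]])
abc = np.array([10, -5, 3])  # translation in x, y, and z

A = from_matvec(M, abc)  # pack M and (a, b, c) into a 4 by 4 affine

ijk = np.array([2, 4, 6])  # an example voxel coordinate

# Method 1: homogeneous coordinates, using the full 4 by 4 matrix
xyz_homogeneous = (A @ np.append(ijk, 1))[:3]

# Method 2: 3 by 3 matrix multiplication followed by adding the translation
xyz_separate = M @ ijk + abc

# Method 3: nibabel's helper, which splits the affine into M and (a, b, c) for us
xyz_nibabel = apply_affine(A, ijk)

print(xyz_homogeneous, xyz_separate, xyz_nibabel)

All three print the same $$(x, y, z)$$ coordinate. And because the transformations are encoded as matrices, chaining two transformations is just the matrix product of their affines (e.g., A2 @ A1 applies A1 first).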
## Cost Functions

Now that we have learned how affine transformations can be applied to transform images into different spaces, how can we use this to register one brain image to another image?

The key is to identify a way to quantify how aligned the two images are to each other. Our visual systems are very good at identifying when two images are aligned; however, we need to create an alignment measure. These measures are often called cost functions. There are many different types of cost functions depending on the types of images that are being aligned. For example, a common cost function is called minimizing the sum of the squared differences and is similar to how regression lines are fit to minimize deviations from the observed data. This measure works best if the images are of the same type and have roughly equivalent signal intensities. Let's create another interactive plot and find the optimal X & Y translation parameters that minimize the difference between a two-dimensional target image and a reference image.

def plot_affine_cost(trans_x=0, trans_y=0):
    '''This function creates an interactive demo to highlight how a cost function works in image registration.'''

    fov = 30
    radius = 15
    x, y = np.indices((fov, fov))
    square1 = (x < radius-2) & (y < radius-2)
    square2 = ((x > fov//2 - radius//2) & (x < fov//2 + radius//2)) & ((y > fov//2 - radius//2) & (y < fov//2 + radius//2))
    square1 = square1.astype(float)
    square2 = square2.astype(float)

    vec = np.array([trans_y, trans_x])
    affine = from_matvec(np.eye(2), vec)

    i_coords, j_coords = np.meshgrid(range(square1.shape[0]), range(square1.shape[1]), indexing='ij')
    coordinate_grid = np.array([i_coords, j_coords])
    coords_last = coordinate_grid.transpose(1, 2, 0)
    transformed = apply_affine(affine, coords_last)
    coords_first = transformed.transpose(2, 0, 1)
    transformed_square = map_coordinates(square1, coords_first)

    f, a = plt.subplots(ncols=3, figsize=(15, 5))
    a[0].imshow(transformed_square)
    a[0].set_xlabel('x', fontsize=16)
    a[0].set_ylabel('y', fontsize=16)
    a[0].set_title('Target Image', fontsize=18)

    a[1].imshow(square2)
    a[1].set_xlabel('x', fontsize=16)
    a[1].set_ylabel('y', fontsize=16)
    a[1].set_title('Reference Image', fontsize=18)

    sse = np.sum((transformed_square - square2)**2)
    a[2].bar(0, sse)
    a[2].set_ylim([0, 350])
    a[2].set_ylabel('SSE', fontsize=18)
    a[2].set_xlabel('Cost Function', fontsize=18)
    a[2].set_xticks([])
    a[2].set_title(f'Parameters: ({int(trans_x)},{int(trans_y)})', fontsize=20)
    plt.tight_layout()

interact(plot_affine_cost,
         trans_x=FloatSlider(value=0, min=-30, max=0, step=1),
         trans_y=FloatSlider(value=0, min=-30, max=0, step=1))

You probably had to move the sliders back and forth until you were able to reduce the sum of squared error to zero. This cost function increases sharply the further the target image is from alignment with the reference. The process of minimizing (or sometimes maximizing) cost functions to identify the best fitting parameters is called optimization, and is a concept that is core to fitting models to data across many different disciplines; a simple automated version of this search is sketched below, after the table.

| Cost Function | Use Case | Example |
| --- | --- | --- |
| Sum of Squared Error | Images of same modality and scaling | Two T2* images |
| Normalized correlation | Images of same modality | Two T1 images |
| Correlation ratio | Any modality | T1 and FLAIR |
| Mutual information or normalized mutual information | Any modality | T1 and CT |
| Boundary Based Registration | Images with some contrast across boundaries of interest | EPI and T1 |
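The interactive demo above asks you to be the optimizer. To see what an automated optimizer is doing, here is a deliberately naive sketch that performs a brute-force grid search over integer translations of the same two squares used in the demo, computing the sum of squared error for each candidate and keeping the best one. Real registration packages search continuous parameter spaces with far more efficient optimizers, but the logic of minimizing a cost function is the same.

fov, radius = 30, 15
x, y = np.indices((fov, fov))
square1 = ((x < radius - 2) & (y < radius - 2)).astype(float)  # target square in the corner
square2 = (((x > fov//2 - radius//2) & (x < fov//2 + radius//2)) &
           ((y > fov//2 - radius//2) & (y < fov//2 + radius//2))).astype(float)  # reference square in the center

# Brute-force search: try every integer translation and keep the lowest cost
best_shift, best_sse = None, np.inf
for shift_x in range(fov):
    for shift_y in range(fov):
        candidate = np.roll(square1, (shift_x, shift_y), axis=(0, 1))
        sse = np.sum((candidate - square2)**2)
        if sse < best_sse:
            best_shift, best_sse = (shift_x, shift_y), sse

print(f'Best translation: {best_shift}, SSE: {best_sse}')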
It is extremely important to make sure that a specific voxel has the same 3D coordinate across all time points to be able to model neural processes. This is, of course, made difficult by the fact that participants move during a scanning session and also in between runs. Realignment is the preprocessing step in which a rigid body transformation is applied to each volume to align them to a common space. One typically needs to choose a reference volume, which might be the first, middle, or last volume, or the mean of all volumes.

Let's look at an example of the translation and rotation parameters after running realignment on our first subject.

import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from bids import BIDSLayout, BIDSValidator
import os

data_dir = '../data/localizer'
layout = BIDSLayout(data_dir, derivatives=True)

data = pd.read_csv(layout.get(subject='S01', scope='derivatives', extension='.tsv')[0].path, sep='\t')

f, a = plt.subplots(ncols=2, figsize=(15, 5))
data.loc[:, ['trans_x', 'trans_y', 'trans_z']].plot(ax=a[0])
a[0].set_ylabel('Translation (mm)', fontsize=16)
a[0].set_xlabel('Time (TR)', fontsize=16)
a[0].set_title('Translation', fontsize=18)

data.loc[:, ['rot_x', 'rot_y', 'rot_z']].plot(ax=a[1])
a[1].set_ylabel('Rotation (radian)', fontsize=16)
a[1].set_xlabel('Time (TR)', fontsize=16)
a[1].set_title('Rotation', fontsize=18)

Text(0.5, 1.0, 'Rotation')

Don't forget that even though realignment puts each volume into approximately the same position, head motion also distorts the magnetic field and can lead to nonlinear changes in signal intensity that will not be addressed by this procedure. In the resting-state literature, where many analyses are based on functional connectivity, head motion can lead to spurious correlations. Some researchers choose to exclude any subject that moved more than a certain amount. Others choose to remove the impact of these time points by excluding the volumes via scrubbing, or by modeling out each volume with a dummy code in the first-level general linear model.
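A common way to condense these six traces into a single summary of head motion is framewise displacement (FD): the sum of the absolute volume-to-volume changes in the three translations plus the three rotations, with rotations converted from radians to millimeters as arc length on a sphere of radius 50 mm (Power et al., 2012). Here is a minimal sketch, not part of the original notebook, that assumes the fmriprep confound column names used above:

def framewise_displacement(confounds, radius=50):
    '''Compute framewise displacement (in mm) from fmriprep realignment
    parameters. Rotations (in radians) are converted to mm as arc length
    on a sphere of the given radius.'''
    motion = confounds[['trans_x', 'trans_y', 'trans_z',
                        'rot_x', 'rot_y', 'rot_z']].copy()
    motion.loc[:, ['rot_x', 'rot_y', 'rot_z']] *= radius
    return motion.diff().abs().sum(axis=1, skipna=False)

fd = framewise_displacement(data)
fd.plot(xlabel='Time (TR)', ylabel='FD (mm)')

Thresholds for scrubbing or subject exclusion are typically applied to summaries like this; recent versions of fmriprep also write a framewise_displacement column to the confounds file, which you can compare against.

## Spatial Normalization

There are several other preprocessing steps that involve image registration. The main one is called spatial normalization, in which each subject's brain data is warped into a common stereotactic space. Talairach is an older space that has been subsumed by various standards developed by the Montreal Neurological Institute. There are a variety of algorithms to warp subject data into stereotactic space. Linear 12-parameter affine transformations have increasingly been replaced by more complicated nonlinear normalizations that have hundreds to thousands of parameters. One nonlinear algorithm that has performed very well across comparison studies is diffeomorphic registration, which can also be inverted so that subject space can be transformed into stereotactic space and back to subject space. This is the core of the ANTs algorithm that is implemented in fmriprep. See this overview for more details.

Let's watch another short video by Martin Lindquist and Tor Wager to learn more about the core preprocessing steps.

YouTubeVideo('qamRGWSC-6g')

There are many different steps involved in the spatial normalization process, and these details vary widely across imaging software packages. We will briefly discuss some of the steps involved in the anatomical preprocessing pipeline implemented by fMRIprep and will show example figures from the output generated by the pipeline.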
First, brains are extracted from the skull and surrounding dura mater. You can check how well the algorithm performed by examining the red outline.

Next, the anatomical images are segmented into different tissue types. These tissue maps are used for various types of analyses, including providing a grey matter mask to reduce the computational time of estimating statistics. In addition, they provide masks to aid in extracting average activity in CSF or white matter, which might be used as covariates in the statistical analyses to account for physiological noise.

### Spatial normalization of the anatomical T1w reference

fmriprep uses ANTs to perform nonlinear spatial normalization. It is easy to check how well the algorithm performed by viewing the results of aligning the T1w reference to the stereotactic reference space. Hover on the panels with the mouse pointer to transition between both spaces. We are using the MNI152NLin2009cAsym template.

### Alignment of functional and anatomical MRI data

Next, we can evaluate the quality of alignment of the functional data to the anatomical T1 image. FSL flirt was used to generate transformations from EPI space to T1w space; the white matter mask calculated with FSL fast (brain tissue segmentation) was used for boundary-based registration (BBR). Note that nearest-neighbor interpolation is used in the reportlets in order to highlight potential spin-history and other artifacts, whereas final images are resampled using Lanczos interpolation. Notice these images are much blurrier and show some distortion compared to the T1s.

## Spatial Smoothing

The last step we will cover in the preprocessing pipeline is spatial smoothing. This step involves applying a filter to the image that removes high-frequency spatial information. It is identical to convolving a kernel with a 1-D signal, as we covered in the Signal Processing Basics lab, except that here the kernel is a 3-D Gaussian. The amount of smoothing is determined by specifying the width of the distribution (i.e., the standard deviation) using the Full Width at Half Maximum (FWHM) parameter.

Why would we want to decrease our image resolution with spatial smoothing after we tried very hard to increase our resolution at the data acquisition stage? Because this step may increase the signal-to-noise ratio by reducing the impact of partial volume effects, residual anatomical differences following normalization, and aliasing introduced by applying spatial transformations.
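In code, smoothing amounts to convolving the data with such a kernel. Below is a minimal sketch, not part of the original notebook, using scipy's gaussian_filter; the only subtlety is converting FWHM (in mm) to the standard deviation sigma that the filter expects (FWHM = 2 * sqrt(2 * ln 2) * sigma, roughly 2.355 * sigma) and then converting sigma to voxel units:

import numpy as np
import nibabel as nib
from scipy.ndimage import gaussian_filter

def smooth_image(img, fwhm=6):
    '''Apply an isotropic 3D Gaussian smoothing kernel to a 3D nibabel
    image; fwhm is specified in mm.'''
    sigma_mm = fwhm / (2 * np.sqrt(2 * np.log(2)))   # FWHM -> sigma in mm
    voxel_sizes = img.header.get_zooms()[:3]         # mm per voxel on each axis
    sigma_vox = [sigma_mm / v for v in voxel_sizes]  # sigma in voxel units
    smoothed = gaussian_filter(img.get_fdata(), sigma=sigma_vox)
    return nib.Nifti1Image(smoothed, img.affine, img.header)

Packages such as nilearn and nltools wrap this same idea in better-tested routines. Here is what a 3D Gaussian kernel looks like.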
def plot_gaussian(sigma=2, kind='surface', cmap='viridis', linewidth=1, **kwargs):
    '''Generates a 3D matplotlib plot of a Gaussian distribution'''

    mean = 0
    domain = 10
    x = np.arange(-domain + mean, domain + mean, sigma/10)
    y = np.arange(-domain + mean, domain + mean, sigma/10)
    x, y = np.meshgrid(x, y)  # 2D grid of (x, y) coordinates
    r = (x ** 2 + y ** 2) / (2 * sigma ** 2)
    z = np.exp(-r) / (2 * np.pi * sigma ** 2)  # 2D Gaussian density

    fig = plt.figure(figsize=(12, 6))
    ax = plt.axes(projection='3d')
    if kind == 'wire':
        ax.plot_wireframe(x, y, z, cmap=cmap, linewidth=linewidth, **kwargs)
    elif kind == 'surface':
        ax.plot_surface(x, y, z, cmap=cmap, linewidth=linewidth, **kwargs)
    else:
        raise NotImplementedError(f"Unknown plot kind: {kind}")
    ax.set_xlabel('x', fontsize=16)
    ax.set_ylabel('y', fontsize=16)
    ax.set_zlabel('z', fontsize=16)
    plt.axis('off')

plot_gaussian(kind='surface', linewidth=1)

## fmriprep

Throughout this lab and course, you have frequently heard about fmriprep, a functional magnetic resonance imaging (fMRI) data preprocessing pipeline developed by a team at the Center for Reproducible Research led by Russ Poldrack and Chris Gorgolewski. Fmriprep was designed to provide an easily accessible, state-of-the-art interface that is robust to variations in scan acquisition protocols, requires minimal user input, and provides easily interpretable and comprehensive error and output reporting. Fmriprep performs basic processing steps (coregistration, normalization, unwarping, noise component extraction, segmentation, skullstripping, etc.), providing outputs that are ready for data analysis.

fmriprep was built on top of nipype, which is a tool to build preprocessing pipelines in Python using graphs. This provides a completely flexible way to create custom pipelines using any type of software while also facilitating easy parallelization of steps across the pipeline on high performance computing platforms. Nipype is completely flexible, but it has a fairly steep learning curve and is best for researchers who have strong opinions about how they want to preprocess their data, or who are working with nonstandard data that might require adjusting the preprocessing steps or parameters. In practice, most researchers typically use similar preprocessing steps and do not need to tweak the pipelines very often. In addition, many researchers do not fully understand how each preprocessing step will impact their results and would prefer if somebody else picked suitable defaults based on current best practices in the literature.

The fmriprep pipeline uses a combination of tools from well-known software packages, including FSL, ANTs, FreeSurfer, and AFNI. This pipeline was designed to provide the best software implementation for each stage of preprocessing, and it is quickly being updated as methods evolve and bugs are discovered by a growing user base.

This tool allows you to easily do the following:

- Take fMRI data from raw to fully preprocessed form.
- Implement tools from different software packages.
- Achieve optimal data processing quality by using the best tools available.
- Generate preprocessing quality reports, with which the user can easily identify outliers.
- Receive verbose output concerning the stage of preprocessing for each subject, including meaningful errors.
- Automate and parallelize processing steps, which provides a significant speed-up from typical linear, manual processing.
- More information and documentation can be found at https://fmriprep.readthedocs.io/

### Running fmriprep

Running fmriprep is a (mostly) trivial process of running a single line in the command line, specifying a few choices and locations for the output data. One of the annoying things about older neuroimaging software developed by academics is that the packages were built in many different development environments and for different operating systems (e.g., Unix, Windows, Mac). It can be a nightmare getting some of these packages to install on more modern computing systems. As fmriprep uses many different packages, its developers have made it much easier to circumvent the time-consuming process of installing them all by releasing a docker container that contains everything you need to run the pipeline.

Unfortunately, our AWS cloud instances running our jupyter server are not equipped with enough computational resources to run fmriprep at this time. However, if you're interested in running this on your local computer, here is the code you could use to run it in a jupyter notebook, or even better in the command line on a high performance computing environment.

import os

base_dir = '/Users/lukechang/Dropbox/Dartbrains/Data'
data_path = os.path.join(base_dir, 'localizer')
output_path = os.path.join(base_dir, 'preproc')
work_path = os.path.join(base_dir, 'work')

subs = [f'S{x:0>2d}' for x in range(1, 11)]  # the first 10 subjects: S01 through S10
for sub in subs:
    !fmriprep-docker {data_path} {output_path} participant --participant_label sub-{sub} --write-graph --fs-no-reconall --notrack --fs-license-file ~/Dropbox/Dartbrains/License/license.txt --work-dir {work_path}

### Quick primer on High Performance Computing

We could run fmriprep on our own computer, but this could take a long time if we have a lot of participants. Because we have a limited amount of computational resources on our laptops (e.g., CPUs and memory), we would have to run each participant sequentially. For example, if we had 50 participants, it would take 50 times longer to run all participants than a single one. Imagine if you had 50 computers and ran each participant separately, at the same time, in parallel across all of the computers. This would allow us to run 50 participants in the same amount of time as a single participant. This is the basic idea behind high performance computing, which relies on a cluster of many computers that have been installed in racks. Below is a picture of what Dartmouth's Discovery cluster looks like:

A cluster is simply a collection of nodes. A node can be thought of as an individual computer. Each node contains processors, which encompass multiple cores. Discovery contains 3000+ cores, which is certainly a lot more than your laptop! In order to submit a job, you can create a Portable Batch System (PBS) script that sets up the parameters (e.g., how much time you want your script to run, which directory to run in, etc.) and submits your job to a queue.

NOTE: For this class, we will only be using the jupyterhub server, but if you end up working in a lab in the future, you will need to request access to the discovery system using this link.
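To make this concrete, one common pattern is to generate and submit one scheduler job per participant from Python. The sketch below is hypothetical: the PBS directives (node count, walltime) and the qsub submission command are placeholders that depend entirely on how your cluster is configured, and it reuses the data_path, output_path, and work_path variables defined above.

import subprocess

pbs_template = """#!/bin/bash -l
#PBS -N fmriprep_sub-{sub}
#PBS -l nodes=1:ppn=8
#PBS -l walltime=24:00:00
cd $PBS_O_WORKDIR
fmriprep-docker {data} {out} participant --participant_label sub-{sub} --fs-no-reconall --work-dir {work}
"""

for sub in [f'S{x:0>2d}' for x in range(1, 11)]:
    fname = f'fmriprep_sub-{sub}.pbs'
    with open(fname, 'w') as f:
        # Write one job script per participant...
        f.write(pbs_template.format(sub=sub, data=data_path, out=output_path, work=work_path))
    subprocess.run(['qsub', fname])  # ...and submit it to the queue

Because each job runs on its own node, all participants are preprocessed in parallel rather than one after another.

### fmriprep output

You can see a summary of the operations fmriprep performed by examining the .html files in the derivatives/fmriprep folder within the localizer data directory. We will load the first subject's output file. Spend some time looking at the outputs, and feel free to examine other subjects as well.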
Currently, the first 10 subjects should be available on the jupyterhub.

from IPython.display import HTML
HTML('sub-S01.html')

../data/localizer/derivatives/fmriprep/sub-01.html

## Limitations of fmriprep

In general, we recommend using this pipeline if you want a sensible default. Considerable thought has gone into selecting reasonable default parameters and preprocessing steps based on best practices in the field (as determined by the developers). This is not necessarily the case for the default settings in the more conventional software packages (e.g., spm, fsl, afni, etc.).

However, there is an important tradeoff in using this tool. On the one hand, it's nice in that it is incredibly straightforward to use (one line of code!), has excellent documentation, and is actively being developed to fix bugs and improve the overall functionality. There is also a growing user base to ask questions. Neurostars is an excellent forum to post questions and learn from others. On the other hand, fmriprep in its current state is unfortunately not easily customizable. If you disagree with the developers about the order or the specific preprocessing steps, it is very difficult to modify. Future versions will hopefully be more modular and make it easier to build custom pipelines. If you need this type of customizability, we strongly recommend using nipype over fmriprep.

In practice, it's always a little bit finicky to get everything set up on a particular system. Sometimes you might run into issues with a specific missing file, like the freesurfer license, even if you're not using it. You might also run into issues with the format of the data that conflict with the bids-validator. In our experience, there are always some frustrations getting this to work, but it's very nice once it's done.

## Exercises

### Exercise 1. Inspect HTML output of other participants.

For this exercise, you will need to navigate to the derivatives folder containing the fmriprep preprocessed data ../data/data/localizer/derivatives/fmriprep and inspect the html output of other subjects (i.e., not 'S01'). Did the preprocessing steps work? Are there any issues with the data that we should be concerned about?
The PTS Developer Guide

This page offers some guidance for users and developers who wish to adjust or extend PTS capabilities. It is organized in the following topics:

# Packages, modules and directories

The PTS source code is contained in an online repository and can be obtained as described in The PTS Installation Guide. Your local copy of this repository is usually placed inside a directory called PTS in your home directory. The resulting directory structure may look as follows:

~/PTS
    pts
        admin
            do
        band
            data
            do
        do
        docs
        simulation
        ...
        visual
            do
    run

The optional run directory may contain input/output files involved in actually running PTS. This information obviously does not belong in the source code repository, which is why the run directory is not inside the pts directory. The contents of the pts directory is an identical clone of the online PTS repository.

Immediately inside the pts directory resides a shell script for building the documentation (see Building the documentation) and a number of subdirectories holding the source and documentation files.

A PTS package is represented as a top-level subdirectory of the PTS repository. Each Python source file within a package is called a module. PTS has no nested packages. The following table lists some important packages in PTS with an indication of their functionality.

| Package | Description |
| --- | --- |
| admin | Administrative functions, such as listing PTS package dependencies and creating archives for backup purposes |
| band | Representing broadband filters, including transmission curve data for a set of standard bands |
| simulation | Interfacing with the SKIRT executable, the configuration file, and SKIRT output files (with units) |
| storedtable | Converting third-party data to SKIRT stored table format and otherwise accessing files in this format |
| test | Performing and reporting on SKIRT functional tests |
| utils | Basic utilities for use by other sub-packages |
| visual | Visualizing SKIRT results including image frames, SEDs, density cuts, temperature cuts, polarization maps, and more |

In addition to the package subdirectories, the following subdirectories may occur in the repository directory hierarchy as needed:

| Subdirectory | Where | Presence | Description |
| --- | --- | --- | --- |
| docs | Top-level | Mandatory | Configuration files for building HTML pages from the comment blocks embedded in the PTS source code |
| do | Top-level | Mandatory | Implementation of command line facilities, i.e. locating and executing scripts in do subdirectories |
| do | Inside package | Optional | Command scripts that can be executed directly from the PTS command line |
| data | Inside package | Optional | Data resources required by the module containing this directory |

# Coding style

## Basic conventions

PTS is written in Python 3.7 and, in general, uses the coding style, language capabilities and standard library functions corresponding to that language version. As an important exception to this rule, the comment blocks preceding classes and functions in the code use the ## style, as opposed to the more pythonic doc-string style. The main reason is that Doxygen does not recognize the special commands for LaTeX contents, extra formatting, and hyperlinks in doc strings.
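For instance, a documented function in this style might look as follows (a made-up example for illustration, not actual PTS code):

## This function returns the arithmetic mean of the specified sequence of values.
#  Formulas can be embedded using Doxygen's LaTeX markup, for example
#  \f$ \bar{x} = \frac{1}{N} \sum_i x_i \f$.
def arithmeticMean(values):
    return sum(values) / len(values)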
The following table summarizes the naming conventions used in PTS for Python language entities:

| Entity | Convention | Example |
| --- | --- | --- |
| Package (directory) | All lowercase letters, no separators | storedtable |
| Module (file) | All lowercase letters, no separators | skirtsimulation |
| Class | Camel case starting with upper case letter | SkiFile |
| Function | Camel case starting with lower case letter | performSimulation() |
| -> getter | name of property | backgroundColor() |
| -> setter | set + capitalized name of property | setBackgroundColor() |
| Variable | Camel case starting with lower case letter; or all lowercase letters, no separators | nx, fluxDensity |
| Data member | Leading underscore plus variable name (all data members are private) | _nx, _fluxDensity |

## Organizing package functionality

Each PTS package (a directory, see Packages, modules and directories) exposes all public functions and classes (i.e. those intended for use outside of the package) at the package level. The functionality is implemented in various modules (python source files) residing inside the package. The initialization file for each package places the public names into the package namespace using explicit imports.

## Importing packages

Default style for importing external packages (including standard-library packages):

import some.package    # each reference must include full package name

External packages imported with a local name:

import astropy.constants as const
import astropy.io.fits as fits
import astropy.units as u
import lxml.etree as etree
import matplotlib.pyplot as plt
import numpy as np

Importing other PTS packages (or same package from within do subdirectory):

import pts.admin as adm
import pts.band as bnd
import pts.do as do
import pts.simulation as sm
import pts.storedtable as stab
import pts.utils as ut
import pts.visual as vis

Importing symbols from within the same package, including initialization file:

from .module import name    # default style is to use explicit import
from .module import *       # exceptional style, for example in conversionspec.py

## External dependencies

Any PTS code may depend on any of the standard Python 3.7 packages without further mention. In addition, some of the PTS facilities may require non-standard Python packages to be installed. Developers are urged to avoid additional dependencies where possible, and to use only packages that are readily available from the common distribution channels. See Required Python packages for a list of the required non-standard packages at the time of writing.

To obtain a list of the current package dependencies, make sure that PTS is properly installed and enter the following terminal command:

pts list_dependencies

# Building the documentation

The PTS reference documentation is generated from the Python source files (and from the extra files in the docs directory) by Doxygen. For information on how to install this free application, refer to the SKIRT installation guide (section "Installing the documentation generator" on page "Develop using Qt Creator"). When you add or adjust code, it is important to provide proper documentation in the source file, in Doxygen format. To verify that everything looks as intended, especially when including formulas in mathematical notation, you should build the HTML documentation and open the resulting page(s) in a web browser.

The git directory contains a shell script for building the documentation. The script is designed for use on Mac OS X and will need to be adjusted for use on other systems.
For example, the absolute path to the Doxygen executable will need to be updated, and the html.doxygen parameter file may need some tweaking as well.

Before invoking the script for the first time, you may need to make it executable as follows:

cd ~/PTS/git
chmod +rx makeHTML.sh

To build the HTML reference documentation, enter:

cd ~/PTS/git
./makeHTML.sh

The resulting HTML files are placed in the html directory next to (i.e. at the same level as) the git directory. As usual, the file index.html provides the starting point for browsing.

The source text for the PTS installation, user and developer guides is maintained in a different repository. For more information about how to edit and publish this documentation, refer to the SKIRT developer guide (section "Additional documentation" on page "Building the documentation").

# Contributing to PTS

We invite your contributions to PTS and to the SKIRT project in general. A contribution can take many forms, from asking a question to implementing a new feature. More information on how to contribute can be found here:
## [0910.4677] Some new views on the low-energy side of gravity

Authors: Federico Piazza

Date: 26 Oct 2009

Abstract: Common wisdom associates all the unraveled and theoretically challenging aspects of gravity with its UV-completion. However, there appear to be few difficulties afflicting the effective framework for gravity already at low energy that are likely to be detached from the high-energy structure. Those include the black hole information paradox, the cosmological constant problem and the rather involved and fine tuned model building required to explain our cosmological observations. I review some directions of on-going research that aim to generalize and extend the low-energy framework for gravity.

#### Oct 29, 2009

0910.4677 (/preprints) 2009-10-29, 09:33

## [0910.4593] The Close-limit Approximation for Black Hole Binaries with Post-Newtonian Initial Conditions

Authors: Alexandre Le Tiec, Luc Blanchet

Date: 23 Oct 2009

Abstract: The ringdown phase of a black hole formed from the merger of two orbiting black holes is described by means of the close-limit (CL) approximation starting from second-post-Newtonian (2PN) initial conditions. The 2PN metric of point-particle binaries is formally expanded in CL form and identified with that of a perturbed Schwarzschild black hole. The multipolar coefficients describing the even-parity (or polar) and odd-parity (axial) components of the linear perturbation consistently satisfy the 2PN-accurate perturbative field equations. We use these coefficients to build initial conditions for the Regge-Wheeler and Zerilli wave equations, which we then evolve numerically. The ringdown waveform is obtained in two cases: head-on collision with zero angular momentum, composed only of even modes, and circular orbits, for which both even and odd modes contribute. In a separate work, this formalism is applied to the study of the gravitational recoil produced during the ringdown phase of coalescing binary black holes.

#### Oct 29, 2009

0910.4593 (/preprints) 2009-10-29, 09:33

## [0910.4594] Gravitational-Wave Recoil from the Ringdown Phase of Coalescing Black Hole Binaries

Authors: Alexandre Le Tiec, Luc Blanchet, Clifford M. Will

Date: 23 Oct 2009

Abstract: The gravitational recoil or "kick" of a black hole formed from the merger of two orbiting black holes, and caused by the anisotropic emission of gravitational radiation, is an astrophysically important phenomenon. We combine (i) an earlier calculation, using post-Newtonian theory, of the kick velocity accumulated up to the merger of two non-spinning black holes, with (ii) a "close-limit approximation" calculation of the radiation emitted during the ringdown phase, based on a solution of the Regge-Wheeler and Zerilli equations using initial data accurate to second post-Newtonian order. We prove that ringdown radiation produces a significant "anti-kick". Adding the contributions due to inspiral, merger and ringdown phases, our results for the net kick velocity agree with those from numerical relativity to 10-15 percent over a wide range of mass ratios, with a maximum velocity of 180 km/s at a mass ratio of 0.38.
#### Oct 29, 2009

0910.4594 (/preprints) 2009-10-29, 09:33

## [0910.3954] Stellar-mass black holes in star clusters: implications for gravitational wave radiation

Authors: Sambaran Banerjee, Holger Baumgardt, Pavel Kroupa

Date: 20 Oct 2009

Abstract: We study the dynamics of stellar-mass black holes (BH) in star clusters with particular attention to the formation of BH-BH binaries, which are interesting as sources of gravitational waves (GW). We examine the properties of these BH-BH binaries through direct N-body simulations of star clusters using the GPU-enabled NBODY6 code. We perform simulations of N <= 10^5 Plummer clusters of low-mass stars with an initial population of BHs. Additionally, we do several calculations of star clusters confined within a reflective boundary mimicking only the core of a massive cluster. We find that stellar-mass BHs with masses ~ 10 solar mass segregate rapidly into the cluster core and form a sub-cluster of BHs within typically 0.2 - 0.5 pc radius, which is dense enough to form BH-BH binaries through 3-body encounters. While most BH binaries are ejected from the cluster by recoils received during super-elastic encounters with the single BHs, few of them harden sufficiently so that they can merge via GW emission within the cluster. We find that for clusters with $N \ga 5\times 10^4$, typically 1 - 2 BH-BH mergers occur within them during the first ~ 4 Gyr of evolution. Also for each of these clusters, there are a few escaping BH binaries that can merge within a Hubble time, most of the merger times being within a few Gyr. These results indicate that intermediate-age massive clusters constitute the most important class of candidates for producing dynamical BH-BH mergers. Old globular clusters cannot contribute significantly to the present-day BH-BH merger rate since most of the mergers from them would have occurred earlier. In contrast, young massive clusters are too young to produce significant number of BH-BH mergers. Our results imply significant BH-BH merger detection rates for the proposed "Advanced LIGO" GW detector. (Abridged)

#### Oct 23, 2009

0910.3954 (/preprints) 2009-10-23, 09:08

## [0910.3197] Statistical studies of Spinning Black-Hole Binaries

Authors: Carlos O. Lousto, Hiroyuki Nakano, Yosef Zlochower, Manuela Campanelli

Date: 16 Oct 2009

Abstract: We study the statistical distributions of the spins of generic black-hole binaries during the inspiral and merger, as well as the distributions of the remnant mass, spin, and recoil velocity. For the inspiral regime, we start with a random uniform distribution of spin directions S1 and S2 and magnitudes S1=S2=0.97 for different mass ratios. Starting from a fiducial initial separation of ri=50m, we perform 3.5PN evolutions down to rf=5m. At this final fiducial separation, we compute the angular distribution of the spins with respect to the final orbital angular momentum, L. We perform 16^4 simulations for six mass ratios between q=1 and q=1/16 and compute the distribution of the angles between L and Delta and L and S, directly related to recoil velocities and total angular momentum. We find a small but statistically significant bias of the distribution towards counter-alignment of both scalar products. To study the merger of black-hole binaries, we turn to full numerical techniques. We introduce empirical formulae to describe the final remnant black hole mass, spin, and recoil velocity for merging black-hole binaries with arbitrary mass ratios and spins.
We then evaluate those formulae for randomly chosen directions of the individual spins and magnitudes as well as the binary's mass ratio. We found that the magnitude of the recoil velocity distribution decays as P(v) \propto \exp(-v/2500 km/s), <v>=630 km/s, and sqrt{<v^2> - <v>^2}=534 km/s, leading to a 23% probability of recoils larger than 1000 km/s, and a highly peaked angular distribution along the final orbital axis. The final black-hole spin magnitude shows a universal distribution highly peaked at Sf/mf^2=0.73 and a 25 degrees misalignment with respect to the final orbital angular momentum.

#### Oct 23, 2009

0910.3197 (/preprints) 2009-10-23, 09:08

## [0910.3206] On strong mass segregation around a massive black hole: Implications for lower-frequency gravitational-wave astrophysics

Authors: Miguel Preto, Pau Amaro-Seoane

Date: 16 Oct 2009

Abstract: We present, for the first time, a clear $N$-body realization of the {\it strong mass segregation} solution for the stellar distribution around a massive black hole. We compare our $N$-body results with those obtained by solving the orbit-averaged Fokker-Planck (FP) equation in energy space. The $N$-body segregation is slightly stronger than in the FP solution, but both confirm the {\it robustness} of the regime of strong segregation when the number fraction of heavy stars is a (realistically) small fraction of the total population. In view of recent observations revealing a dearth of giant stars in the sub-parsec region of the Milky Way, we show that the time scales associated with cusp re-growth are not longer than $(0.1-0.25) \times T_{rlx}(r_h)$. These time scales are shorter than a Hubble time for black hole masses $\mbul \lesssim 4 \times 10^6 M_\odot$ and we conclude that quasi-steady, mass segregated, stellar cusps may be common around MBHs in this mass range. Since EMRI rates scale as $\mbul^{-\alpha}$, with $\alpha \in [1/4,1]$, a good fraction of these events should originate from strongly segregated stellar cusps.

#### Oct 23, 2009

0910.3206 (/preprints) 2009-10-23, 09:07

## [0910.4302] Detecting gravitational waves from inspiraling binaries with a network of geographically separated detectors: coherent versus coincident strategies

Date: 22 Oct 2009

Abstract: We compare two strategies of multi-detector detection of compact binary inspiral signals, namely, the coincidence and the coherent, for the realistic case of geographically separated detectors. We compare the performances of the methods by plotting the receiver operating characteristics (ROC) for the strategies. Several results are derived analytically in order to gain insight. Simulations are performed in order to plot the ROC curves. A single astrophysical source as well as a distribution of sources is considered. We find that the coherent strategy is superior to the two coincident strategies that we consider. Remarkably, the detection probability of the coherent strategy is 50% better than the naive coincident strategy. On the other hand, the difference in performance between the coherent strategy and the enhanced coincident strategy is not very large. Even in this situation, it is not difficult to perform the real data analysis with the coherent strategy. The bottom line is that the coherent strategy is a good detection strategy.

#### Oct 23, 2009

0910.4302 (/preprints) 2009-10-23, 09:07

## [0910.4152] Interruption of Tidal Disruption Flares By Supermassive Black Hole Binaries

Authors: F. K. Liu (1 and 2), S. Li (1), Xian Chen (1 and 3) ((1) Peking University, (2) KIAA at Peking University, (3) University of California at Santa Cruz)
Date: 21 Oct 2009

Abstract: Supermassive black hole binaries (SMBHBs) are products of galaxy mergers, and are important in testing LambdaCDM cosmology and locating gravitational-wave-radiation sources. A unique electromagnetic signature of SMBHBs in galactic nuclei is essential in identifying the binaries in observations from the IR band through optical to X-ray. Recently, the flares in optical, UV, and X-ray caused by supermassive black holes (SMBHs) tidally disrupting nearby stars have been successfully used to observationally probe single SMBHs in normal galaxies. In this letter, we investigate the accretion of the gaseous debris of a tidally disrupted star by a SMBHB. Using both stability analysis of three-body systems and numerical scattering experiments, we show that the accretion of stellar debris gas, which initially decays with time $\propto t^{-5/3}$, would stop at a time $T_{tr} \simeq \eta T_{b}$. Here $\eta \sim 0.25$ and $T_{b}$ is the orbital period of the SMBHB. After a period of interruption, the accretion recurs discretely at time $T_{r} \simeq \xi T_b$, where $\xi \sim 1$. Both $\eta$ and $\xi$ sensitively depend on the orbital parameters of the tidally disrupted star at the tidal radius and the orbit eccentricity of SMBHB. The interrupted accretion of the stellar debris gas gives rise to an interrupted tidal flare, which could be used to identify SMBHBs in non-active galaxies in the upcoming transient surveys.

#### Oct 23, 2009

0910.4152 (/preprints) 2009-10-23, 09:07

## [0910.4372] Alternative derivation of the response of interferometric gravitational wave detectors

Authors: Neil J. Cornish

Date: 22 Oct 2009

Abstract: It has recently been pointed out by Finn that the long-standing derivation of the response of an interferometric gravitational wave detector contains several errors. Here I point out that a contemporaneous derivation of the gravitational wave response for spacecraft doppler tracking and pulsar timing avoids these pitfalls, and when adapted to describe interferometers, recovers a simplified version of Finn's derivation. This simplified derivation may be useful for pedagogical purposes.

#### Oct 23, 2009

0910.4372 (/preprints) 2009-10-23, 09:07

## [0906.2901] LISA technology and instrumentation

Authors: O. Jennrich

Date: 16 Jun 2009

Abstract: This article reviews the present status of the technology and instrumentation for the joint ESA/NASA gravitational wave detector LISA. It briefly describes the measurement principle and the mission architecture including the resulting sensitivity before focussing on a description of the main payload items, such as the interferometric measurement system, comprising the optical system with the optical bench and the telescope, the laser system, and the phase measurement system; and the disturbance reduction system with the inertial sensor, the charge control system, and the micropropulsion system. The article touches upon the requirements for the different subsystems that need to be fulfilled to obtain the overall sensitivity.

#### Oct 22, 2009

0906.2901 (/preprints) 2009-10-22, 19:35

## [0910.2857] Post-Newtonian methods: Analytic results on the binary problem

Authors: Gerhard Schaefer

Date: 15 Oct 2009

Abstract: A detailed account is given on approximation schemes to the Einstein theory of general relativity where the iteration starts from the Newton theory of gravity.
Two different coordinate conditions are used to represent the Einstein field equations, the generalized isotropic ones of the canonical formalism of Arnowitt, Deser, and Misner and the harmonic ones of the Lorentz-covariant Fock-de Donder approach. Conserved quantities of isolated systems are identified and the Poincaré algebra is introduced. Post-Newtonian expansions are performed in the near and far (radiation) zones. The natural fitting of multipole expansions to post-Newtonian schemes is emphasized. The treated matter models are ideal fluids, pure point masses, and point masses with spin and mass-quadrupole moments modelling rotating black holes. Various Hamiltonians of spinning binaries are presented in explicit forms to higher post-Newtonian orders. The delicate use of black holes in post-Newtonian expansion calculations and of the Dirac delta function in general relativity find discussions.

#### Oct 20, 2009

0910.2857 (/preprints) 2009-10-20, 12:23

## [0910.2758] Emission coordinates for the navigation in space

Authors: A. Tartaglia

Date: 15 Oct 2009

Abstract: A general approach to the problem of positioning by means of pulsars or other pulsating sources located at infinity is described. The counting of the pulses for a set of different sources whose positions in the sky and periods are assumed to be known, is used to provide null emission, or light, coordinates for the receiver. The measurement of the proper time intervals between successive arrivals of the signals from the various sources is used to give the final localization of the receiver, within an accuracy controlled by the precision of the onboard clock. The deviation from the flat case is discussed, separately considering the different possible causes: local gravitational potential, finiteness of the distance of the source, proper motion of the source, period decay, proper acceleration due to non-gravitational forces. Calculations turn out to be simple and the result is highly positive. The method can also be applied to a constellation of satellites orbiting the Earth.

#### Oct 20, 2009

0910.2758 (/preprints) 2009-10-20, 12:23

## [0910.1780] Gravitational wave background from neutron star phase transition for a new class of equation of state

Authors: J. C. N. de Araujo, G. F. Marranghello

Date: 9 Oct 2009

Abstract: We study the generation of a stochastic gravitational wave (GW) background produced by a population of neutron stars (NSs) which go over a hadron-quark phase transition in its inner shells. We obtain, for example, that the NS phase transition, in cold dark matter scenarios, could generate a stochastic GW background with a maximum amplitude of $h_{\rm BG} \sim 10^{-24}$, in the frequency band $\simeq 20-2000 {\rm Hz}$ for stars forming at redshifts of up to $z\simeq 20.$ We study the possibility of detection of this isotropic GW background by correlating signals of a pair of 'advanced' LIGO observatories.

#### Oct 13, 2009

0910.1780 (/preprints) 2009-10-13, 08:13

## [0910.1929] The twin paradox and Mach's principle

Authors: Herbert I.M. Lichtenegger, Lorenzo Iorio

Date: 10 Oct 2009

Abstract: The problem of absolute motion in the context of the twin paradox is discussed. It is shown that the various versions of the clock paradox feature some aspects which Mach might have appreciated. However, the ultimate cause of the behavior of the clocks must be attributed to the autonomous status of spacetime, thereby proving the relational program advocated by Mach as impracticable.
#### Oct 13, 2009

0910.1929 (/preprints) 2009-10-13, 08:12

## [0910.1587] Triplets of supermassive black holes: Astrophysics, Gravitational Waves and Detection

Authors: Pau Amaro-Seoane, Alberto Sesana, Loren Hoffman, Matthew Benacquista, Christoph Eichhorn, Junichiro Makino, Rainer Spurzem

Date: 8 Oct 2009

Abstract: Supermassive black holes (SMBHs) found in the centers of many galaxies have been recognized to play a fundamental active role in the cosmological structure formation process. In hierarchical formation scenarios, SMBHs are expected to form binaries following the merger of their host galaxies. If these binaries do not coalesce before the merger with a third galaxy, the formation of a black hole triple system is possible. Numerical simulations of the dynamics of triples within galaxy cores exhibit phases of very high eccentricity (as high as $e \sim 0.99$). During these phases, intense bursts of gravitational radiation can be emitted at orbital periapsis. This produces a gravitational wave signal at frequencies substantially higher than the orbital frequency. The likelihood of detection of these bursts with pulsar timing and the Laser Interferometer Space Antenna ({\it LISA}) is estimated using several population models of SMBHs with masses $\gtrsim 10^7 {\rm M_\odot}$. Assuming a fraction of binaries $\ge 0.1$ in triple system, we find that few to few dozens of these bursts will produce residuals $>1$ ns, within the sensitivity range of forthcoming pulsar timing arrays (PTAs). However, most of such bursts will be washed out in the underlying confusion noise produced by all the other 'standard' SMBH binaries emitting in the same frequency window. A detailed data analysis study would be required to assess resolvability of such sources. Implementing a basic resolvability criterion, we find that the chance of catching a resolvable burst at a one nanosecond precision level is 2-50%, depending on the adopted SMBH evolution model. On the other hand, the probability of detecting bursts produced by massive binaries (masses $\gtrsim 10^7\msun$) with {\it LISA} is negligible.

#### Oct 13, 2009

0910.1587 (/preprints) 2009-10-13, 08:12

## [0910.1008] Canonical formulation of gravitating spinning objects at 3.5PN

Authors: Jan Steinhoff, Han Wang

Date: 6 Oct 2009

Abstract: The third-and-a-half post-Newtonian (PN) level is tackled by extending the canonical formalism of Arnowitt, Deser, and Misner to spinning objects. This extension is constructed order by order in the PN setting by utilizing the global Poincaré invariance as the important consistency condition. The formalism is valid to linear order in the single spin variables. Agreement with a recent action approach is found. A general formula for the interaction Hamiltonian between matter and transverse-traceless part of the metric at 3.5PN is derived. The wave equation resulting from this Hamiltonian is considered in the case of the constructed formalism for spinning objects. Agreement with the Einstein equations is found in this case. The energy flux at the spin-orbit level is computed.

#### Oct 07, 2009

0910.1008 (/preprints) 2009-10-07, 10:01

## [0910.0254] Detection of IMBHs with ground-based gravitational wave observatories: A biography of a binary of black holes, from birth to death

Authors: Pau Amaro-Seoane, Lucia Santamaria

Date: 1 Oct 2009

Abstract: Even though the existence of intermediate-mass black holes has not yet been corroborated observationally, these objects are of high interest for astrophysics.
Our understanding of formation and evolution of supermassive black holes (SMBHs), as well as galaxy evolution modeling and cosmography would dramatically change if an IMBH was observed. The prospect of detection and, possibly, observation and characterization of an IMBH has good chances in lower-frequency gravitational-wave (GW) astrophysics with ground-based detectors such as LIGO, Virgo and the future Einstein Telescope (ET). We present an analysis of the signal of a system of a binary of IMBHs based on a waveform model obtained with numerical relativity simulations coupled with post-Newtonian calculations at the highest available order so as to extend the waveform to lower frequencies. We find that initial LIGO and Virgo are in the position of detecting IMBHs with a signal-to-noise ratio (SNR) of $\sim 10$ for systems with total mass between 100 and $500 M_{\odot}$ situated at a distance of 100 Mpc. Nevertheless, the event rate is too low and the possibility that these signals are mistaken with a glitch is, unfortunately, non-negligible. When going to second- and third-generation detectors, such as Advanced LIGO or the proposed ET, the event rate becomes much more promising (tens per year for the first and thousands per year for the latter) and the SNR at 100 Mpc is as high as 100 -- 1000 and 1000 -- $10^{5}$ respectively. The prospects for IMBH detection and characterization with ground-based GW observatories would not only provide us with a robust test of general relativity, but would also corroborate the existence of these systems. Such detections would be a probe to the stellar environments of IMBHs and their formation.

#### Oct 06, 2009

0910.0254 (/preprints) 2009-10-06, 12:48

## [0910.0758] Advanced drag-free concepts for future space-based interferometers: acceleration noise performance

Authors: D. Gerardi, G. Allen, J. W. Conklin, K-X. Sun, D. DeBra, S. Buchman, P. Gath, W. Fichter, R. L. Byer, U. Johann

Date: 5 Oct 2009

Abstract: Future drag-free missions for space-based experiments in gravitational physics require a Gravitational Reference Sensor with extremely demanding sensing and disturbance reduction requirements. A configuration with two cubical sensors is the current baseline for the Laser Interferometer Space Antenna (LISA) and has reached a high level of maturity. Nevertheless, several promising concepts have been proposed with potential applications beyond LISA and are currently investigated at HEPL, Stanford, and EADS Astrium, Germany. The general motivation is to exploit the possibility of achieving improved disturbance reduction, and ultimately understand how low acceleration noise can be pushed with a realistic design for future mission. In this paper, we discuss disturbance reduction requirements for LISA and beyond, describe four different payload concepts, compare expected strain sensitivities in the 'low-frequency' region of the frequency spectrum, dominated by acceleration noise, and ultimately discuss advantages and disadvantages of each of those concepts in achieving disturbance reduction for space-based detectors beyond LISA.

#### Oct 06, 2009

0910.0758 (/preprints) 2009-10-06, 12:47

## [0910.0373] An Overview of LISA Data Analysis Algorithms

Authors: Edward K. Porter

Date: 2 Oct 2009

Abstract: The development of search algorithms for gravitational wave sources in the LISA data stream is currently a very active area of research.
It has become clear that not only does difficulty lie in searching for the individual sources, but in the case of galactic binaries, evaluating the fidelity of resolved sources also turns out to be a major challenge in itself. In this article we review the current status of developed algorithms for galactic binary, non-spinning supermassive black hole binary and extreme mass ratio inspiral sources. While covering the vast majority of algorithms, we will highlight those that represent the state of the art in terms of speed and accuracy.

#### Oct 06, 2009

0910.0373 (/preprints) 2009-10-06, 12:47

## [0910.0380] Data Analysis Challenges for the Einstein Telescope

Authors: Leone Bosi, Edward K. Porter

Date: 2 Oct 2009

Abstract: The Einstein Telescope is a proposed third generation gravitational wave detector that will operate in the region of 1 Hz to a few kHz. As well as the inspiral of compact binaries composed of neutron stars or black holes, the lower frequency cut-off of the detector will open the window to a number of new sources. These will include the end stage of inspirals, plus merger and ringdown of intermediate mass black holes, where the masses of the component bodies are on the order of a few hundred solar masses. There is also the possibility of observing intermediate mass ratio inspirals, where a stellar mass compact object inspirals into a black hole which is a few hundred to a few thousand times more massive. In this article, we investigate some of the data analysis challenges for the Einstein Telescope such as the effects of increased source number, the need for more accurate waveform models and some of the computational issues that a data analysis strategy might face.

#### Oct 06, 2009

0910.0380 (/preprints) 2009-10-06, 12:47

## [0910.0002] Black hole mergers: the first light

Authors: Elena M. Rossi, G. Lodato, P. J. Armitage, J. E. Pringle, A. R. King

Date: 1 Oct 2009

Abstract: The coalescence of supermassive black hole binaries occurs via the emission of gravitational waves, that can impart a substantial recoil to the merged black hole. We consider the energy dissipation, that results if the recoiling black hole is surrounded by a thin circumbinary disc. Our results differ significantly from those of previous investigations. We show analytically that the dominant source of energy is often potential energy, released as gas in the outer disc attempts to circularize at smaller radii. Thus, dimensional estimates, that include only the kinetic energy gained by the disc gas, underestimate the real energy loss. This underestimate can exceed an order of magnitude, if the recoil is directed close to the disc plane. We use three dimensional Smooth Particle Hydrodynamics (SPH) simulations and two dimensional finite difference simulations to verify our analytic estimates. We also compute the bolometric light curve, which is found to vary strongly depending upon the kick angle. A prompt emission signature due to this mechanism may be observable for low mass (10^6 Solar mass) black holes whose recoil velocities exceed about 1000 km/s. Emission at earlier times can mainly result from the response of the disc to the loss of mass, as the black holes merge. We derive analytically the condition for this to happen.

#### Oct 02, 2009

0910.0002 (/preprints) 2009-10-02, 05:30

## [0910.0207] Post-Newtonian and Numerical Calculations of the Gravitational Self-Force for Circular Orbits in the Schwarzschild Geometry

Authors: Luc Blanchet, Steven Detweiler, Alexandre Le Tiec, Bernard F. Whiting
Date: 1 Oct 2009

Abstract: The problem of a compact binary system whose components move on circular orbits is addressed using two different approximation techniques in general relativity. The post-Newtonian (PN) approximation involves an expansion in powers of v/c<<1, and is most appropriate for small orbital velocities v. The perturbative self-force (SF) analysis requires an extreme mass ratio m1/m2<<1 for the components of the binary. A particular coordinate-invariant observable is determined as a function of the orbital frequency of the system using these two different approximations. The post-Newtonian calculation is pushed up to the third post-Newtonian (3PN) order. It involves the metric generated by two point particles and evaluated at the location of one of the particles. We regularize the divergent self-field of the particle by means of dimensional regularization. We show that the poles proportional to 1/(d-3) appearing in dimensional regularization at the 3PN order cancel out from the final gauge invariant observable. The 3PN analytical result, through first order in the mass ratio, and the numerical SF calculation are found to agree well. The consistency of this cross cultural comparison confirms the soundness of both approximations in describing compact binary systems. In particular, it provides an independent test of the very different regularization procedures invoked in the two approximation schemes.

#### Oct 02, 2009

0910.0207 (/preprints) 2009-10-02, 05:26
## public vs. published interfaces

Gilad Bracha is about to set in motion a JSR that may -- in a glacially unstoppable JCP fashion -- eventually address one of my pet peeves with Java: the lack of distinction between public and published interfaces. The latter terms are due to Martin Fowler [PDF, 68K]:

One of the growing trends in software design is separating interface from implementation. The principle is about separating modules into public and private parts so that you can change the private part without coordinating with other modules. However, there is a further distinction -- the one between public and published interfaces. ... The two cases are quite different, yet there's nothing in the Java language to tell the difference -- a gap that's also present in a few other languages.

Yet there's something to be said for the public-published distinction being more important than the more common public-private distinction. Or, in the words of Erich Gamma:

A key challenge in framework development is how to preserve stability over time. The more miles a framework gets the better you understand how you should have built it in the first place. Therefore you would like to tweak and improve it. However, since your framework is heavily used you are highly constrained in what you can change. At this point it is crucial to have well defined APIs and to make it clear to the clients what is published API and what internal code is. For published APIs you should commit to stability and for internal code you have the freedom to change it.

To fully appreciate the kind of pain that this JSR is intended to ease, consider how developers deal with this problem today:

• The Eclipse model, as described by Erich Gamma:

A good example of how I like to see reuse at work is Eclipse. It's built of components we call plug-ins. A plug-in bundles your code and there is a separate manifest where you define which other plug-ins you extend and which points of extension your plug-in offers. Plug-ins provide reusable code following explicit conventions to separate API from internal code. The Eclipse component model is simple and consistent too. It has this kernel characteristic. Eclipse has a small kernel, and everything is done the same way via extension points.

Some other projects have adopted similar conventions. For example, France Telecom is known to maintain the distinction between lib and api packages.

• J2SE implementations consist of two parts:

1. Classes and interfaces implementing the published J2SE APIs.
2. Internal implementation artifacts that aren't meant to be exposed to users of the J2SE library.

Sun generates Javadoc only for the "official" classes. Implementation artifacts are undocumented and are not supposed to be relied on.

Both of these approaches amount to the same thing: convention. Nothing stops you from using the non-published public interfaces. It will be interesting to see what will come out of Bracha's JSR.

### Modules vs Packages

Doesn't this come down to Java not having a modular interface system? In Dylan (IIRC), and I presume other languages, you expose your interfaces separately and possibly independently, depending on what you want exposed, in a separate module file. Does it come down to an inability to expose multiple interfaces dependent on context?

### The Martin Fowler link is

The Martin Fowler link is wrong.

Fixed, thanks.
### C# has internal

public, private and internal. Internal is accessible only to code within the same "assembly", which I guess is much like a module, with the proviso that one "assembly" can nominate its "friends", which then also have access to classes and methods declared "internal".

### My Java is quite rusty

My Java is quite rusty and I don't know much about C#, but I think the default access in Java (when no access modifier is specified) is similar to "internal" (access granted to code from the same package). However, there's no possibility to declare other packages as friends.

Can one selectively declare which internal classes/methods the "assembly friends" in C# have access to? If they get access to everything "internal" this may not be desirable. I understand the proposal as certain packages being able to access more of a package than "the general public" but still not having the same access rights to everything "internal" as the components from the original package itself.

### .NET assemblies and friends

.NET assemblies are arbitrary collections of classes (more of a module than a package), and the classes they contain aren't restricted as to namespace (a namespace is orthogonal to an assembly). Assemblies are also "conceptual" entities, in that an assembly may comprise more than one physical file, allowing a portion of an assembly to be re-deployed without disturbing the remainder. This means that .NET assemblies can be fairly granular and self-contained -- "public" interfaces on an assembly pretty much are "published", while "internal" interfaces are "public-to-me-and-my-friends".

I'm not sure I agree with the idea that a module should simply expose two levels of "public" interfaces -- in the case of a componentised library, it makes a sort of sense, where you would want one "library-level" API for your users, and a "side-door" API to cooperate with other components. But in the general case, there may be more than two levels of access you wish to grant, and the problem becomes rather complicated.

Using the "friend" mechanism, .NET 2.0 allows multiple levels and vectors of access. A single assembly may be divided into several: a central "hub" assembly, containing core code, and multiple "gateway" assemblies containing cooperative code. The "hub" allows internal access to all the "gateways", and the gateways in turn allow internal access to their respective cooperative peers. Access is controlled at a fairly fine grain, and of course any of these assemblies can expose truly "public" interfaces as they see fit. The biggest limitation here is that "friends" must be declared by name, in order to preserve security. That means that you must know ahead of time which other assemblies you're willing to trust, which may not be possible.

### What's the Difference?

Could someone explain what the difference in this case is between a 'public' and 'published' interface? And what's the difference between a 'lib' and an 'api'?

### A public API can be called

A public API can be called by other classes in the same application. A published API has been made available outside your codebase. You can change a public API more easily than a published API. If you change a public API you have to change other code in your application; refactoring tools can do this for you automatically. If you change a published API you have to coordinate the change with all the users of your API and you lose all your powerful tool support.
In this case, the 'api' package contains code that is published to third parties and the 'lib' package contains code that is used in the implementation of the 'api' package but that is not guaranteed to be stable between releases.

### Tool support for published API migration
If you change a published API you have to coordinate the change with all the users of your API and you lose all your powerful tool support.

There are refactoring tools which allow you to package up automated migration scripts which can be run by external API users. They just aren't in common use yet, as they tend to require that all of your users be using the same refactoring tools (and the tools themselves are fairly new). I wouldn't be surprised if we see a standardization effort in the next couple of years. There's too much value in allowing easy API migrations.

### Re: Tool support for published API migration
Sounds interesting. Do you know the names of any off the top of your head?

### Pretty much any Java IDE...
has at least a simple implementation of it. IntelliJ IDEA has long had a limited "Migrate" utility that handles class moves and renames. Eclipse is adding a fuller utility that also handles method moves, renames, and signature changes. I believe JDeveloper is already shipping with similar. NetBeans is now shipping with the JackPot project, which provides a fairly full DSL for both migrations and for creating code audits/quickfixes.

### That's true, but because
That's true, but because there is no standard for this it doesn't begin to address the problem. The distinction between a public and published interface is that you have no control over the users of your API, and so cannot force them to use any particular IDE.

### Modula-3
An alternative to having a fixed published/public/protected/internal/whatever scheme to be verified is the one used in Modula-3 of having an arbitrary number of "partial revelations" (I hope I remember correctly) of the interface provided by a module.

### Can't you do that just by
Can't you do that just by creating a new module that imports the whole old module interface but only exports part of it?

### Not just by convention
At least in Java there are various means of preventing access to public but unpublished APIs. Gilad Bracha's upcoming JSR and the sister JSR 277 (module system) should make it easy and standardized but it is possible without them.

Eclipse uses the OSGi framework for this kind of thing, which "wires" plug-ins (OSGi "bundles") together using special ClassLoaders. These can ensure that only the right packages are being used in a given dependency.

NetBeans, in a somewhat similar fashion, permits package restrictions. By default no packages from a module JAR are available for use by other modules (i.e. cannot be linked against). As the author of a module you may enumerate certain packages to be "public" and usable by other modules. (Or you may list packages to export to only certain named "friend" modules.) As a back door, a module may request to use any package from another module - if it "signs a waiver" by declaring a dependency on the exact implementation version of the provider module, thus making the fragile nature of this dep explicit. All of this is enforced at compile time (through the Ant build infrastructure) and at runtime (through ClassLoaders).

I believe the J2SE partially enforces the set of published APIs by restricting access to internal packages via SecurityManager, but I don't know much about this.
There is a third basic technique for differentiating public from published classes/packages that I know about: validation. In the Java world, the "100% Java" testing tool is probably one of the earliest examples. More generally, Lattix lets you define hierarchical "rules" about which components in an app can access which other components (or external components such as the JRE or libraries), as part of your application modelling; rule violations can be browsed interactively in the GUI tool, or reported as warnings or errors during a build. There are probably many other examples in this area. Of course this style presumes that the user of the API cares enough about the public/published distinction to explicitly run such a tool.

With regard to upgrading clients of published APIs after an incompatible change (or deprecation): this is indeed a relatively young area for tools, especially among open-source choices. I am following the Jackpot project which may be successful in providing this kind of functionality for Java apps. It uses javac's native syntax tree and semantic model to represent a body of code - currently undergoing standardization. Jackpot can then run queries or transformations on the model, which can be written in a simple DSL for the common cases or in Java for more sophisticated cases.

### Re: Not just by convention
jglick wrote:
> ... Bracha's upcoming JSR and the sister JSR 277 (module system) should make it easy and standardized...

Dalibor's take on this is entertaining as usual:

Last time someone tried to get people excited around one of those ueber-exciting JSRs that will totally reshape the future of Java (deployment), was JSR 277. That's the one where OSGi meets Maven, and they have a love child that is like a CPAN for Java, only with small JARs. Sorta. Kinda. It's hard to tell since the JSR 277 has not produced anything since its inception last summer, besides enthusiastic exclamations of support when it was announced.

### The man has a point.
The man has a point. JSR 277 is one of the most important JSRs in quite a while and they haven't made any effort to communicate with the users.

### Standardization is very important at this level
> Gilad Bracha's upcoming JSR and the sister JSR 277 (module system) should make it easy and standardized but it is possible without them.

True, but standardization would provide many benefits, particularly in the area of tooling. Right now, just about every Java IDE and build system has some conception of project structure, modularization, dependency rules, and (sometimes) versioning. Unfortunately, all of these conceptions are independent, extra-linguistic, and painful to map between. Putting these concepts into Java would make it simple to move between tools. It would also allow the tools to become more powerful, including analysis and critique of modularization, and automatic refactoring to improve modularization.

### Two thoughts
First is that package-protected is the likely mechanism to allow this behavior if you really need it. If it's your code - then it's in your package, and if you didn't make it public then nobody outside your package can call it.

My second thought is that this is yet another effort to distrust the programmer and will result in a bunch of corner cases that don't quite work - much like Java's crummy type system doesn't quite work either.
Fact is, you don't need the compiler to enforce this stuff really - just put in a comment/naming convention to make your intent known and then break stuff when people don't follow the rules. They'll learn.

I can't count the number of times I've discovered some method made protected that probably should have been declared public. Compiler enforced access control is a waste of resources.

### The purpose of the extension
The purpose of the extension is to avoid having to pack everything that "needs to interoperate" on a level below public access into a single, huge package.

> I can't count the number of times I've discovered some method made protected that probably should have been declared public.

Depends on who wrote the code. I usually think before I make something public/protected/private. When people change my modifiers to access a protected method because they "need it", they're usually just taking the wrong approach.

I see no arguments against compiler-checked access modifiers other than that it makes it a bit more difficult to write some messy code in a hurry that "I'll clean up one day". It's not the compiler's fault if programmers make the wrong choices. And I don't see it as a kind of "distrust". I see it as a way of being able to make certain guarantees about how the objects behave and enforce the correct way to use them.

> and then break stuff when people don't follow the rules. They'll learn.

If I rewrite my internal interface, and somebody broke the rules and uses it somewhere else extensively, then it's me who has to clean up his mess so I get the program to compile to test my new code. If somebody messes with my private state at some point without my knowledge which causes something to crash later in my code then it's again me who'll have to hunt down the source of the error.

### I can't count the number of
> I can't count the number of times I've discovered some method made protected that probably should have been declared public.

Me, too. Students seem to be taught that if they don't know if a method should be public, then make it private (i.e., be defensive). This seems like a good idea, but when the protection level is enforced by the compiler it can be a real PITA. The problem is that violating these mechanisms causes compiler errors rather than warnings. If I could reuse a whole library except for one method that is private, then I would rather see a warning and take my chances than have to write/maintain my own version of the library. You could always add another "enforce" modifier for cases where you really want a hard guarantee that a private method is private (i.e., where you are using the mechanism to enforce security).

> If somebody messes with my private state at some point without my knowledge which causes something to crash later in my code then it's again me who'll have to hunt down the source of the error.

If you are talking about code reuse within a team, then yes, this can happen. But a warning on compile would show the location of the violation just as effectively as an error, wouldn't it? If, however, you were talking about reuse externally — publishing a library for anyone to reuse — then you don't even have to know if someone violates your contract. They might get an error, and they might even (incorrectly) blame it on your code, but you don't have to know nor care about it.
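As an aside, the warn-rather-than-refuse policy argued for above can at least be prototyped in languages with dynamic attribute lookup. A minimal sketch in Python (hypothetical module and names; Python 3.7+ for module-level `__getattr__`), offered only as an illustration of the policy, not as what javac could do:

```python
# mylib.py -- hypothetical sketch of "warn, don't refuse"
import warnings

__all__ = ["stable_api"]      # the published surface

def stable_api():
    return _impl()

def _impl():                  # the real internal helper
    return 42

def __getattr__(name):
    # PEP 562: called only for names not defined above, so the
    # unpublished spelling keeps working -- loudly.
    if name == "internal_helper":
        warnings.warn("mylib.internal_helper is unpublished and may "
                      "change without notice", FutureWarning, stacklevel=2)
        return _impl
    raise AttributeError(name)
```

A client writing `mylib.internal_helper()` still ships, but every use is flagged at the call site -- the warning-instead-of-error behaviour the comment asks for, moved to runtime.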
### Onus is on the developer using the code
> If I rewrite my internal interface, and somebody broke the rules and uses it somewhere else extensively, then it's me who has to clean up his mess so I get the program to compile to test my new code.

Sounds like you have a dependency problem in your workflow. I don't know where you work but in my experience, the one who writes to your interface is likely the one who owns the program, and if he wants to ship with your updates, it's on him to fix the program. All you have to honor is your interface.

Stepping into the other guy's shoes - if I need to ship something and the only way I can make your code work for me is to break encapsulation - then I'll do it happily and deal with the consequences. After all, if I have to bust your interface to accomplish my task, maybe your interface wasn't adequate.

In my experience, these enforced limits just make existing code less and less useful for people who want to do new things you didn't anticipate.

### My opinion was based on
My opinion was based on what I'm working on: (lower-level) APIs in a huge program that are used by different programmers in other modules, but within the same app, so the whole thing wouldn't start up until everything is fixed. For developing stand-alone libraries, that's a somewhat different matter. Although if you sell your library for some $1000 and keep breaking your clients' code and just tell them "told you so" - they might go looking for a different vendor, even if it's their fault.

So this was with that kind of application development in mind. In other cases, it may not be essential. At home I'm writing Lisp. CLOS has no access modifiers and it's fine with me. I also enjoy dynamic typing. Just because I am more productive that way doesn't mean that you can live without access control/static typing in large applications with hundreds of megabytes of source code.

I don't think it makes code less useful - unless wrong access is set, which indeed is annoying. I've run into this as well. Had to copy & paste an entire source file from a 3rd party library because of a totally unnecessary "private" modifier. However, I still blame it on the developers, not the language ... Maybe the interface isn't adequate, maybe you're using the wrong library, ... whatever, it's not the language's fault.

### Re: Two thoughts
tblanchard wrote:
> Fact is, you don't need the compiler to enforce this stuff really - just put in a comment/naming convention to make your intent known and then break stuff when people don't follow the rules.

You sound like Erik Naggum:

if you live among thieves and bums who steal and rob you, by all means go for the compiler who smacks them in the face. if you live among nice people who you would _want_ to worry about a red-hot plate they can see inside a kitchen window or a broken window they need to look inside your house before reporting as a crime (serious bug), you wouldn't want the kind of armor-plating that you might want in the 'hood. that doesn't mean the _need_ for privacy is any different. it's just that in C++ and the like, you don't trust _anybody_, and in CLOS you basically trust everybody. the practical result is that thieves and bums use C++ and nice people use CLOS. :)

I'm almost convinced by his argument.

### Be wary
These arguments by emotional analogies are usually misleading and unhelpful. See how they intend to play on your emotions by talking about "thieves and bums" vs "nice people"? Don't fall for it.

Personally, I like the simplicity of public vs private.
I think their value is seriously undermined when they are just a "convention". I don't think that I'm a criminal or surrounded by criminals for having this opinion.

### What starts out simple...
...quickly bogs down in complex scenarios. Nothing wrong with public and private until (a) someone really needs access to private definitions that the original designer could not foresee - problems with the Open-Closed principle; or (b) the software grows to a level where a private/public dichotomy is no longer sufficient for all the abstraction boundaries that are required (which is what this thread is about).

From my perspective, language designers should recognize that the visibility properties are "meta" information. As such, coming up with a way to manage meta-information holds the key to coming up with a long-term solution (as opposed to a band-aid). Sure it makes it simple to intertwine metadata with the implementation details, but it also presents problems of flexibility.

The problem with the visibility of private, protected, public, package (or whatever) is that it does not take into account that visibility of methods is very much a function of the layer of abstraction. What might be public at one level should be private at another. And that point where the abstraction takes over is not always on a nice us-vs-them boundary.

### I am wary
I have been treated like a criminal by other developers who locked me out of some useful functionality. In the end I end up picking their locks to make things work. My choice of course, but I've got to eat too.

I have no trouble with the "idea" of public/private, but why must the compiler "enforce" it by refusing to build the program when I disagree with the author? It's only his opinion vs mine. Intent noted, now please stand aside while I ship this thing.

I think we wouldn't be having this discussion if it were possible to build a program with a flag like -dont-enforce-access-controls so the program would treat all violations of encapsulation as warnings. If you want to build strict - hey - god bless. If you want to build sloppy - hey - good luck.

This is why I like the Objective-C compiler. It's very polite and quite helpful with warnings - but it will still build your program if it can. You ignore warnings at your own risk.

### Software workaround for a project management problem?
> I have no trouble with the "idea" of public/private, but why must the compiler "enforce" it by refusing to build the program when I disagree with the author? It's only his opinion vs mine. Intent noted, now please stand aside while I ship this thing.

If your code has been overlooked as a customer of that feature, which is now getting in the way of shipping your application, the right solution is to go up the chain and have you added as a customer. Bypassing the compiler can have consequences like:

* The other guy can at some point rename/rewrite/delete his rightfully private method.
* The other guy could assume that a certain condition does not exist, looking (normally) only at his own code for calls to the private method. Your calling his private method may violate that condition, and now he has a bug to chase because of you.

Even without static checks, if the method has comments like "don't call me unless you're in the ABC group", then the right thing is not to call it, but to find a mediator/manager/architect/etc. to discuss it.

### Zero, one, infinity rule
This seems to be about change control boundaries. You start at level zero with a language that has no inner boundaries.
Then you say: we need one change control boundary. You add the public/private distinction. That is OK for a while, then you start thinking: I wish we had two levels of change control. Private, public, published. How plausible is it that two will be enough? Surely this is the point to move to a more general structure.

### More General Structures
For sure! I believe the creators of Java have said repeatedly that they made it pretty darned simple on purpose, explicitly avoiding things which in the long run might be more powerful, but in the short term would prevent people from learning the language. (single/multiple inheritance, public/private/friend/package, etc.)

It would be neat for there to be "a more general structure" for just about every distinction made in languages. How does one make it something which is maintainable and not have it just lead to spaghetti?

### Rule is a partial order
I understand the "zero, one, infinity rule" as proposing a partial order on the set {0,1,2,∞}. 2 is worse than the others. Maybe 0) a distinction is unnecessary, or maybe 1) a binary distinction is appropriate, or perhaps ∞) a more general structure is best. The zero, one, infinity rule is silent on this.

The underlying idea is that a three-way split is rare. If there really are three options (left, right, straight on?) then the rule misleads; it is much better to have left and right rather than making do with left and filling your code with left,left,left as an idiom for right :-)

The typical situation is that a single binary distinction proves unsatisfactory because a threshold is being applied to a continuous quantity and it turns out that a finer quantisation is required. Adding a second threshold is quite attractive, because it is a small increment in complexity. On the other hand, the same dynamic that is creating the need for finer quantisation is likely to still be active. A third threshold will be added eventually. Meanwhile, although the language considered in isolation is simple, minimising the quantisation error when you only have three levels to play with is an ongoing complication.

The underlying issue is how much money it will cost to propagate a change to a function. If it has been kept private, it should be cheap. If it has been widely published it will be expensive. Perhaps one needs an intermediate level: public. But one is thresholding a continuous quantity (cost in time and money) with a wide dynamic range. Using two thresholds to implement a 3-level quantisation looks like the kind of compromise that will wear badly.

### I've always liked the idea
I've always liked the idea behind Eiffel's access controls, where you provide a list of types that have access to each attribute. An empty list would be equivalent to private, the this type equivalent to protected, and the any type equivalent to public. I've not however written anything in Eiffel so I don't know how well this works in practice. I've also not heard of any other languages employing something similar... anyone out there that knows some or can speak about the utility of this feature?
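For anyone curious what such per-client access lists might feel like in practice, here is a toy emulation in Python -- purely illustrative, with hypothetical names, and nothing like production-grade enforcement (the frame inspection is easily defeated):

```python
import inspect

def exported_to(*allowed):
    """Toy take on Eiffel-style selective export: the decorated
    method may only be called from methods of the named classes."""
    def decorate(method):
        def wrapper(self, *args, **kwargs):
            caller = inspect.currentframe().f_back
            caller_self = caller.f_locals.get("self")
            if type(caller_self).__name__ not in allowed:
                raise PermissionError(
                    f"{method.__name__} is exported only to {allowed}")
            return method(self, *args, **kwargs)
        return wrapper
    return decorate

class Account:
    @exported_to("Auditor")            # roughly: feature {AUDITOR} in Eiffel
    def raw_ledger(self):
        return ["entry 1", "entry 2"]

class Auditor:
    def audit(self, account):
        return account.raw_ledger()    # allowed: caller is an Auditor method

print(Auditor().audit(Account()))      # works
# Account().raw_ledger()               # PermissionError: not in the export list
```

The mapping described in the comment falls out directly: an empty allow-list approximates private, naming the class itself approximates protected, and omitting the decorator is plain public.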
proofpile-shard-0030-380
{ "provenance": "003.jsonl.gz:381" }
# FR2 (4+4+4=12 points)

###### Question:
FR2 (4+4+4=12 points) (a) Let X₁, X₂, ..., X₁₀ be a random sample from N(μ₁, σ₁²) and Y₁, Y₂, ..., Y₁₅ be a random sample from N(μ₂, σ₂²), where all parameters are unknown. Suppose Σᵢ₌₁¹⁰ (Xᵢ − X̄)² = 90 and Σᵢ₌₁¹⁵ (Yᵢ − Ȳ)² = 100. Obtain a 99% confidence interval for σ₁²/σ₂² having the form [b, ∞) for some number b (no derivation needed). (b) 60 random points are selected from the unit interval {x : 0 ≤ x ≤ 1}
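A sketch of how part (a) is usually worked, assuming the garbled ratio in the source really is σ₁²/σ₂² (the subscripts did not survive extraction), using SciPy's F distribution -- a reconstruction, not part of the original exercise:

```python
from scipy.stats import f

n1, n2 = 10, 15
s1_sq = 90.0 / (n1 - 1)     # sample variance of the X's
s2_sq = 100.0 / (n2 - 1)    # sample variance of the Y's

# (S1^2/sigma1^2) / (S2^2/sigma2^2) ~ F(n1-1, n2-1), so a one-sided 99%
# interval of the form [b, infinity) for sigma1^2/sigma2^2 has
b = (s1_sq / s2_sq) / f.ppf(0.99, n1 - 1, n2 - 1)
print(round(b, 3))          # about 0.35
```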
proofpile-shard-0030-381
{ "provenance": "003.jsonl.gz:382" }
### Author Topic: Frequency and Channels of a Custom Sound

#### Hot_Dog
##### Frequency and Channels of a Custom Sound « on: August 18, 2010, 12:32:28 am »
For those of you who think you will laugh and point at me for asking a reasonable question, please leave immediately.

So, this is part of the code for PlayWav:

Code:
        di                      ; interrupts off so timing is exact
        ld   a, 0FDh
        out  (01h), a           ; select a keyboard group (presumably polled elsewhere for a quit key)
main:
        ld   a, (hl)
        and  0Fh                ; low nibble = first 4-bit sample
        ld   c, a
        call Label15
        ld   a, (hl)
        srl  a
        srl  a
        srl  a
        srl  a                  ; high nibble = second 4-bit sample
        and  0Fh
        ld   c, a
        push iy
        pop  iy                 ; slow this down a little
        call Label15
        inc  hl                 ; next packed byte
        dec  de                 ; de = bytes remaining
        ld   a, d
        push iy                 ; slow this down a little
        pop  iy
        or   e
        jr   nz, main
        ei
        ret
quit:
        pop  af
        pop  af                 ; fix the stack
        pop  af
        ret
Label15:                        ; output one 4-bit sample, c = 0..15
        ld   b, c
        inc  b
        ld   a, 0D3h            ; one link-port output level
Label18:
        out  (00h), a           ; hold it for (c+1) iterations...
        djnz Label18
        ld   a, 10h
        sub  c
        ld   b, a
        ld   a, 0D0h            ; the other level
Label19:
        out  (00h), a           ; ...then (16-c) iterations: duty cycle tracks the sample
        djnz Label19
        ret

1. If I were to use a custom sound, one single recorded note, would it be possible to adjust the frequency and duration of that sound to allow more notes?
2. Would it be possible to have 2--or even 4--channels of said single recorded notes at said frequencies and durations?

I'm not asking for code (cause I have an idea of what I want to do), I'm just asking if it's reasonably possible.

#### DJ Omnimaga
##### Re: Frequency and Channels of a Custom Sound « Reply #1 on: August 18, 2010, 01:14:42 pm »
> For those of you who think you will laugh and point at me for asking a reasonable question, please leave immediately.

Don't worry that won't happen here (if someone does, he won't be able to do it more than twice)

#### calc84maniac
##### Re: Frequency and Channels of a Custom Sound « Reply #2 on: August 18, 2010, 06:52:23 pm »
So are you wanting some sort of WAV playback, or do you want beeps?

#### Hot_Dog
##### Re: Frequency and Channels of a Custom Sound « Reply #3 on: August 18, 2010, 07:03:17 pm »
> So are you wanting some sort of WAV playback, or do you want beeps?

WAV. What I'm looking at is a recorded note (such as .1 seconds of violin, like they do with midi), and since the code above can play waves, I'm wondering if that note can be adjusted for length and pitch, and played in two or 4 channels.
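An aside on the routine above: the Label15 loop is effectively a 4-bit PWM DAC -- each period is a constant 17 port writes, split between two output levels in proportion to the sample. The same duty-cycle idea re-expressed in Python, purely as an illustration (not code from the thread):

```python
def pwm_nibble(sample):
    """One Label15 period: (sample + 1) ticks at one level and
    (16 - sample) at the other -- 17 ticks total, so the period is
    constant and the duty cycle tracks the 4-bit sample."""
    return [1] * (sample + 1) + [0] * (16 - sample)

def play_packed(data):
    """Each byte packs two samples, low nibble first (as in 'main')."""
    ticks = []
    for byte in data:
        ticks += pwm_nibble(byte & 0x0F)         # 'and 0Fh'
        ticks += pwm_nibble((byte >> 4) & 0x0F)  # the four 'srl a's
    return ticks

print(len(play_packed(b"\x5A")))  # two samples -> 34 ticks
```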
#### Runer112
##### Re: Frequency and Channels of a Custom Sound « Reply #4 on: August 18, 2010, 10:00:39 pm »
Your routine is built to play how many bits per sample, and at what sample rate?

#### Hot_Dog
##### Re: Frequency and Channels of a Custom Sound « Reply #5 on: August 18, 2010, 10:17:17 pm »
> Your routine is built to play how many bits per sample, and at what sample rate?

Well, it's not actually my routine. But it is 11 kHz, 8-bit mono

#### calc84maniac
##### Re: Frequency and Channels of a Custom Sound « Reply #6 on: August 18, 2010, 10:19:40 pm »
It's actually 4-bit, I believe.

#### Hot_Dog
##### Re: Frequency and Channels of a Custom Sound « Reply #7 on: August 18, 2010, 10:23:41 pm »
> It's actually 4-bit, I believe.

Alright, granted, but that still doesn't answer my questions

EDIT: At least the file sent and converted to a program is 8-bit
« Last Edit: August 18, 2010, 10:24:57 pm by Hot_Dog »

#### ztrumpet
##### Re: Frequency and Channels of a Custom Sound « Reply #8 on: August 18, 2010, 10:24:52 pm »
This also looks like an interesting topic. I'm curious as well. Thanks for asking a great question Hot Dog.

#### Hot_Dog
##### Re: Frequency and Channels of a Custom Sound « Reply #9 on: August 18, 2010, 10:26:02 pm »
> This also looks like an interesting topic. I'm curious as well. Thanks for asking a great question Hot Dog.

Well, I'm tossing around the idea with having about 16 "midi" instruments and making music for S.A.D. for 15 MHz calculators. If I can't get enough channels and can't change single notes, however, it ain't going to happen
#### calc84maniac
##### Re: Frequency and Channels of a Custom Sound « Reply #10 on: August 18, 2010, 10:28:19 pm »
> > This also looks like an interesting topic. I'm curious as well. Thanks for asking a great question Hot Dog.
> Well, I'm tossing around the idea with having about 16 "midi" instruments and making music for S.A.D. for 15 MHz calculators. If I can't get enough channels and can't change single notes, however, it ain't going to happen

So you're planning to run this on top of a game? :O I'm not sure how possible this is when using all of the processing time.

#### Hot_Dog
##### Re: Frequency and Channels of a Custom Sound « Reply #11 on: August 18, 2010, 10:30:10 pm »
> So you're planning to run this on top of a game? :O I'm not sure how possible this is when using all of the processing time.

Like I said, it's an idea I'm tossing around. But if nobody knows if it's possible for many channels, as well as if it's possible to have a single recorded note and change pitch and duration...

* Hot Dog runs off to experiment

EDIT: I'll keep up on this post as to what I find out, for those of you curious
« Last Edit: August 18, 2010, 10:32:41 pm by Hot_Dog »

#### calc84maniac
##### Re: Frequency and Channels of a Custom Sound « Reply #12 on: August 18, 2010, 10:32:45 pm »
I believe that with WAV sound, the pitch/speed are linked. This is why typically when speeding up the sound goes higher and slowing down it goes lower. Apparently changing the speed without changing the pitch or vice versa is a problem even on PCs.

#### Hot_Dog
##### Re: Frequency and Channels of a Custom Sound « Reply #13 on: August 18, 2010, 10:35:14 pm »
Well if it can be sped up or down, that's not an issue. I don't care about the speed since most notes will repeat (in order to be held out) anyways. After all, on computer midi, if you start with a single recorded note (like a piano note), the sound will run faster and faster the higher up you go.
« Last Edit: August 18, 2010, 10:35:56 pm by Hot_Dog »
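The pitch/speed linkage in Reply #12 is easy to see in code. A minimal NumPy sketch (illustrative only, not from the thread): naive resampling repitches a note and changes its length at the same time, which is exactly the effect the posters describe.

```python
import numpy as np

def repitch(note, semitones):
    """Naive resampling: play the same samples at a different rate.
    Raising the pitch necessarily shortens the note."""
    step = 2 ** (semitones / 12)            # equal-tempered frequency ratio
    idx = np.arange(0, len(note), step)     # read pointer moves faster/slower
    return note[idx.astype(int)]

rate = 11025                                # ~11 kHz, as in the thread
t = np.arange(rate) / rate
a440 = np.sin(2 * np.pi * 440 * t)          # one second of A4
a880 = repitch(a440, 12)                    # one octave up...
print(len(a440), len(a880))                 # ...and about half as many samples
```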
#### Hot_Dog
##### Re: Frequency and Channels of a Custom Sound « Reply #14 on: August 19, 2010, 04:38:12 pm »
Looks like I should leave the improbable to calc84maniac. Using a short note to make a long note (such as a string orchestra) is almost impossible without hearing some clicks and pops, and if I have all short notes and no long background notes, it's not worth using recorded sounds instead of having all beeps (especially since beeps have no issues with long notes).

However, I'm still thinking about music for 15 MHz S.A.D. if I can think of some decent beep-beep music. It's likely I'll do some 4 channel stuff
« Last Edit: August 19, 2010, 04:43:54 pm by Hot_Dog »
proofpile-shard-0030-382
{ "provenance": "003.jsonl.gz:383" }
Research Article

## Calculated Vibrational Properties of Ubisemiquinones

Department of Physics and Astronomy, Georgia State University, 29 Peachtree Center Avenue, Atlanta, GA 30303, USA

Computational Biology Journal, Volume 2013 (2013), Article ID 807592, 11 pages. http://dx.doi.org/10.1155/2013/807592

Received 15 October 2012; Accepted 27 November 2012

Copyright © 2013 Hari P. Lamichhane and Gary Hastings. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
proofpile-shard-0030-383
{ "provenance": "003.jsonl.gz:384" }
The idea was simple: attach the GPS module to my PC, read the data using Python script and make it open Google map with the exact. Exercise 3: A Java program that implements a top-down syntax analyzer based on a specific grammar. JR's 2-channel AM pistol-grip Python radio has two important features that other radios in this price range can't match, making it the radio of choice for beginners and budget minded racers. In Pulse Position Modulation the amplitude of the pulse is kept constant as in the case of the FM and PWM to avoid noise interference. Therefore, the transmission of the carrier wave is waste of power. Then the required channel bandwidth for an SSB signal is W. One way to generate even more complex sound effects maybe to generate multiple AM and/or FM waves and add them together. Quadrature Amplitude Modulation • QAM is a combination of ASK and PSK – Two different signals sent simultaneously on the same carrier frequency – – Change phase and amplitude as function of input data 1 +d. AMPLITUDE MODULATION(AM) Amplitude Modulation (AM) Python's simple, easy to learn syntax emphasizes readability and therefore reduces the cost of program maintenance. The XR Series features the largest voltage range in Magna-Power's product offering, from 5 Vdc to 10,000 Vdc, all…. This is different from what we saw in the AM scheme, where the only sinusoidal modulation signal would originate only two lateral frequencies. Powered by Amplitude Modulation (AM) Technology. PWM rapidly turns the output pin high and low over a fixed period of time. The Mode list includes AM, FM, WFM, USB, LSB, CW/U, and CW/L. The upper and. Connecting a PC Printer Port to Electronics with Python is closely related to Raspberry Pi. • Raw data • AM - Amplitude Modulation •OOK – On / Off Keying XCON 2013 RFIDler a Software Defined RFID tool. The module includes a GUI implementation, which facilitates the amplitude modulation analysis by allowing changing parameters on line. Expert Answer. Modulate the phase, frequency or amplitude, or generate triggered bursts or sweeps from an internal or external source. Prior to this, information was transmitted via on/off keying of a continuous wave transmitter using Morse code or some equivalent. Mathys Lab 8: BPSK, Amplitude and Frequency Shift Keying, Signal Space 1 Introduction Amplitude modulation (AM) can easily be used for the transmission of digital data if the. The following outlines the Python code used:. The basic difference between continuous wave Modulation and Pulse modulation is : In continuous wave modulation (amplitude modulation, frequency modulation, phase modulation) the carrier wave used is continuous in nature, while in case of pulse modulation, the carrier wave is in the form of pulses. 0 Purpose This appendix was prepared with the cooperation and assistance of the Range Commanders Council (RCC) Frequency Management Group. The default value for opt is 0. RF Signal Generators. I have tried throttling the individual python processes with renice, to no avail. To conclude, for a given bit rate, the bandwidth used was divided by m compared to the BPSK modulation, with m being the total number of bits per symbol. In Audacity it can be seen as an offset of the recorded waveform away from the center zero point. com/ Page 12 Information signal is known as baseband signal or modulating signal. 
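Several of the fragments above gesture at the basic amplitude-modulation relationship without showing it. A minimal NumPy sketch of double-sideband AM with carrier (all frequencies and the modulation index are arbitrary illustration values, not recovered from any of the sources quoted here):

```python
import numpy as np

fs = 44100                                   # sample rate (arbitrary here)
t = np.arange(fs) / fs                       # one second of time axis
message = np.sin(2 * np.pi * 5 * t)          # 5 Hz baseband "audio"
carrier = np.sin(2 * np.pi * 1000 * t)       # 1 kHz carrier
m = 0.5                                      # modulation index
am = (1 + m * message) * carrier             # double-sideband AM with carrier
```

At full modulation (m = 1) the two sidebands together carry up to a third of the total transmitted power, matching the familiar figure quoted in this text.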
Analog and Digital Modulation Toolkit for Software Defined Radio: Python sees each block as a separate class defined in the toolkit. Amplitude modulation changes the amplitude of a radio wave in concert with the message signal. AM modulation: we can see that an upconversion mixer is a natural amplitude modulator; if the input to the mixer is a baseband signal A(t), then the output is A(t)·cos(wc·t). This project uses a Python script built on RPi.GPIO. DEF_PHASE_LIST(1, phlist, phinc) applies the function DEF_PHASE_LIST, defined in the table of the last chapter, to the pulse program. The projects here too are in Python and, with modifications, will work on the Raspberry Pi and vice versa. The high-affinity zinc-uptake system znuACB is under control of the iron-uptake regulator (fur) gene in the animal pathogen Pasteurella multocida. A Python 2.7 program generates pulse-width modulation on digital I/O 0 of the Analog Discovery. Read a file line by line and print it. The demo window is titled with root.title('AM Modulation'). Amplitude modulation was the first type of modulation used by mankind for transmitting messages over long distances, and it is one of the simplest forms of modulation. The simplest form of AM can be written, given a message signal m(t), as s(t) = (1 + mu·m(t))·cos(2·pi·fc·t). One example uses baseband 16-QAM quadrature amplitude modulation. Pilot/signature-pattern-based modulation tracking is covered further below. Before getting into alternate uses, let's look at the fundamental stages of an envelope and the variations you may come across. I have taught assembly-language programming of Intel-compatible chips. One trick that is nice for testing Python scripts on the command line is a shell heredoc:

    python << EOF
    import sys
    print(sys.version)
    EOF
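As a quick check of that AM equation, the following numpy sketch generates a conventional AM waveform; the tone and carrier frequencies, sample rate and modulation index are assumed values for illustration only.

    import numpy as np

    fs = 48000                 # sample rate, Hz (assumed)
    t = np.arange(fs) / fs     # one second of samples
    fm, fc = 440.0, 10000.0    # message and carrier frequencies (assumed)
    mu = 0.5                   # modulation index, 0 < mu <= 1

    m = np.sin(2 * np.pi * fm * t)                 # message m(t)
    s = (1 + mu * m) * np.cos(2 * np.pi * fc * t)  # AM waveform s(t)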
Pulse-width modulation: in the PWM technique, we produce a square wave with a controllable duty cycle. Complex (IQ) sampling, by contrast, can be done at or very near the carrier frequency instead of at much more than twice it. I have taught assembly-language programming of Intel-compatible chips as well as PC hardware interfacing. Amplitude-shift keying represents the two binary values (0 and 1) of digital data by two different amplitudes of the carrier signal, keeping frequency and phase constant. DC offset is a potential source of clicks, distortion and loss of audio volume. When a carrier is amplitude-modulated with a pure sine wave, up to 1/3 (33 percent) of the overall signal power is contained in the sidebands. I wondered if there is some kind of implicit PWM throttling for a Python instance not launched from a shell. From what I saw on the internet I have two possibilities: (i) generate brown noise and modulate it, or (ii) start from a brown-noise wave file. Tools used: Python, SQL, Excel, Matlab, pandas, numpy, scikit-learn, NLTK, Beautiful Soup. Abstract: following the work of the AM Working Group (AMWG) for the UK Institute of Acoustics (IOA), a method for the quantification of amplitude modulation from wind turbines has been proposed. On this page, we'll get to know our new friend the Fourier transform a little better; here F is the Fourier transform, U the unit step function, and y the Hilbert transform of x. The process of recovering the message signal from the received modulated signal is known as demodulation.
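To make the duty-cycle idea concrete, here is a short sketch using the RPi.GPIO library mentioned earlier; it runs only on a Raspberry Pi, and the pin number, frequency and fade timing are assumptions for illustration.

    import time
    import RPi.GPIO as GPIO   # available on the Raspberry Pi only

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(18, GPIO.OUT)          # BCM pin 18, an arbitrary choice

    pwm = GPIO.PWM(18, 1000)          # 1 kHz square wave
    pwm.start(0)                      # begin at 0% duty cycle
    try:
        for duty in range(0, 101, 10):
            pwm.ChangeDutyCycle(duty)  # widen the pulse in 10% steps
            time.sleep(0.5)
    finally:
        pwm.stop()
        GPIO.cleanup()

Sweeping the duty cycle like this is the same technique used to fade LEDs, as mentioned later in the text.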
Both pregnancy and IBD are associated with altered immunology and intestinal microbiology. Double-sideband AM means the transmitted signal is spread out in frequency over a bandwidth which is twice the highest frequency in the signal. I also use C++ and Java, often with Python. Some simple properties of the Fourier transform will be presented with even simpler proofs. Amplitude modulation: csound also has many opcodes related to frequency modulation. In this article, I try to show how a baseband signal is transformed under FM modulation using a Python implementation. In commercial broadcast applications, for a purely monaural station, the maximum modulation index = 75/15 = 5, coming from the maximum carrier deviation of 75 kHz and the maximum modulation frequency of 15 kHz. I have seen multiple mathematical notations, many of which follow one algorithm or another, but since I am useless at mathematics I am unable to translate them into code. The components (fH − fL) and (fH + fL) are known as the lower and upper side frequencies. Amplitude modulation is wasteful of bandwidth. The carrier wave c(t) is completely independent of the information-bearing signal m(t). I am working on a recommendation application; I am using Firebase to store information about users, with checkboxes for health status. Creator of the MotorPiTX robotics add-on board for the Raspberry Pi. If the modulation index $\mu=1$, then the power of the AM wave is equal to 1.5 times the carrier power (Modulation and Demodulation, Chapter 9). Python program to find the longest palindrome: as we all know, a palindrome is a word that equals its reverse. The collector-modulation method is an example of high-level amplitude modulation. A modulating wave, which in theory could be another sine wave, typically at a lower audio frequency, is superimposed upon the carrier. In AM schemes, the modulation index refers to the amplitude ratio of the modulating signal to the carrier signal; the index tells you how much the AM/FM modulating signal affects your output signal relative to the carrier. The output of a delta modulator is a bit stream of samples at a relatively high rate. This video example will demonstrate how to save a screen capture with a few commands and tools, using Python as the programming language. The Raspberry Pi can be programmed using various programming platforms. To understand frequency modulation, you first have to understand amplitude modulation. As a beginner, I take 'FM synthesizer' to mean: "using a sine wave to control the frequency of another sine wave." I tried to generate a tone of 1000 Hz that deviates 15 Hz six times a second.
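Taking that reading literally, a minimal numpy sketch looks like this; the tone, deviation and vibrato rate match the 1000 Hz / 15 Hz / six-times-a-second figures quoted above, while the sample rate is an assumption.

    import numpy as np

    fs = 44100                  # sample rate (assumed)
    t = np.arange(fs) / fs      # one second
    fc, dev, rate = 1000.0, 15.0, 6.0   # tone, deviation, vibrato rate

    # Instantaneous frequency f(t) = fc + dev*sin(2*pi*rate*t); the phase
    # is 2*pi times the integral of f(t), which keeps the tone continuous.
    phase = 2 * np.pi * (fc * t
                         - dev / (2 * np.pi * rate) * np.cos(2 * np.pi * rate * t))
    tone = np.sin(phase)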
The modulated signal carries the information across the whole band except at the carrier frequency itself. ICYMI, Python on Microcontrollers: ESP32-S2 Hack Chat, CircuitPython 5 (May 6, 2020). The authors provide the first evidence that formant perception in non-speech sounds is improved by F0 modulation. St. Jude made a $40 million equity investment in Spinal Modulation, a company that has developed Axium, an "innovative neuromodulation therapy that provides a new pain management option for patients with chronic, intractable pain," a company press release revealed. The camera has a slot for a micro SD card. Exercise: remove all the lines that contain the character 'a' in a file and write the rest to another file. I'd like your inputs on a good starting point, since the field is so vast, and on any resources out there to help navigate this route. Examples of this would be analogue TV or radio station transmissions. In AM, we have an equation that looks like this: $A_{signal}(t) = A(t)\sin(\omega t)$. We can also see that the phase of this wave is irrelevant and does not change, so we do not even include it in the equation. Thanks to this you can easily tell if something was sent over the air. With the help of fast Fourier transforms (FFT), the modulation index can be obtained by measuring the sideband amplitude and the carrier amplitude. Amplitude modulation by a carrier sine wave is by far the most common variant in use. Hybrid screening (also known as "cross-modulated screening" and by many other names) places the minuscule FM dots on a regularly spaced AM grid. Exercise: a) plot f on the domain x ∈ [0, 2π]; b) find f′(x). Lab 1: Amplitude Modulator and Demodulator, objective. After a few trials and errors, it started working as expected. Modulation types, analog methods: amplitude modulation (AM), single-sideband suppressed-carrier (SSB-SC-AM), double-sideband suppressed-carrier (DSB-SC-AM), frequency modulation (FM). Digital methods: amplitude and phase-shift keying (APSK-16, APSK-32), amplitude-shift keying (ASK-2, ASK-4, ASK-8), continuous phase modulation (CPM). A single program plots the spectrum of an AM signal. I want to remove this AM modulation so that only the FM modulation remains.
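For single-tone AM, each sideband peak is mu/2 times the carrier peak, so the index follows directly from three FFT magnitudes. A sketch of that measurement, assuming the carrier and modulating frequencies are known in advance:

    import numpy as np

    def am_index_from_fft(x, fs, fc, fm):
        """Estimate mu as (lower + upper sideband) / carrier magnitude."""
        spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
        freqs = np.fft.rfftfreq(len(x), 1 / fs)

        def peak(f):
            return spec[np.argmin(np.abs(freqs - f))]  # nearest FFT bin

        return (peak(fc - fm) + peak(fc + fm)) / peak(fc)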
Corresponding to every analog modulation technique (AM, FM, PM) there is a digital counterpart. Ring modulation is a form of AM, closely related to waveshaping. But I can't remember which design we used: the one that was the modem, or the one that at least one physics professor said was technically not a modem (but "install one in my office too, please"). There are two basic kinds of modulation used to transmit audio via radio-frequency signals: amplitude modulation (AM) and frequency modulation (FM). Related (via the Hockey Schtick): solar physicist Dr. Leif Svalgaard's sunspot reconstruction is discussed further below. With the right software these dongles can be used as an SDR, since the Realtek RTL2832U allows transferring the raw I/Q samples. Design: faecal and serum samples were collected from 46 IBD patients (31 Crohn's disease (CD) and 15 UC) and 179 healthy controls. The amplitude or strength of the high-frequency carrier wave is modified in accordance with the amplitude of the message signal. I'm a new user to Python, and a former member of the Robotics Society. Introduction: digital modulation is extensively used in telecommunications, radar systems and quantum technologies. Simple Python FM synthesis. One of the most popular programming environments for the Raspberry Pi is the Python IDLE. When you superimpose a signal on the carrier by AM or FM, you produce sidebands at the sum and difference of the carrier frequency fC and modulation frequency fM. Envelope demodulation: the envelope demodulator is a simple and very efficient device which is suitable for the detection of a narrowband AM signal. Explain the square-law demodulation and envelope demodulation of an AM wave. So, the power required for transmitting an AM wave at full modulation is 1.5 times the carrier power.
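In software, the analytic-signal trick plays the role of that envelope-detector circuit. A minimal scipy sketch, with all frequencies assumed for illustration:

    import numpy as np
    from scipy.signal import hilbert

    fs = 48000
    t = np.arange(fs) / fs
    am = (1 + 0.5 * np.sin(2 * np.pi * 5 * t)) * np.cos(2 * np.pi * 1000 * t)

    # The magnitude of the analytic signal tracks the AM envelope.
    envelope = np.abs(hilbert(am))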
scipy IIR design: introduction and low-pass, in Python. I am trying to write a Python script that can demodulate an FSK-modulated audio file and return the data encoded in the audio; this is due to the advantage a PC provides compared to an embedded system, in feedback and ease of testing. This model is an 8-ary modulator/demodulator based on pulse-amplitude modulation (PAM). This signal will also have an FM component. The modulation index can be defined as the measure of the extent of amplitude variation about an unmodulated carrier. (Figure caption, top to bottom: no modulation, baseline wander (BW), amplitude modulation (AM), and frequency modulation (FM).) Resources listed under the D-STAR category belong to the Operating Modes main collection, and are reviewed and rated by amateur radio operators. In the simplest example above, a 1 kHz square-wave carrier is amplitude-modulated by a triangle wave at almost 75% modulation depth; the modulation frequency in this case is around 4 Hz. Then techniques used to demodulate amplitude modulation (AM) can be applied. Leif Svalgaard has revised his reconstruction of sunspot observations over the past 400 years, 1611 to 2013. WFM, used in FM broadcasting, is the wideband version of FM (frequency modulation), while the latter is used in point-to-point communications. Amplitude modulation (AM), frequency modulation (FM), phase modulation (PM). Pilot/signature-pattern-based modulation tracking, 6.1 Transmitter and receiver: each modulated signal is preceded by a unique N-bit pilot sequence (Manton, J.H., 2001).
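The source does not include that FSK demodulator; one hedged way to sketch it is non-coherent tone-energy comparison, assuming the two tone frequencies and the bit rate are known in advance:

    import numpy as np

    def fsk_demod(x, fs, f0, f1, bit_rate):
        """Binary FSK demodulation: correlate each bit-long chunk of x
        against both tones and pick the stronger response."""
        n = int(fs // bit_rate)                 # samples per bit
        t = np.arange(n) / fs
        ref0 = np.exp(-2j * np.pi * f0 * t)     # complex reference tones
        ref1 = np.exp(-2j * np.pi * f1 * t)
        bits = []
        for k in range(len(x) // n):
            chunk = x[k * n:(k + 1) * n]
            bits.append(int(abs(np.dot(chunk, ref1)) > abs(np.dot(chunk, ref0))))
        return bits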
The command below will start listening on the ISM band at 433.92 MHz with AM modulation and a 38400 sample rate; the output will be written to ook. A practical way of understanding line-pairs is to think of them as pixels on a camera sensor, where a single line-pair corresponds to two pixels (Figure 2). Mosquitoes show an ability to avoid defensive hosts, but the mechanisms mediating these shifts in host preferences are unclear. Computational models in neuroscience typically contain many parameters that are poorly constrained by experimental data. Pulse-code modulation is the standard form of digital audio in computers, compact discs, digital telephony and other digital audio applications. Weirdly, though, if I kill the rc.local-launched Python process and restart it within an interactive shell, there is no erratic behaviour. Out of many methods of frequency generation, this tutorial covers one of the simplest and most efficient circuits. Modulation in Communication Systems for RF Engineers (RAHRF152). In quadrature amplitude modulation, two amplitude-modulated signals are combined into a single channel, thereby doubling the effective bandwidth; power, in the case of BPSK, is mainly concentrated in a bandwidth 2Rb wide. I/Q modulation allows twice the amount of information to be sent compared to basic AM. MCUs are digital. I have adopted the convention that 100% FM modulation means excursions between 0 Hz and twice the carrier frequency: for a carrier frequency of 1000 Hz, 100% modulation means swings between zero and 2000 Hz. The AM radio band (broadcast band) is legally defined from 535 kHz to 1605 kHz. The easiest way to install them all (and then some) is to download and install the wonderful Sage package. uint8_t data should be sent and received by the receiver. I am trying to use the RSA API to run some automated EVM measurements. The waveforms module provides many useful functions. I had to use a technique called pulse-width modulation (PWM) to fade LEDs. AM, FM, and FSK modulation are available (AM modulation can also be applied to the arbitrary waveforms). The very same technique can be used to shape low-frequency content, in that case matched with a high-pass filter. In Python, the modulo '%' operator works as follows: the numbers are first converted to a common type, and the result is the remainder of the division of the first argument by the second. Amplitude modulation by a carrier sine wave is by far the most common variant in use. The demod function multiplies y by a sinusoid of frequency fc and applies a fifth-order Butterworth low-pass filter using filtfilt. My question is: why do we need the low-pass filter in the AM demod (and in WBFM as well)?
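Because mixing shifts the message both down to baseband and up to twice the carrier frequency, and the low-pass filter removes that 2·fc image. A scipy sketch of this product detection, with the cutoff frequency an assumed parameter:

    import numpy as np
    from scipy.signal import butter, filtfilt

    def am_demod(y, fs, fc, cutoff=3000.0):
        """Product detection: mix with a matched carrier, then remove
        the 2*fc component with a fifth-order Butterworth low-pass."""
        t = np.arange(len(y)) / fs
        mixed = y * np.cos(2 * np.pi * fc * t)   # images at 0 and 2*fc
        b, a = butter(5, cutoff / (fs / 2))      # normalized cutoff
        return 2 * filtfilt(b, a, mixed)         # keep only the baseband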
The Arduino Uno has six pins set aside for PWM (digital pins 3, 5, 6, 9, 10, and 11). From the measured carrier and sideband values we obtain the value of the modulation index. Place the following code in an instruments.py file. The demo window is sized with root.geometry('300x350'). Finding the coefficients F_m in a Fourier sine series: writing f(t) = Σ F_m sin(mωt), multiply each side by sin(m′ωt), where m′ is another integer, and integrate over one period; by orthogonality only the m′ = m term contributes, so dropping the prime yields the coefficients for any f(t): F_m = (2/T) ∫₀ᵀ f(t) sin(mωt) dt. The N5173B EXG X-Series microwave analog signal generator offers 9 kHz to 40 GHz frequency coverage and is the cost-effective choice when you need to balance budget and performance. Chapter 8, Frequency Modulation (FM): FM was invented and commercialized after AM. This is a tutorial on how to implement pulse-width modulation (PWM) on the Raspberry Pi 2 and 3 using Python. As in the previous post, the mraa library is used for handling the GPIO.
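As a quick numerical check of that coefficient formula (my own illustration, not from the source), take a unit square wave, whose known sine coefficients are 4/(m·pi) for odd m and 0 for even m:

    import numpy as np

    T = 1.0                                       # period
    t = np.linspace(0, T, 10000, endpoint=False)
    f = np.sign(np.sin(2 * np.pi * t / T))        # unit square wave

    for m in (1, 2, 3, 5):
        Fm = 2 / T * np.trapz(f * np.sin(2 * np.pi * m * t / T), t)
        print(m, Fm)   # ~1.273, ~0, ~0.424, ~0.255, i.e. 4/(m*pi) for odd m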
On this page, we'll get to know our new friend the Fourier transform a little better. In contrast, now I am going to add two components of the random shuffle to the modulation index: an in-phase component and a quadrature component. Generate two independent waveforms with a sampling rate of 1 GS/s, up to 250 MHz, with an output voltage range of ±1 V into 50 Ω. Quadrature amplitude modulation (QAM): PAM signals occupy twice the bandwidth required for the baseband, so QAM transmits two PAM signals using carriers of the same frequency but in phase quadrature; it is a widely used method for transmitting digital data. Along with 555 IC timers, the circuit is designed around a Wien bridge oscillator circuit and a clamping circuit. Task 1: amplitude modulation. In this task, we'll transmit a sequence of message bits using amplitude modulation of a carrier frequency; your job is to write a Python function am_receive that decodes the transmission and returns a numpy array with the received bits. The results are shown in the figures below for all four different waveforms. I have read the WaveForms SDK manual and cannot figure out the functions provided. I am a data scientist with a strong interest in providing data-driven and action-oriented solutions to high-impact business problems. I understand that AM demodulation usually has two steps: rectifying the signal down to baseband and then applying a low-pass filter to get rid of unwanted high frequencies (leftovers from the modulation) and obtain a clean wave. Generation of AM in MATLAB is a piece of cake. Univariate forecasting only (single column); only monthly and daily data have been tested for suitability. Presynaptic peptidergic modulation of olfactory receptor neurons in Drosophila. The other 2/3 of the signal power is contained in the carrier, which does not contribute to the transfer of data. First I will describe basic theory around modulation and demodulation, then I will describe how to combine Cython and profiling to build a flexible and efficient Python application. csdr is a command-line tool to carry out DSP tasks for software-defined radio. This is also called a phase vector, or phasor. In the special case of β smaller than 1, only the Bessel coefficients J0 and J1 have a significant value; in this case the FM signal is formed by the carrier-frequency component plus a single pair of side frequencies.
• The carrier frequency is the frequency to which the radio receiver is tuned for station selection. The included libcsdr library contains the DSP functions that csdr makes use of. This signal will also have an FM component. Audio steganography methods. In AM, the amplitude of the carrier varies in accordance with the information signal. Frequency-shift keying (FSK) is a frequency modulation scheme in which digital information is transmitted through discrete frequency changes of a carrier wave. Could you help me modify this MATLAB code for the AM-with-carrier case? 2017-01-31: Python fine-grained OS detection of WSL, Cygwin, etc. In QPSK, modulation is symbol-based, where one symbol contains 2 bits. class QAMModem(m): creates a quadrature amplitude modulation (QAM) modem object. I am using Java with the RSA API DLL, but I don't see in the included header files where I can set up or query EVM measurements for 802.11.
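A continuous-phase sketch of binary FSK in numpy; the mark/space tones, bit rate and sample rate are assumed, Bell-202-like values chosen purely for illustration:

    import numpy as np

    fs, bit_rate = 48000, 300        # sample rate and bit rate (assumed)
    f0, f1 = 1200.0, 2200.0          # space/mark tones (assumed)
    bits = [1, 0, 1, 1, 0]

    samples_per_bit = fs // bit_rate
    freqs = np.repeat([f1 if b else f0 for b in bits], samples_per_bit)
    # Integrate the instantaneous frequency so the phase never jumps
    # at bit boundaries (continuous-phase FSK).
    phase = 2 * np.pi * np.cumsum(freqs) / fs
    fsk = np.sin(phase)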
To conclude, for a given bit rate, the bandwidth used is divided by m compared to BPSK modulation, with m being the total number of bits per symbol. If you do not specify the opt parameter, modulate uses a default of opt = pi/(max(max(x))), so the maximum phase excursion is π radians; in demod the default value for opt is 0. Exercise: create a 50 Hz carrier signal c(t) in Python; store this signal in the vector c and draw it with the plot command. However, to what extent immunological and microbial profiles are affected by pregnancy in patients with IBD remains unclear. Objective: pregnancy may affect the disease course of IBD. A "modulation-type label" just means the basic modulation scheme associated with the RF signal, such as binary phase-shift keying (BPSK), Gaussian minimum-shift keying (GMSK), or amplitude modulation (AM). In AM mode, a level of 100% simply means 100% modulation, but in FM mode there is no widely accepted meaning for "100% modulation". WARNING: this is a Safety Class 1 product (provided with a protective earth ground incorporated in the power cord). Amplitude modulation (AM) and frequency modulation (FM) in the time domain: connect the output of the HP8656B signal generator to the oscilloscope (set proper input termination, a proper scale, the trigger mode to normal and a proper trigger level). You use functions in programming to bundle a set of instructions that you want to use repeatedly or that, because of their complexity, are better self-contained in a sub-program and called when needed. The carrier frequency is used because it makes it possible to transmit over much larger distances. I am totally new to Python, so I tried to read and learn what I could, but I cannot seem to do what I want. First, I've never been all that wild about the FM synth sound, which I associate with chimey '80s keyboards like the Yamaha DX7. Installing MySQLdb for Python 3 in Windows: my favorite Python connector for MySQL or MariaDB is MySQLdb; the problem with this connector is that it is complicated to install on Windows, so I am creating this article for those who want to do it. Like amplitude modulation, ASK is linear and sensitive to atmospheric noise, distortions, and propagation conditions on different routes in the PSTN. To separate build objects from the source tree, the package is configured from within a different directory (called release, below). Set up a Python environment: install Homebrew on macOS, install python3 with Homebrew (brew install python3), then install Python modules with pip3, which is included with python3 (pip3 install requests).
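A direct solution to that exercise (translated from the Turkish original); the sample rate and duration are assumptions:

    import numpy as np
    import matplotlib.pyplot as plt

    fs = 1000                       # samples per second (assumed)
    t = np.arange(0, 0.1, 1 / fs)   # 100 ms of time samples
    c = np.sin(2 * np.pi * 50 * t)  # 50 Hz carrier c(t)

    plt.plot(t, c)
    plt.xlabel('time (s)')
    plt.ylabel('c(t)')
    plt.show()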
• We will assume that the baseband message signal m(t) is band-limited with a cutoff frequency W which is less than the carrier frequency ωc. Amplitude modulation by a carrier sine wave is by far the most common variant in use. Audio in Python. This block implements the method for frequency demodulation presented in the video. From the nidaqmx constants documentation: AM = 14756 denotes amplitude modulation. A severe, sometimes fatal respiratory disease has been observed in captive ball pythons (Python regius) since the late 1990s. Introduction: your JR Python radio includes the following items (Section II). The reason for using FM for analogue TV sound is that you have enough available bandwidth to give you a better signal-to-noise ratio with the permitted sound power. The AM radio band ranges between 535 and 1705 kHz, which is great. The Raspberry Pi can be used for pretty much any conceivable application, including as a radio transmitter. As shown in the figure above, in amplitude modulation the strength or amplitude of the carrier wave, which is generally a sinusoid, is varied or modulated by the baseband signal before transmission. This is a convenient approach to FM demodulation, because it allows us to benefit from envelope-detector circuitry that has been developed for use with amplitude modulation; below is an example.
Modulation refers to an external signal that varies the sound of an instrument or vocal in volume, timing or pitch. Rhino + Grasshopper: a modulation tutorial shows how to use a conditional statement to vary the tile patterning on a surface according to a variable such as sun angle. If you are new to MATLAB, please go through our tutorials. Demodulators prototyped in GNU Radio Companion can be turned into plugins with very little additional code. It was designed to use the auto-vectorization available in gcc, and also has some functions optimized with inline assembly for ARM NEON. As the Python language is very common in Raspberry Pi based projects, I am using Python to write the code for controlling the angular position of the servo motor's shaft. Communication: communication is the act of transmission and reception of information.
A learner-friendly, practical and example-driven book, Digital Modulations using Python gives you a solid background in building simulation models for digital modulation systems in Python version 3. Examples of modulation: the following show AM waveforms for different modulation indices. DSP Tricks, frequency demodulation algorithms (January 10, 2011, Embedded Staff): an often-used technique for measurement of the instantaneous frequency of a complex sinusoidal signal is to compute the derivative of the signal's instantaneous phase θ(n), as shown in Figure 13-60. From the nidaqmx constants documentation: ON_BOARD_MEMORY_EMPTY = 10235 means transfer data to the device only when there is no data in the onboard memory of the device. The modulation depth was 100%, i.e. the modulation amplitude was equal to the average value. Compared quantities include modulation transfer function, photoelectron count, and signal-to-noise ratio. Abstract: this document gives an introduction to the IQ-demodulation format of the RF data stored from the Vingmed System Five. What is the difference between FM and FSK modulation? Another way to get a feel for the wide variation of the modulation index in FM is to measure the low- and high-frequency components on the display of an SM5074 or SM5075 at very low frequency. Now I am back to scientific programming at a company that would buy me a MATLAB license, but I am sticking with Python because it does everything I need and helps me maintain my Python skills. Pulse-amplitude modulation is the basic form of pulse modulation. First, install the python-vxi11 library by running pip install python-vxi11. A Practical Implementation of the Faster R-CNN Algorithm for Object Detection (Part 2, with Python code), Pulkit Sharma, November 4, 2018.
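In Python, that phase-derivative discriminator can be sketched with the analytic signal; this is an illustrative version under the assumption of a real-valued narrowband input, not the article's Figure 13-60 implementation:

    import numpy as np
    from scipy.signal import hilbert

    def instantaneous_frequency(x, fs):
        """Derivative of the unwrapped analytic-signal phase, in Hz."""
        phase = np.unwrap(np.angle(hilbert(x)))
        return np.diff(phase) * fs / (2 * np.pi)  # one sample shorter than x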
A pivot chord is a chord that both keys share in common; the smoothest way to modulate from one key to another is to use one. From radio waves to packets with software-defined radio. Pulse-code modulation, based on the sampling theorem: each analog sample is assigned a binary code; the analog samples are referred to as pulse-amplitude-modulation (PAM) samples, and the digital signal consists of blocks of n bits, where each n-bit number is the amplitude of a PCM pulse. Generally speaking, this is AM demodulation. Use of I and Q allows for processing of signals near DC or zero frequency. Functions in Python. My use cases are plan optimization and calculating 3D dose from 2D. Thus, if m(t) is the message signal and c(t) = A·cos(ωc·t), then the AM signal F(t) is written as F(t) = [A + m(t)]·cos(ωc·t). We can use the above formula to calculate the power of the AM wave when the carrier power and the modulation index are known. Innovic India Pvt. Ltd.
Python for Beginners with Hands-on Project. I also use C++ and Java, often with Python. 6.450 forms the first of a two-course sequence on digital communication. A 2019 pass-out student. The increment value phinc is also passed on to the phase program and can be used inside the pulse program. Exercise, continued: c) choose a y-range that allows you to clearly see the critical value(s); d) how many critical values are there in the interval in part c)? Use nsolve. Learn the practical implementation of Faster R-CNN algorithms for object detection with Python code. Xspresskit 473P Python 3-button remote, Python 460HP: I highly recommend it, and some members also recommend it. In this example we use the Hilbert transform to determine the amplitude envelope and instantaneous frequency of an amplitude-modulated signal. The intention of this method is to provide a consistent and repeatable measure of the modulation-depth characteristics of wind farm noise. 2017-01-28: install PMTK3 in Matlab / GNU Octave.
A servo motor is a rotary or linear actuator that allows for precise control of angular or linear position, velocity and acceleration. If you specify opt, demod subtracts the scalar opt from x. The frequency-jump procedure is a frequency-modulation technique that hops between two discrete frequencies at the inflection points on both sides of the NV ODMR resonance, which yields a signal proportional to the temperature shift over a wide temperature range. The yellow trace shows the envelope spectrum obtained with the Phase Modulation Extension; the high frequency resolution unveils small-scale standing-wave effects in the optical path. The Gremlin written is always of the same general construct, making it possible for users to move between development languages and TinkerPop-enabled graph technology easily. Vinauger et al. My name is Sahand Behnam, an entrepreneur, consultant and technologist. from scipy.signal import hilbert, chirp. Python: PLSDR is written entirely in Python, a comparatively slow interpreted language. In this article, we will take a look at some of the popular methods to embed "secret" text, images and audio inside a "public" sound file. This is great for demonstration purposes! One thing I'm curious about, however, is that it appears that the frequency of the carrier is modulated by the slope of the baseband signal, rather than by its amplitude. I wrote a simple Python snake game which is about 250 lines of code. I am again making generous use of my earlier disclaimer: this is over-simplified and skips many details! This is only to get the concept across. Yet many still consider their use simply in terms of amplitude modulation alone. Here, Ac and ωc are the parameters of the sinusoidal carrier wave, μ is called the modulation index (or AM index), and m(t) is the amplitude-modulating signal; ω is some radian-frequency oscillation rate, φ is a possible phase offset, and t is obviously time. This is also called a phasor. In AM radio broadcasts, m(t) is the audio signal being transmitted (usually bandlimited to less than 10 kHz), and ωc is the channel center frequency that one dials up on a radio receiver. This book, an essential guide for understanding the implementation aspects of a digital modulation system, shows how to simulate and model such a system.
proofpile-shard-0030-384
{ "provenance": "003.jsonl.gz:385" }
# Estimate Your Heart Rate for a Given Running Power

Oct 25, 2021

Power Tool now includes a calculator that estimates heart rate from running power. It lets you visualize the heart rate for a power maintained during 4 minutes. 4 minutes corresponds to a prolonged effort and can be extrapolated over much longer periods. For shorter times, the heart rate for a certain power is difficult to represent. Indeed, a short acceleration creates a non-immediate variation (partly also caused by the sensor), and it depends in principle on:

• The power before the acceleration
• The latency time of the organism
• The duration of the acceleration
• Its intensity

All this complicates things… The calculation is made from a model whose parameters are determined during the running sessions. The model[1] used is:

$$HR = a + b \times Power + c \times \log( Power + 1 ) + d \times Var( Power )$$

a, b, c and d are calculated in such a way as to minimize the difference with the set of measurements made during the races. Power is the power held for 4 minutes and Var(Power) is its variance. In principle, if the pace is very regular, Var(Power) can be considered to be zero.

# Usage

## Setup

The first thing to do is to feed the algorithm with data. To do this, it is not necessary to follow a specific calibration procedure; following a classic training plan is sufficient. We can consider that the analysis begins to give sufficiently precise results after a mix of two or three easy runs, two or three interval sessions and two or three fartleks. Basically, after two weeks of a workout plan the computation should be reliable. The first 10 minutes are not taken into account because, during the warm-up, the heart rate is generally not very significant. The algorithm weights the newest data, progressively discounting the oldest, which keeps the estimate in line with your current fitness.

Do not forget to think about the quality of the measurements. The wrist heart rate monitor is unreliable due to its high latency, and in my case it tends to produce spikes when not placed correctly; in that case, it is better to use a cardio belt to avoid erroneous measurements. For the power it is less critical, but I still recommend deactivating the algorithm on trail runs. During a trail run the effort is sometimes very irregular, and I have the impression that in this situation the Stryd sensor is not really delivering accurate data. Also, when walking, I'm not sure the power is as precise as when running. That said, you should not be paranoid: a few measurement errors have no impact on the final result; you just have to avoid systematic and excessive errors.

## Interface

Once the running application (with the Power Tool data field) has started, a notification with a link to the analysis pops up on the phone. The dotted curve corresponds to the situation of a race where the power is maintained in a constant manner. The other curve is the one with an average power variation (here 23 watts). In general, for a session at constant speed, we can assume the power variation is close to zero. For 30/30 intervals, we can consider that the variation in power is half of the difference between the intense phase and the recovery phase. (So for 180 W recovery and 250 W sprints we would have a variation of (250 - 180) / 2 = 35 W.)

Here is an example of the result.
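The post does not show how a, b, c and d are actually obtained. A minimal sketch of the idea with ordinary least squares (assuming NumPy; the arrays here are synthetic stand-ins, since the real session data is not part of the post):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic session data standing in for real measurements:
power = rng.uniform(150, 280, size=600)        # 4-min rolling mean power (W)
var_p = rng.uniform(0, 40, size=600)           # rolling variance of power
hr = (60 + 0.3 * power + 5 * np.log(power + 1) + 0.1 * var_p
      + rng.normal(0, 2, size=600))            # noisy "measured" heart rate

# Design matrix for HR = a + b*P + c*log(P+1) + d*Var(P):
X = np.column_stack([np.ones_like(power), power, np.log(power + 1), var_p])
(a, b, c, d), *_ = np.linalg.lstsq(X, hr, rcond=None)

def predict_hr(p, var=0.0):
    """Heart rate predicted for a power p held for about 4 minutes."""
    return a + b * p + c * np.log(p + 1) + d * var
```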
# Determination of the vVO2max

The vVO2max, velocity at maximal oxygen uptake, is nothing more than the speed that can be maintained at one's maximal oxygen consumption (VO2max). Generally speaking, oxygen consumption during a run is correlated with heart rate: the more you ventilate, the more the heart rate increases to bring oxygen to the muscles. Determining your vVO2max then consists in finding your speed at the maximum frequency at which you can run; this is, in general, the maximum heart rate. Thanks to the power/heart rate curve, it is easy to determine your maximal aerobic power, then deduce the pace for it and convert it into a speed. This speed is precisely the vVO2max.

It turns out that I had done a Conconi test 6 months ago and I got a pace of 4:30 min/km, or 13.3 km/h. My maximum heart rate is 179 bpm; using the calculator, I find that it corresponds to a power of 231 W maintained for 4 minutes, and 231 W corresponds to a track speed of 4:29 min/km. We see that the two results are consistent. The advantage is that you can determine your vVO2max without having to do a Conconi or Cooper test: just follow your training plan and, after a few sessions, read off the power at your maximum heart rate.

# Power Zones

Another possible application of knowing the heart rate for a given power is in the definition of the power zones. Stryd defines its power zones from the critical power; the limits are just fixed percentages of it.

| Zone | Range |
|------|-------|
| Easy | 65%-80% CP |
| Moderate | 80%-90% CP |
| Threshold | 90%-100% CP |
| Interval | 100%-115% CP |
| Repetition | >115% CP |

This definition of zones is purely arbitrary and I find it difficult to interpret. To be clear: if I run with friends and we start a long 4 km climb at 115% CP, what conclusion can I draw? Will I hold on without collapsing? This situation is a classic in trail running. If, instead, we replace the limit of this zone by the power at maximum heart rate maintained for at least 4 minutes, I have much more relevant information. I then know that if I'm over this limit, I won't be able to maintain the pace for more than one or two minutes, and exactly at this limit I am able to run for approximately ten minutes. Also, since the critical power is the power that can be held over one hour, we then have a second threshold, corresponding to an effort that we manage to hold for at least ten minutes. I think it simplifies analysis during the effort. We are then left with the following zones:

| Zone | Range |
|------|-------|
| Easy | 65%-80% CP |
| Moderate | 80%-90% CP |
| Effort of less than one hour | 90%-100% CP |
| Effort of less than ten minutes | 100% CP - 100% HR |
| Intense effort | >100% HR |

1. For additional information: Heart Rate Estimation From Running Power. ↩︎

Sharing is caring!
proofpile-shard-0030-385
{ "provenance": "003.jsonl.gz:386" }
# Electronic – Designing voltage summer without op-amp

Tags: math, operational-amplifier, resistance

The above circuit is from Operational Amplifier Adder. Now remove the feedback resistor and the op-amp, and take the output. From the answer to that question, I know what the end result for Vout would be, assuming a load resistance exists. But now suppose that there are N inputs, and there is some load resistance RL. Can anyone show me a simple way or equation for quantifying how the load resistance makes the sum result diverge from the ideal value? (The question comes from the fact that adding parallel resistances involves ||, which, when expanded into the final output, takes a lot of time, and I still do not know whether there exists a general formula for any N inputs.)

The output voltage is

$$\begin{split} V_{\text{OUT}} &= R_1||R_2||..||R_n||R_L \cdot \sum{(V_i/R_i)} \\ &= \biggl(R_1||R_2||..||R_n\biggr)||R_L \cdot \sum{(V_i/R_i)} \\ &= \frac {R_K R_L}{R_K+R_L}\cdot \sum{(V_i/R_i)} \end{split}$$

where \$R_K = R_1||R_2||..||R_n\$

If you consider the ideal output voltage to be the one when \$R_L = \infty \$, then

$$\frac {V_{\text{OUT}}}{V_{\text{IDEAL}}} = \frac {R_L}{R_K + R_L}$$

To look at it intuitively, \$R_K\$ is simply the Thévenin equivalent source resistance of the divider without the load resistor.
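A quick numerical check of the formula (a sketch; the resistor and source values are arbitrary examples, not from the question):

```python
def parallel(*rs):
    """Equivalent resistance of resistors in parallel."""
    return 1.0 / sum(1.0 / r for r in rs)

# Arbitrary example: three inputs and a load
R = [1000.0, 2200.0, 4700.0]   # input resistors (ohms)
V = [5.0, -3.0, 1.2]           # source voltages (volts)
RL = 10000.0                   # load resistor (ohms)

RK = parallel(*R)
s = sum(v / r for v, r in zip(V, R))

v_out = parallel(RK, RL) * s   # loaded output
v_ideal = RK * s               # output with RL -> infinity

# The loaded/ideal ratio matches RL / (RK + RL):
assert abs(v_out / v_ideal - RL / (RK + RL)) < 1e-12
print(v_out, v_ideal, RL / (RK + RL))
```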
proofpile-shard-0030-386
{ "provenance": "003.jsonl.gz:387" }
Attenuation

A beam of radiant energy going through the air will have some of that energy absorbed, or removed from the beam by making the radiation no longer exist, and some of it scattered, or removed from the beam by making the radiation go in a different direction. Together, these make up the attenuation of the radiant energy, or removal of energy from the beam.

The Beer-Lambert law

Numerically, the attenuation is represented by an attenuation length ${\displaystyle R_{0}}$, which is the distance at which the intensity of the light (or beam power, or pulse energy) has been reduced to approximately 37% of its original value. If you start off with an intensity ${\displaystyle I_{0}}$, or a power ${\displaystyle P_{0}}$, or an energy ${\displaystyle E_{0}}$, then the intensity ${\displaystyle I}$, power ${\displaystyle P}$, and energy ${\displaystyle E}$ after a given range ${\displaystyle R}$ will be

${\displaystyle {\begin{matrix}I&=&I_{0}\,\exp[-R/R_{0}]\\P&=&P_{0}\,\exp[-R/R_{0}]\\E&=&E_{0}\,\exp[-R/R_{0}].\end{matrix}}}$

This is called the Beer-Lambert law. [1] [2] [3]

If you have more than one physical phenomenon contributing to attenuation (for example, absorption and scattering, or absorption off of oxygen and absorption off of nitrogen), the inverses of the attenuation lengths of each phenomenon add together. In particular, if you have scattering with a characteristic scattering length ${\displaystyle R_{s}}$ and absorption with a characteristic absorption length ${\displaystyle R_{a}}$, then

${\displaystyle {\frac {1}{R_{0}}}={\frac {1}{R_{a}}}+{\frac {1}{R_{s}}}.}$

Absorption

The first thing to worry about with the air is that it can absorb light, and it can absorb some colors of light better than others. Clear air with an Earth-like composition is very transparent to visible light, as well as to nearby invisible colors like near and short-wave infrared, or ultraviolet A, B, and C. But some wavelengths of mid-wave, long-wave, and far infrared are absorbed, and some get through, depending on the exact wavelength. Also, any light with a shorter wavelength than ultraviolet C gets rapidly absorbed by air, hence the name "vacuum ultraviolet" for frequencies higher than UV-C, as they can only propagate in vacuum. If your laser works at a frequency that is absorbed by the air, it will not be very useful in that environment. Even if the absorption length is much longer than the distance to the target, absorption can still have significant effects on laser performance, because absorption is what drives thermal blooming.

The absorption length varies a lot depending on the wavelength of the beam, the weather, and the atmospheric conditions. The figures below show the absorption lengths for clean air at sea level without aerosols, for both dry air and for the absorption of water vapor alone at 1% concentration by volume in that air[4] (corresponding to approximately 60% relative humidity at 15 °C; in humid tropical climates, the water vapor concentration may be as high as 5% by volume, while in dry or arctic conditions it may be less than 0.1% by volume). Divide the water vapor attenuation length by the percent concentration by volume of water vapor, and find the total absorption length by taking the inverse of the sum of the inverse absorption lengths for dry air and the water vapor, as described above. Non-Earthlike atmospheres may have very different clean-atmosphere absorption.
[Figure: Absorption length of clean sea-level air in the visible and near-visible spectrum]
[Figure: Absorption length of clean sea-level air in the near and short-wave infrared]
[Figure: Absorption length of clean sea-level air in the mid-wave infrared]
[Figure: Absorption length of clean sea-level air in the long-wave infrared]

Scatter

[Image: Laser beam made visible by atmospheric scattering.]

The next thing to worry about is scatter. Rather than simply making the light in your beam go away, scatter is what makes the light go in a different direction. Particulates or aerosols in the air are good at scattering light. This is why we can't see through fog or clouds. But even perfectly clean air will scatter light to some extent. The electric field in the light will make the electrons in air molecules slosh back and forth at the light's frequency. And these electrons then act like antennas to radiate light away in different directions (while simultaneously taking that energy away from the beam). This is called Rayleigh scattering, and it is more effective for higher frequency light than lower frequency light. This is why, for example, the sky looks blue: more high frequency blue light from the sun is being scattered into our eyes than lower frequency light of other colors. The Rayleigh scattering length for clean sea level air is [5]

${\displaystyle R_{Rayleigh}=948\,{\mbox{km}}\left[{\frac {\lambda }{1\,\mu {\mbox{m}}}}\right]^{4}}$

for wavelength ${\displaystyle \lambda }$.

Note that scatter makes it so you can see the laser beam. Even a 1 watt blue laser shows up clearly in clean air from its Rayleigh scattering. A very powerful visible light laser like you would use for a weapon will give an obvious trace through the air.

Estimating attenuation due to aerosols

The attenuation length of aerosols such as clouds, fog, smoke, smog, dust, lint, or pollen can be roughly estimated by figuring out how far away you can see before the things you are looking at appear somewhat hazy or washed out. Usually, this attenuation will mostly be scatter, but smoke, dust, pollen, or lint may introduce a substantial amount of absorption as well.

Different atmospheres

The attenuation of atmospheres of greatly different composition than the Earth – especially in the infrared – goes well beyond the scope of this document. You can make a quick estimate, however, by dividing the attenuation length without aerosols by the planet's atmospheric pressure relative to Earth and then figuring the effect of aerosols separately. This method is more accurate the closer the planet's atmospheric composition is to Earth. Unless the atmosphere contains chlorine, nitrogen dioxide, far more ozone than Earth's air, or approaching an atmospheric pressure or more of methane, the air will be nearly transparent to visible light; although the scattering may vary by a factor of two or so depending on the polarizabilities of the air molecules. This can also be used to estimate the attenuation of air on Earth at different altitudes.
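Pulling the formulas above together, here is a small sketch (the absorption and aerosol lengths are illustrative numbers only; in practice they would be read off the absorption figures and the visibility estimate):

```python
import math

def rayleigh_length_km(wavelength_um):
    """Rayleigh scattering length for clean sea-level air (formula above)."""
    return 948.0 * wavelength_um ** 4

def combined_length(*lengths):
    """Inverse attenuation lengths add: 1/R0 = sum(1/R_i)."""
    return 1.0 / sum(1.0 / L for L in lengths)

def transmitted_fraction(range_km, r0_km):
    """Beer-Lambert law: fraction of beam power surviving the range."""
    return math.exp(-range_km / r0_km)

# Illustrative example: a 0.53 um (green) laser with an assumed 50 km
# absorption length, hazy air with a 20 km aerosol scattering length,
# and a target 10 km away.
r0 = combined_length(50.0, rayleigh_length_km(0.53), 20.0)
print(transmitted_fraction(10.0, r0))   # roughly 0.44 survives
```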
Shooting through the atmosphere

If you are in orbit around a planet and trying to shoot at ground targets, or if you are on the ground trying to shoot at spacecraft in orbit, it helps to be able to estimate the effective thickness of the air. To go from Earth to space or vice versa, you need to shoot through an effective distance of 8.5 km of air at the density found at the shooter's or target's altitude. This distance is called the scale height. For other planets, you can use

${\displaystyle {\mbox{scale height}}=8.5\,{\mbox{km}}\,{\frac {T}{288.15\,{\mbox{K}}}}\,{\frac {9.80\,{\mbox{m}}/{\mbox{s}}^{2}}{g}}\,{\frac {28.96\,{\mbox{g}}/{\mbox{mole}}}{W}}}$

where ${\displaystyle T}$ is the average temperature, ${\displaystyle g}$ is the acceleration due to gravity, and ${\displaystyle W}$ is the average molecular weight of the gases in the air.

If you are shooting through the atmosphere at an angle, the distance of all that atmospheric junk you need to go through can be approximated as the scale height divided by the cosine of the angle you are shooting at, measured from the vertical. The amount of air you have to get through along your beam path is called the air mass, and if you follow the link you will find more accurate methods of approximating it if you need them.

Credit

Author: Luke Campbell

References

1. Bouguer, Pierre (1729). Essai d'optique sur la gradation de la lumière [Optics essay on the attenuation of light] (in French). Paris, France: Claude Jombert. pp. 16–22. https://archive.org/details/UFIE003101_TO0324_PNI-2703_000000
2. Lambert, J.H. (1760). Photometria sive de mensura et gradibus luminis, colorum et umbrae [Photometry, or, On the measure and gradations of light intensity, colors, and shade] (in Latin). Augsburg, (Germany): Eberhardt Klett. https://archive.org/details/TO0E039861_TO0324_PNI-2733_000000
3. Beer (1852). "Bestimmung der Absorption des rothen Lichts in farbigen Flüssigkeiten" [Determination of the absorption of red light in colored liquids]. Annalen der Physik und Chemie (in German). 162 (5): 78–88. Bibcode:1852AnP...162...78B. doi:10.1002/andp.18521620505. https://books.google.com/books?id=PNmXAAAAIAAJ&pg=PA78
4. I.E. Gordon, L.S. Rothman, C. Hill et al., "The HITRAN2016 Molecular Spectroscopic Database", Journal of Quantitative Spectroscopy and Radiative Transfer 203, 3-69 (2017). http://dx.doi.org/10.1016/j.jqsrt.2017.06.038
5. Calculated using atomic and molecular polarizabilities taken from David R. Lide, "CRC Handbook of Chemistry and Physics: 71st Edition 1990-1991", CRC Press (1990).
proofpile-shard-0030-387
{ "provenance": "003.jsonl.gz:388" }
### X-O__O-X's blog

By X-O__O-X, history, 3 weeks ago,

This problem is the type of problem where I know everything needed but I still can't solve it. I always learn a lot from such problems. I am not interested in the solution itself (I know it), but in how to approach the problem and reach a solution. How should I think about it? What should have hinted to me to consider using what was needed to solve it?

• +1

» 3 weeks ago, # |   +16

The name of the problem says it all: 234G - Тренировки

• » » 3 weeks ago, # ^ | ← Rev. 2 →   +1

How to practice? Are all ways of practicing equally good? Will any kind of practice pay off? When I try to think about a problem which I didn't know how to solve, to locate my gaps, isn't this practice? I just hate it when someone answers "Practice". Wow, thanks, genius. This is the most stupid answer to a question I have ever read.

• » » » 3 weeks ago, # ^ |   0

» 3 weeks ago, # |   0

My Thought process

Spoiler: Okay so, basically we need two groups such that the matching between them is maximum possible.

Spoiler: It makes sense to make the sizes of the groups as equal as possible so that the maximum number of edges may exist between them.

Spoiler: Okay, so if we do that, every player of one group will already have played against all the players of the other group and now should play against the players of his own group. Hence the problem is divided into $\left \lceil{\dfrac{n}{2}}\right \rceil$ and $\left \lfloor{\dfrac{n}{2}}\right \rfloor$ individual problems. Divide and Conquer. Now I just need to merge. Since the matches are held in parallel, I guess the number of matches will be the max of left and right, + 1.

Spoiler: Oh come on, I also need to print now. Okay, I guess matches are held at each level of recursion, hence I can store them level-wise in a vector.

Spoiler: The maximum depth should be $\left\lceil{\log_{2}(n)}\right\rceil$. But let's just create a vector of size 1000.

• » » 3 weeks ago, # ^ |   +12

• » » » 3 weeks ago, # ^ |   0

Plagiarism Inspiration

• » » 3 weeks ago, # ^ |   0

Can you implement this solution?

• » » » 3 weeks ago, # ^ |   0
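The spoilers above sketch a divide-and-conquer scheduler without showing code. A minimal Python rendering of that idea (an illustration of the commenter's approach, not a verified accepted solution; it is valid but not necessarily round-optimal):

```python
def schedule(players):
    """Divide-and-conquer round robin: returns a list of rounds,
    each round being a list of disjoint pairs."""
    n = len(players)
    if n < 2:
        return []
    a = (n + 1) // 2
    left, right = players[:a], players[a:]

    # Recurse: the two halves can play their internal rounds in parallel.
    lr, rr = schedule(left), schedule(right)
    rounds = [l + r for l, r in zip(lr, rr)]
    rounds += lr[len(rr):] + rr[len(lr):]

    # Cross rounds: rotate the larger half so every cross pair meets once.
    for shift in range(len(left)):
        rounds.append([(left[(i + shift) % len(left)], right[i])
                       for i in range(len(right))])
    return rounds

for rnd in schedule(list(range(1, 5))):   # 4 players -> 3 rounds
    print(rnd)
```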
proofpile-shard-0030-388
{ "provenance": "003.jsonl.gz:389" }
# Modulo-N code

Modulo-N code is a lossy compression algorithm used to compress correlated data sources using modular arithmetic.

## Compression

When applied to two nodes in a network whose data are in close range of each other, the Modulo-N code requires one node (say the odd one) to send its value as raw data, $M_o = D_o$; the even node is required to send its value coded as $M_e = D_e \bmod N$. Hence the name Modulo-N code.

It is known that at least $\log_2(K)$ bits are required to represent a number $K$ in binary, so the modulo-coded data of the two nodes requires $\log_2(M_o) + \log_2(M_e)$ bits in total. We can generally expect $\log_2(M_e) \le \log_2(M_o)$, because $M_e \le N$. This is how compression is achieved. The compression ratio achieved is

$$C.R. = \frac{\log_2(M_o) + \log_2(M_e)}{2\log_2(M_o)}.$$

## Decompression

At the receiver, joint decoding completes the process of extracting the data and rebuilding the original values. The code from the even node is reconstructed under the assumption that it must be close to the data from the odd node. The decoding algorithm therefore retrieves the even node's data as $\mathrm{CLOSEST}(M_o,\ N \cdot k + M_e)$: the decoder finds the value of the form $N \cdot k + M_e$ closest to $M_o$ and declares it as the decoded value.

## Example

For a mod-8 code, we have:

Encoder: $D_o = 43,\ D_e = 47$; $M_o = 43,\ M_e = 47 \bmod 8 = 7$.

Decoder: $M_o = 43,\ M_e = 7$; $D_o = 43,\ D_e = \mathrm{CLOSEST}(43,\ 8k + 7)$; $43 \simeq 8 \cdot 5 + 7$, so $D_e = 47$.

Modulo-N decoding is similar to phase unwrapping and has the same limitation: if the difference from one node to the next is more than $N/2$ (if the phase changes from one sample to the next by more than $\pi$), then decoding leads to an incorrect value.
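A minimal sketch of the scheme in Python (the function names are illustrative; note that the article's own example happens to sit exactly at the ambiguous N/2 boundary discussed in the last paragraph):

```python
import math

def encode_pair(d_odd, d_even, n):
    """Odd node sends raw data; even node sends its value mod n."""
    return d_odd, d_even % n

def decode_even(m_odd, m_even, n):
    """Pick the value congruent to m_even (mod n) that is closest to m_odd."""
    k = math.floor((m_odd - m_even) / n + 0.5)   # ties break upward
    return n * k + m_even

m_o, m_e = encode_pair(43, 47, 8)    # -> (43, 7)
# 43 is exactly n/2 = 4 away from both 39 and 47; breaking the tie
# upward reproduces the article's example.
print(decode_even(m_o, m_e, 8))      # -> 47
```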
proofpile-shard-0030-389
{ "provenance": "003.jsonl.gz:390" }
Today I got another complaint, in a row of complaints from my Jabber contacts, arguing that they can't send me messages although my account seems to be online in their buddy lists. That happens when I put my notebook to sleep; this time I was informed by Micha. Here are 3 steps to patch this problem, dealing with gajim-remote, the PowerManagement-Utils and DBus.

This annoying event happens when I am online with my notebook and close the lid so the notebook goes to sleep. Unfortunately my Jabber client Gajim doesn't notice that I'm about to disconnect, and so the Jabber server isn't informed about my absence. To cope with connection instabilities, the server waits some time of inactivity until it recognizes that there really is no client anymore, before it tells all my friends I'm gone. During this time I appear online, but messages are not able to reach my client, so they are lost in hell. That sucks, I know, and now I've reacted.

First of all I checked how to tell Gajim to disconnect via the command line and found the tool gajim-remote; it comes with Gajim itself. The example that matters here is `gajim-remote change_status offline`, which tells the running Gajim instance to disconnect.

Ok, so far; the next task is to understand what is done when the lid is closed. The task of suspending or hibernating is, at least in my case, done by pm-utils (PowerManagement-Utils). It comes with some tools like pm-suspend or pm-hibernate and so on. To tell these tools to do something before respectively after suspending, there is a directory /etc/pm/sleep.d. Here you can leave scripts that look like those in /etc/init.d/*. A smart example is now located in /etc/pm/sleep.d/01users on my notebook; you can use the sketch at the end of this post as a skeleton. Make it executable and give it a try. It checks, for each logged-in user, whether there is a .suspend or .awake in their $HOME, to execute it before suspending respectively after resuming.

The next step is telling Gajim to change its status. Unfortunately the gajim-remote script speaks to the running Gajim instance via DBus. You may have heard about DBus; there are two main kinds of DBus buses: the system and the session bus. To speak to Gajim you use the session DBus and need the bus address. That is a problem: this address is acquired during your X login, and you don't know it from a remote session, or when the system executes scripts while suspending. So if you just try to execute `gajim-remote change_status offline` in your .suspend, you'll get an error like "D-Bus is not present on this machine or python module is missing" or "Failed to open connection to "session" message bus". Your DBus session address within an X session is set in your environment in $DBUS_SESSION_BUS_ADDRESS (echo $DBUS_SESSION_BUS_ADDRESS). So what are your options to get this address for your .suspend script?

• You can export your env to a file when you log in (maybe automatically via .xinitrc) and parse it
• All addresses are saved in $HOME/.dbus/session-bus/, so try to find the right one…
• Get it from a process environment

The last possibility is of course the nicest one. So check whether Gajim is running and extract the DBUS_SESSION_BUS_ADDRESS from /proc/GAJIM_PID/environ! A sketch of how it can be done also appears at the end of this post.

That's it, great work! Save this file in $HOME/.suspend and make it executable. You can also write a similar script for $HOME/.awake to reconnect to your Jabber server, but you may not want to reconnect each time you open the lid…

So the next time I close my laptop's lid, Gajim disconnects immediately! No annoyed friends anymore :P
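The two scripts referenced in the post were embedded in the original page and did not survive here; what follows is a reconstruction of the described logic as Python sketches (the originals were presumably shell scripts; the file paths and the gajim-remote command come from the post, everything else is an assumption):

```python
#!/usr/bin/env python
# Sketch of /etc/pm/sleep.d/01users: run each logged-in user's
# ~/.suspend before suspending and ~/.awake after resuming.
import pwd
import subprocess
import sys

def run_user_hooks(script_name):
    users = {line.split()[0] for line in
             subprocess.check_output(["who"]).decode().splitlines()}
    for user in users:
        hook = "%s/%s" % (pwd.getpwnam(user).pw_dir, script_name)
        subprocess.call(["su", user, "-c", hook])

action = sys.argv[1] if len(sys.argv) > 1 else ""
if action in ("suspend", "hibernate"):
    run_user_hooks(".suspend")
elif action in ("resume", "thaw"):
    run_user_hooks(".awake")
```

And a sketch of $HOME/.suspend, which steals the session bus address from the running Gajim process:

```python
#!/usr/bin/env python
# Sketch of ~/.suspend: find a running gajim, read its
# DBUS_SESSION_BUS_ADDRESS from /proc/<pid>/environ, then disconnect it.
import os
import subprocess

pids = subprocess.run(["pgrep", "-u", os.environ.get("USER", ""), "gajim"],
                      capture_output=True, text=True).stdout.split()
if pids:
    with open("/proc/%s/environ" % pids[0], "rb") as f:
        env = dict(item.split(b"=", 1)
                   for item in f.read().split(b"\0") if b"=" in item)
    addr = env.get(b"DBUS_SESSION_BUS_ADDRESS")
    if addr:
        os.environ["DBUS_SESSION_BUS_ADDRESS"] = addr.decode()
        subprocess.call(["gajim-remote", "change_status", "offline"])
```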
proofpile-shard-0030-390
{ "provenance": "003.jsonl.gz:391" }
# Kripke semantics for modal propositional logic

A Kripke model for a modal propositional logic PL${}_{M}$ is a triple $M:=(W,R,V)$, where

1. $W$ is a set, whose elements are called possible worlds,
2. $R$ is a binary relation on $W$,
3. $V$ is a function that takes each wff (well-formed formula) $A$ in PL${}_{M}$ to a subset $V(A)$ of $W$, such that
 • $V(\perp)=\varnothing$,
 • $V(A\to B)=V(A)^{c}\cup V(B)$,
 • $V(\square A)=V(A)^{\square}$, where $S^{\square}:=\{s\mid\;\uparrow\!\!s\subseteq S\}$, and $\uparrow\!\!s:=\{t\mid sRt\}$.

For derived connectives, we also have $V(A\land B)=V(A)\cap V(B)$, $V(A\lor B)=V(A)\cup V(B)$, $V(\neg A)=V(A)^{c}$, the complement of $V(A)$, and $V(\diamond A)=V(A)^{\diamond}:=V(A)^{c\square c}$.

One can also define a satisfaction relation $\models$ between $W$ and the set $L$ of wffs so that

$\models_{w}A\qquad\mbox{iff}\qquad w\in V(A)$

for any $w\in W$ and $A\in L$. It's easy to see that

• $\not\models_{w}\perp$ for any $w\in W$,
• $\models_{w}A\to B$ iff $\models_{w}A$ implies $\models_{w}B$,
• $\models_{w}A\land B$ iff $\models_{w}A$ and $\models_{w}B$,
• $\models_{w}A\lor B$ iff $\models_{w}A$ or $\models_{w}B$,
• $\models_{w}\neg A$ iff $\not\models_{w}A$,
• $\models_{w}\square A$ iff for all $u$ such that $wRu$, we have $\models_{u}A$,
• $\models_{w}\diamond A$ iff there is a $u$ such that $wRu$ and $\models_{u}A$.

When $\models_{w}A$, we say that $A$ is true at world $w$.

The pair $\mathcal{F}:=(W,R)$ in a Kripke model $M:=(W,R,V)$ is also called a (Kripke) frame, and $M$ is said to be a model based on the frame $\mathcal{F}$. The validity of a wff $A$ at different levels (in a model, a frame, a collection of frames) is defined in the parent entry (http://planetmath.org/KripkeSemantics). For example, any tautology is valid in any model.

Now, to prove that any tautology is valid: by the completeness of PL${}_{c}$, every tautology is a theorem, which is in turn the result of a deduction from axioms using modus ponens. First, modus ponens preserves validity: suppose $\models_{w}A$ and $\models_{w}A\to B$. Since $\models_{w}A\to B$ means that $\models_{w}A$ implies $\models_{w}B$, and $\models_{w}A$ by assumption, we have $\models_{w}B$. Since $w$ is arbitrary, the result follows. Next, we show that each axiom of PL${}_{c}$ is valid:

• $A\to(B\to A)$: If $\models_{w}A$ and $\models_{w}B$, then $\models_{w}A$, so $\models_{w}B\to A$.
• $(A\to(B\to C))\to((A\to B)\to(A\to C))$: Suppose $\models_{w}A\to(B\to C)$, $\models_{w}A\to B$, and $\models_{w}A$. Then $\models_{w}B\to C$ and $\models_{w}B$, and therefore $\models_{w}C$.
• $(\neg A\to\neg B)\to(B\to A)$: we use a different approach to show this:

$$\begin{aligned} V((\neg A\to\neg B)\to(B\to A)) &= V(\neg A\to\neg B)^{c}\cup V(B\to A)\\ &= (V(\neg A)\cap V(\neg B)^{c})\cup V(B)^{c}\cup V(A)\\ &= (V(A)^{c}\cap V(B))\cup V(B)^{c}\cup V(A)\\ &= (V(A)^{c}\cup V(B)^{c})\cup V(A)=W. \end{aligned}$$

In addition, the rule of necessitation preserves validity as well: suppose $\models_{w}A$ for all $w$; then certainly $\models_{u}A$ for all $u$ such that $wRu$, and therefore $\models_{w}\square A$.

There are also valid formulas that are not tautologies. Here's one:

$\square(A\to B)\to(\square A\to\square B)$

###### Proof.

Let $w$ be any world in $M$. Suppose $\models_{w}\square(A\to B)$.
Then for all $u$ such that $wRu$, $\models_{u}A\to B$; in other words, $\models_{u}A$ implies $\models_{u}B$ for every such $u$. Hence, if $\models_{u}A$ for all $u$ such that $wRu$, then $\models_{u}B$ for all $u$ such that $wRu$; that is, $\models_{w}\square A$ implies $\models_{w}\square B$, i.e. $\models_{w}(\square A\to\square B)$. Therefore, $\models_{w}\square(A\to B)\to(\square A\to\square B)$. ∎

From this, we see that Kripke semantics is appropriate only for normal modal logics. Below are some examples of Kripke frames and their corresponding validating logics:

1. $A\to\square A$ is valid in a frame $(W,R)$ iff $R$ is weak identity: $wRu$ implies $w=u$.

###### Proof.

Let $(W,R)$ be a frame validating $A\to\square A$, and $M$ a model based on $(W,R)$ with $V(p)=\{w\}$. Then $\models_{w}p$, and since $A\to\square A$ is valid (taking $A=p$), $\models_{w}\square p$, so $\models_{u}p$ for all $u$ such that $wRu$. But then $u\in V(p)$, i.e. $u=w$. Hence $R$ satisfies: if $wRu$, then $w=u$.

Conversely, suppose $(W,R)$ is weak identity, $M$ is based on $(W,R)$, and $w$ is a world in $M$ with $\models_{w}A$. If $wRu$, then $u=w$, which means $\models_{u}A$ for all $u$ such that $wRu$. In other words, $\models_{w}\square A$, and therefore $\models_{w}A\to\square A$. ∎

2. $\square A$ is valid in a frame $(W,R)$ iff $R=\varnothing$.

###### Proof.

First, suppose $\square A$ is valid in $(W,R)$, and $M$ is a model based on $(W,R)$ with $V(p)=\varnothing$. Since $\models_{w}\square p$, we get $\models_{u}p$, i.e. $u\in V(p)=\varnothing$, for any $u$ such that $wRu$; hence no such $u$ exists. Since $w$ is arbitrary, $R=\varnothing$.

Conversely, given a model $M$ based on $(W,\varnothing)$ and a world $w$ in $M$, it is vacuously true that $\models_{u}A$ for any $u$ such that $wRu$, since no such $u$ exists. Therefore $\models_{w}\square A$. ∎

A logic is said to be sound if every theorem is valid, and complete if every valid wff is a theorem. Furthermore, a logic is said to have the finite model property if every wff valid in the class of finite frames is a theorem.
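To make the semantics concrete, here is a small sketch of a model checker for the satisfaction clauses above (an illustration, not part of the original entry; formulas are encoded as nested tuples):

```python
# Formulas: "bot", a proposition name (string), or tuples
# ("->", A, B) and ("box", A); other connectives are derived.
def sat(w, A, W, R, V):
    """Decide w |= A in the model (W, R, V)."""
    if A == "bot":
        return False
    if isinstance(A, str):                 # propositional variable
        return w in V[A]
    op = A[0]
    if op == "->":
        return (not sat(w, A[1], W, R, V)) or sat(w, A[2], W, R, V)
    if op == "box":
        return all(sat(u, A[1], W, R, V) for u in W if (w, u) in R)
    raise ValueError("unknown connective: %r" % (op,))

# Example: box(p -> p) holds at every world of a tiny frame.
W = {0, 1}
R = {(0, 1)}
V = {"p": {1}}
print(all(sat(w, ("box", ("->", "p", "p")), W, R, V) for w in W))  # True
```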
proofpile-shard-0030-391
{ "provenance": "003.jsonl.gz:392" }
# SUB2r

### Site Tools

isp:white_balance

# Differences

This shows you the differences between two versions of the page.

isp:white_balance [2019/05/09 00:54] Igor Yefmov [Practical example]
isp:white_balance [2019/05/09 00:54] Igor Yefmov [Color correction for white temperature]

Revision history:
2019/05/09 00:55 Igor Yefmov [What is white color temperature]
2019/05/09 00:54 Igor Yefmov [Color correction for white temperature]
2019/05/09 00:54 Igor Yefmov [Practical example]
2019/05/09 00:53 Igor Yefmov [Color correction for white temperature]
2018/06/04 09:10 Igor Yefmov created

Line 10:

For our purposes, we are using individual color channel **gains** to compensate for a given temperature. Lower temperature "white light" needs a lot of blue added to it and very little red and as the temperature climbs up, the amount of added red grows while the added blue goes down.

===== Color correction for white temperature =====

- For the calibration purposes we have acquired a Philips "Hue White and Color Ambiance A19 LED Starter Kit" that allowed us to test various illumination scenarios for a range of color temperatures. For a given white color temperature setting we have dialed the red and blue gains to make the scene "white" (leaving the green gain at its constant value of ''1024'').
+ For the calibration purposes we have acquired a Philips "Hue White and Color Ambiance A19 LED Starter Kit" that allowed us to test various illumination scenarios for a range of color temperatures. For a given white color temperature setting we have dialed the red and blue gains to make the scene "white" (leaving the green gain at its constant value of $1024$).

Corrections to the red channel were way more noticeable than those to the blue one so we approximated the blue gains' graph with a single line, described by the formula $B = 4205.4 - T*0.4087$.
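The linear fit in the diff is straightforward to apply; a tiny sketch (only the blue-channel formula and the constant green gain come from the page, the example temperature is an assumption):

```python
def blue_gain(temp_k):
    """Blue channel gain for white color temperature T, per B = 4205.4 - 0.4087*T."""
    return 4205.4 - 0.4087 * temp_k

GREEN_GAIN = 1024  # held constant during calibration, per the page

print(blue_gain(6500.0))  # about 1548.9 at a 6500 K setting
```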
proofpile-shard-0030-392
{ "provenance": "003.jsonl.gz:393" }
## Motivation and Background

There is a difference between a connector and a resource in midPoint. A connector is a piece of code that is used to connect to a particular class of systems (e.g. "any LDAP server"). A resource is a specific system (e.g. "OpenLDAP server ldap.example.com"). Obviously, every resource has to tell which connector it is supposed to use to reach that particular system. As both resources and connectors are midPoint objects, this is quite easy: a resource contains an object reference to the connector, `connectorRef`. Therefore, when midPoint needs to reach a particular resource, it will fetch the resource object from the repository, look at the `connectorRef` reference, and look up the referred connector object in the repository; that connector object points to the connector code.

The trouble is that connector objects (ConnectorType) are not entirely ordinary objects. These objects refer to the connector code that resides in midPoint servers or connector servers. Connector objects are usually generated automatically by midPoint servers during server startup. At that time, midPoint looks for all the connectors that are locally available to that server. This essentially means looking for JAR files that contain connector code. When a connector is discovered, the server creates a new connector object (ConnectorType). And this automatic generation of connector objects is kind of a trouble, because those objects will have unpredictable OIDs. Therefore, resource definitions (ResourceType) cannot have fixed OIDs in their `connectorRef`s. Fortunately, midPoint has a mechanism to resolve a reference dynamically when an object is imported. Therefore, the usual practice is that the `connectorRef` does not contain a fixed OID; it contains a search filter instead. The search filter is executed when the resource definition is imported into midPoint. The filter looks up the appropriate connector object and fills in the OID.

## Bundled and Unbundled Connectors

The identity connectors are quite independent of the midPoint releases. Some connectors are bundled with midPoint, which means that they are part of the midPoint code. Those are the connectors that are almost always used in any midPoint deployment, such as LDAP, Active Directory or CSV connectors. Most of the connectors are unbundled: they are not distributed with midPoint. These connectors must be downloaded separately and installed in the midPoint home directory or in a connector server.

The connectors are not bundled with midPoint for many reasons. One of the reasons is that many connectors are rarely used or even outright exotic; bundling them all would make the midPoint distribution too big. There are connectors that require third-party libraries to work, and we simply cannot bundle them at all. But unbundling also has a positive side: connectors may have their own lifecycle. You can upgrade the connectors without upgrading midPoint itself, so you can use upgraded connectors that provide required features or fix outstanding bugs.

When a connector is upgraded, a new connector object (ConnectorType) is created for the new connector version. This is quite a natural thing to do. The ConnId framework can support several versions of the same connector running at the same time, and this is a very nice feature, as it allows gradual upgrades. E.g. this is especially nice for deployments that have 100 resources with the same connector: we definitely do not want to change the connector version for all of them at once.
The new connector version may have some changes that can break existing resources, so it makes sense to test the new connector version only on a couple of non-critical resources, and roll out the upgrade only once the new connector is tested. This method works well with unbundled connectors: just install the new version of the connector in addition to the old one, test the new version, change the connector references on all resources, and then remove the old connector version.

However, it is somewhat different for bundled connectors, especially if midPoint itself is upgraded. In that case the old connector version suddenly disappears and there is a new connector version as a replacement. However, all the resources are still pointing to the old connector version, which is no longer there. Filters in dynamic `connectorRef` references are not evaluated again, as the resources were not re-imported. Nothing has really changed as far as midPoint itself is concerned. However, the references suddenly point to nowhere and the resources do not work. There are several strategies for dealing with bundled connector upgrades:

1. Be proactive and explicitly deploy the old connector version to the midPoint home directory as part of the upgrade process. The upgraded midPoint then detects both the old and the new connector versions, and the resources will work. Then there is sufficient time to gradually upgrade the resources.
2. Fix the references before any damage happens. Suspend all the synchronization and propagation tasks before the upgrade. Stop all user access to midPoint (e.g. by disabling the load balancer). Upgrade midPoint and start the server. The resources will not work at this moment, but there is nothing that would attempt to use the resources right now. Use that moment to update the `connectorRef` references. Then resume the tasks and let the users in.
3. Use the power of the platform subscription to let us implement some smarter strategy. There are many ways midPoint could be improved: e.g. we could support runtime or "upgrade-time" dynamic reference resolution, implement smart references that point to the "latest connector version", or have some kind of post-upgrade process to handle the references. There are many possibilities.

Note that midPoint will not delete the old connector object even if the connector code disappears and midPoint can no longer detect the connector. This may sound strange, but it is in fact an important safety feature. The connector code may have disappeared by mistake, e.g. the midPoint home directory was restored from an earlier backup than the database backup. In that case there are connector objects, but the connector code is missing, and we do not want to delete the connector objects. Remember, connector object OIDs are generated: if we deleted the objects, it would be very difficult to restore the connector references, especially if many custom connectors are used. There are too many ways in which such mistakes and various corner cases can happen, and the impact is usually quite bad. Therefore we have decided not to delete connector objects automatically. It is no big trouble to do it manually anyway; a connector is not upgraded every day.
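Strategies 1 and 2 both end with updating the `connectorRef` OIDs by hand. As a rough sketch of how such a bulk update could be scripted against exported resource definitions (this is not an official midPoint tool; the OIDs and the export directory are placeholders you would substitute yourself after looking them up in your repository):

```python
import pathlib
import re

# Hypothetical OIDs; look these up in your own repository first.
OLD_OID = "00000000-0000-0000-0000-00000000aaaa"
NEW_OID = "00000000-0000-0000-0000-00000000bbbb"

for path in pathlib.Path("exported-resources").glob("*.xml"):
    text = path.read_text()
    # Only touch the oid attribute of connectorRef elements.
    fixed = re.sub(r'(<connectorRef\b[^>]*\boid=")%s(")' % re.escape(OLD_OID),
                   r"\g<1>%s\g<2>" % NEW_OID, text)
    if fixed != text:
        path.write_text(fixed)
        print("updated", path)
```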
#### Backup First

Before doing anything, back up your resource in Configuration - Repository Objects - Resource: click on the resource and copy/paste the data to a file. Then write down your existing connector version. Go to Configuration - Repository Objects - Connectors, or click Resources and check your resource details.

For this page, assume that you have the "ICF org.identityconnectors.ldap.LdapConnector v1.1.0.0-e1" LDAP connector installed.

## Future

MidPoint can do all kinds of smart things with resources and connectors, but most of the configuration needs to be done with XML/JSON/YAML files. The user interface support for connector and resource configuration is somewhat limited, and this seems to be perfectly acceptable for many midPoint users. However, there is always a possibility to extend the midPoint user interface; in particular, a user interface to manage the connectors and assist with connector upgrades would be useful. Evolveum offers subscription programs that can be used to fill in missing midPoint functionality.
proofpile-shard-0030-393
{ "provenance": "003.jsonl.gz:394" }
## anonymous 3 years ago

Simplify the expression.

1. anonymous
2. anonymous: [drawing]
3. anonymous
4. anonymous
5. anonymous: This is what I simplified it down to. You may be able to factor the numerator and denominator more, but that is very complex, and probably not expected by your teacher... $\frac{ -5x^5-2x^4+5x^2+2x }{ x^6 + 2x^4-2x^3+1 }$ -Dylan
6. anonymous
7. anonymous: thanks @ajprincess
8. ajprincess: [drawing] [drawing]
9. ajprincess: Is it clear? Any doubts? @anas23
proofpile-shard-0030-394
{ "provenance": "003.jsonl.gz:395" }
# Markov chain, transition matrix and Jordan form

If I have a transition matrix

$$P= \begin{bmatrix} \frac12 & \frac14 & \frac14 \\ 0 & \frac12 & \frac12 \\ 0 & 0 & 1 \end{bmatrix}$$

I know that it's not diagonalizable, so if I want to compute $P^n$ I have to use the Jordan normal form. My teacher said that we can write $P^n=UD^nU^{-1}$ where

$$D^n= \begin{bmatrix} \frac1{2^n} & 2n\frac1{2^n} & 0 \\ 0 & \frac1{2^n} & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

I don't understand how I can compute $D^n$. After that, I know that the element $P^{(n)}_{ij}=\alpha_1\lambda_1^n+\dots+\alpha_k\lambda_k^n$ iff the eigenvalues have geometric multiplicity equal to 1, but here, for example, the teacher writes:

$$P^{(n)}_{ij}=\alpha\frac{1}{2^n}+\beta\frac{1}{2^n}2n+\gamma$$

why?

• Thinking in terms of paths of Markov chains readily yields $$P^n= \left(\begin{array}{ccc} \frac1{2^n} & \frac{n}{2^{n+1}} & 1-\frac1{2^n} -\frac{n}{2^{n+1}} \\ 0 & \frac1{2^n} & 1-\frac1{2^n} \\ 0 & 0 & 1 \end{array}\right).$$ – Did Jan 21 '16 at 15:42
• Hope you don't mind, I edited your post to add square brackets around the matrices :) – Ant Jan 22 '16 at 17:07

The result is easy to prove by induction once it has been shown to you, so let's focus on how to find these powers on your own.

The point of the Jordan Normal Form of a square matrix is clearly revealed by its geometrical interpretation. Each of its blocks, of dimensions $k\times k$, corresponds to a subspace on which the matrix acts as an endomorphism. On each such subspace it is the sum of a homothety $\lambda \mathbb{I}_k$ and a nilpotent transformation $N$. Moreover, it is so arranged that a basis $(e_1, e_2, \ldots, e_k)$ can be found in which

$$N:e_{j+1}\to e_j\tag{*}$$

for $j=1, 2, \ldots, k-1$ and $N(e_1)=0$.

Because $\lambda\mathbb{I}_k$ commutes with $N$, this makes it easy to find powers of $D = \lambda\mathbb{I}_k + N$, since

1. Repeated application of $(*)$ immediately shows that for $i \ge 1$, $N^i(e_{j+i}) = e_j$ for $j=1, 2, \ldots, k-i$ and $N^i(e_j) = 0$ for $j \le i$, and

2. The Binomial Theorem asserts $$(\lambda \mathbb{I}_k + N)^n = \sum_{i=0}^n \binom{n}{i} \lambda^{n-i} N^i.$$

(1) guarantees that $N^k = N^{k+1} = \cdots = 0$: that's what it means to be nilpotent, and it's the reason why this form is so convenient.

In the example $D$ has two blocks of dimensions $2$ and $1$. The $k=1$ block acts trivially. The $k=2$ block has $\lambda=1/2$. Its matrix in the basis $(e_1, e_2)$ therefore is

$$D_2 = \pmatrix{\frac{1}{2} & 1 \\ 0 & \frac{1}{2}} = \frac{1}{2}\pmatrix{1 & 0 \\ 0 & 1} + \pmatrix{0 & 1 \\ 0 & 0} = \frac{1}{2}\mathbb{I}_2 + N.$$

Consequently $N^2 = 0$, whence for any positive integral power $n$,

$$D_2^n =\sum_{i=0}^n \binom{n}{i} \left(\frac{1}{2}\right)^{n-i} N^i = \left(\frac{1}{2}\right)^n + \binom{n}{1} \left(\frac{1}{2}\right)^{n-1} N + 0 + 0 + \cdots + 0.$$

In terms of the basis $(e_1, e_2)$, the matrix of $D_2^n$ therefore is

$$D_2^n = \frac{1}{2^n}\pmatrix{1 & 0 \\ 0 & 1} + \binom{n}{1}\frac{1}{2^{n-1}}\pmatrix{0 & 1 \\ 0 & 0} = \pmatrix{\frac{1}{2^n} & 0 \\ 0 & \frac{1}{2^n}} + \pmatrix{0 & n \frac{1}{2^{n-1}} \\ 0 & 0}.$$

That's algebraically equivalent to the formula in the question.
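A quick numerical check of the closed form from Did's comment (a sketch using NumPy):

```python
import numpy as np

P = np.array([[0.5, 0.25, 0.25],
              [0.0, 0.5,  0.5 ],
              [0.0, 0.0,  1.0 ]])

def p_power(n):
    """Closed form for P^n from the comment above."""
    a = 0.5 ** n
    b = n * 0.5 ** (n + 1)
    return np.array([[a, b, 1 - a - b],
                     [0, a, 1 - a],
                     [0, 0, 1]])

n = 7
assert np.allclose(np.linalg.matrix_power(P, n), p_power(n))
```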
proofpile-shard-0030-395
{ "provenance": "003.jsonl.gz:396" }
AP® Calculus AB-BC Free Version Difficult APCALC-EBKKEI

A line tangent to the graph of $f(x)=ax^2+bx+12$ has its point of tangency at $x=1$. If the tangent line has a slope of $-4$ and also contains the point $(0,\,11)$, which of the following equations could represent $f(x)$?

A $f(x)=x^2-6x+12$

B $f(x)=-x^2+6x+12$

C $f(x)=-3x^2+2x+12$

D $f(x)=3x^2-2x+12$
proofpile-shard-0030-396
{ "provenance": "003.jsonl.gz:397" }
# What is the concentration of hydrogen ions in a solution of pH = 4.0?

Jun 17, 2017

$\text{concentration of H}^+ = 0.0001\ \text{M}$

#### Explanation:

$\text{pH} = -\log(\text{concentration of H}^+)$

or, $10^{-\text{pH}} = \text{concentration of H}^+$

From the given info, $\text{pH} = 4$

$\therefore 10^{-4} = 0.0001$

$\text{concentration of H}^+ = 0.0001\ \text{M}$
proofpile-shard-0030-397
{ "provenance": "003.jsonl.gz:398" }
# Gravitational potential energy

## What is gravitational potential energy?

Lifting an object in a gravitational field transfers energy into the object's gravitational energy store. Gravitational potential energy is the energy an object has due to its height above Earth. The equation for gravitational potential energy is GPE = mgh, where m is the mass in kilograms, g is the acceleration due to gravity (9.8 m/s², or equivalently 9.8 N/kg, on Earth), and h is the height above the ground in meters.

## Gravitational potential energy equation

To calculate gravitational potential energy we use this equation:

$E_p = mgh$

## Gravitational potential energy demo

In this tutorial you will learn how to calculate the energy stored by a raised object.

## Chilled practice question

A barrel is lifted onto a shelf 3.5 m from the ground. The barrel has a mass of 22 kg. Calculate the energy in its G.P.E store.

## Frozen practice question

A ski lift transfers 11 kJ of energy into a man's G.P.E store. The man has a mass of 55 kg. Calculate the height he was elevated.

## Science in context

Lifting an object in a gravitational field transfers energy into the object's gravitational energy store.
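A small sketch of the calculation (the function names are illustrative; it also works both practice questions above):

```python
G = 9.8  # gravitational field strength on Earth, N/kg (= m/s^2)

def gpe(mass_kg, height_m, g=G):
    """Energy transferred to the gravitational store: E_p = m*g*h (joules)."""
    return mass_kg * g * height_m

def height_for_energy(energy_j, mass_kg, g=G):
    """Rearranged for the second style of question: h = E_p / (m*g)."""
    return energy_j / (mass_kg * g)

print(gpe(22, 3.5))                    # the barrel: 754.6 J
print(height_for_energy(11_000, 55))   # the ski lift: about 20.4 m
```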
proofpile-shard-0030-398
{ "provenance": "003.jsonl.gz:399" }
# [R] How to convert backslash to slash?

Duncan Murdoch murdoch at stats.uwo.ca
Wed Sep 24 15:20:05 CEST 2008

Shengqiao Li wrote:
> On Wed, 24 Sep 2008, Duncan Murdoch wrote:
>
>> Shengqiao Li wrote:
>>
>>> On Tue, 23 Sep 2008, Duncan Murdoch wrote:
>>>
>>>> On 23/09/2008 4:00 PM, Shengqiao Li wrote:
>>>>
>>>>> How to use sub, gsub, etc. to replace "\" in a string to "/"?
>>>>>
>>>>> For example, convert "C:\foo\bar" to "C:/foo/bar".
>>>>>
>>>> If those are R strings, there are no backslashes in the first one. It has
>>>> a formfeed and a backspace in it.
>>>>
>>> I did notice that this string was special. It's a legitimate R string. If
>>> "f" and "b" are replaced by "d", it will not be.
>>>
>> I didn't say it was not legitimate, I said that it contains no backslashes.
>> If you replace f or b with d, you do not have a legitimate string.
>>
>>> My purpose is to convert a Windows file path (e.g. copied from the Explorer
>>> location bar) to an R file path through some R function inside the R terminal.
>>> The "File->Change dir..." dialog takes a file path like "C:\Acer", but the setwd
>>> function will fail.
>>>
>> That's not true. If you enter a backslash in the string, setwd() works fine.
>>
>> Your problem is that you are confusing R source code with the strings that it
>> represents. The R source code for the file path C:\Acer is "C:\\Acer". The
>> R source code "C:\foo\bar" contains no backslashes, it contains the
>> characters C, :, formfeed, o, o, backspace, a, r.
>>
>> If you have the string C:\Acer in the Windows clipboard, then you can read it
>> from there using readClipboard(). (There are many other ways to read the
>> clipboard as well;
>> using 'clipboard' as a filename generally works.) You can then pass it to
>> setwd(), and it will be fine.
>
proofpile-shard-0030-399
{ "provenance": "003.jsonl.gz:400" }
IIT JAM
May 8, 2021 10:44 pm
30 pts

In this question, what's my mistake? Please tell me. [attached image of the attempted solution]

• Gattu uday
  Remember, in this type of case do one thing: dim U = (no. of variables) - (no. of restrictions)... and det = 5 here, so the rank of A is 3.