proofpile-shard-0030-100
{ "provenance": "003.jsonl.gz:101" }
# linearKdot

##### Multitype K Function (Dot-type) for Linear Point Pattern

For a multitype point pattern on a linear network, estimate the multitype $$K$$ function which counts the expected number of points (of any type) within a given distance of a point of type $$i$$.

Keywords: spatial, nonparametric

##### Usage

linearKdot(X, i, r=NULL, …, correction="Ang")

##### Arguments

X: The observed point pattern, from which an estimate of the dot type $$K$$ function $$K_{i\bullet}(r)$$ will be computed. An object of class "lpp" which must be a multitype point pattern (a marked point pattern whose marks are a factor).

i: Number or character string identifying the type (mark value) of the points in X from which distances are measured. Defaults to the first level of marks(X).

r: numeric vector. The values of the argument $$r$$ at which the $$K$$-function $$K_{i\bullet}(r)$$ should be evaluated. There is a sensible default. First-time users are strongly advised not to specify this argument. See below for important conditions on $$r$$.

correction: Geometry correction. Either "none" or "Ang". See Details.

…: Ignored.

##### Details

This is a counterpart of the function Kdot for a point pattern on a linear network (object of class "lpp"). The argument i will be interpreted as levels of the factor marks(X). If i is missing, it defaults to the first level of the marks factor. The argument r is the vector of values for the distance $$r$$ at which $$K_{i\bullet}(r)$$ should be evaluated. The values of $$r$$ must be increasing nonnegative numbers and the maximum $$r$$ value must not exceed the radius of the largest disc contained in the window.

##### Value

An object of class "fv" (see fv.object).

##### Warnings

The argument i is interpreted as a level of the factor marks(X). Beware of the usual trap with factors: numerical values are not interpreted in the same way as character values.

##### References

Baddeley, A., Jammalamadaka, A. and Nair, G. (to appear) Multitype point process analysis of spines on the dendrite network of a neuron. Applied Statistics (Journal of the Royal Statistical Society, Series C), in press.

##### See Also

Kdot, linearKcross, linearK.

##### Examples

# NOT RUN {
data(chicago)
K <- linearKdot(chicago, "assault")
# }

Documentation reproduced from package spatstat, version 1.56-1, License: GPL (>= 2)
proofpile-shard-0030-101
{ "provenance": "003.jsonl.gz:102" }
chapter 19

## The Higgs Mechanism

We have seen in Problem 30 that the four-Fermi interaction in good approximation can be written in terms of the exchange of a heavy vector particle. In lowest order we have, respectively, the diagrams in Figures 19.1(a) and 19.1(b). The first diagram comes from a four-fermion interaction term that can be written in terms of the product of two currents $J_\mu J^\mu$, where $J_\mu = \bar\psi\gamma_\mu\psi$. Here each fermion line typically carries its own flavour index, which was suppressed for simplicity. Figure 19.1(b) can be seen to effectively correspond to
$$-\tilde J^\mu(-k)\left(\frac{g_{\mu\nu} - k_\mu k_\nu/M^2}{k^2 - M^2 + i\varepsilon}\right)\tilde J^\nu(k). \qquad (19.1)$$
At values of the exchanged momentum $k^2 \ll M^2$, one will not see a difference between these two processes, provided the coupling constant for the four-Fermi interactions [Figure 19.1(a)] is chosen suitably (see Problem 30). This is because for small $k^2$, the propagator can be replaced by $g_{\mu\nu}/M^2$, which indeed converts eq. (19.1) to $J^\mu J_\mu/M^2$. It shows that the four-Fermi coupling constant is proportional to $M^{-2}$, such that its weakness is explained by the heavy mass of the vector particle that mediates the interactions. Examples of four-Fermi interactions occur in the theory of $\beta$-decay, for example, the decay of a neutron into a proton, an electron and an antineutrino. In that case the current also contains a $\gamma^5$ (Problem 40).
proofpile-shard-0030-102
{ "provenance": "003.jsonl.gz:103" }
# converting decimal to hex

## Recommended Posts

Hi anyone, i have a little problem with converting a decimal to a hexadecimal number. i know how it works in theory. i´ve written some code and it doesn´t work and i have searched the net for some example, but i haven´t found something useful. (i´m at work at the moment so i can´t post the code now) so my question: is there a function in c/c++ to convert a decimal to hex (i have an example with printf, but it didn´t work) or do you have some piece of code to do it? thanx in advance. greetz TheMatrixXXX

##### Share on other sites

Keep in mind that both "decimal" and "hex" are really display interpretations of the same underlying data: what's contained in memory is a sequence of bits (the number in binary), which won't be affected by "changing bases". What you can do is tell the output stream to use hex rather than decimal when it encodes numeric values for output. This is done with a stream manipulator in C++, e.g.:

```cpp
#include <iostream>
using namespace std;

int main() {
    cout << hex << 123 << endl; // output: 7b
}
```

Here std::hex is the necessary stream manipulator. In C, you can try playing with the %x or %p (IIRC) format specifiers for printf (and related functions).

##### Share on other sites

sorry, i really didn´t think that way, damn i always thought that it was a stupid question ;) thanx greetz TheMatrixXXX
proofpile-shard-0030-103
{ "provenance": "003.jsonl.gz:104" }
## 3D Eddy-Current Computation Using Krylov Subspace Methods

This paper considers the numerical solution of a transmission boundary-value problem for the time-harmonic Maxwell equations with the help of a special finite volume discretization. Applying this technique to several three-dimensional test problems, we obtain large, sparse, complex linear systems, which are solved using BiCG, CGS, BiCGSTAB, and GMRES. We combine these methods with suitably chosen preconditioning matrices and compare the speed of convergence.
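For orientation, a minimal sketch of how such a preconditioned Krylov solve can look in practice (SciPy assumed, with a toy complex sparse system standing in for the finite-volume discretization; this is not the paper's code):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy large, sparse, complex tridiagonal system.
n = 2000
A = sp.diags(
    [np.full(n - 1, -1.0), np.full(n, 4.0 + 1.0j), np.full(n - 1, -1.0)],
    offsets=[-1, 0, 1], format="csc",
)
b = np.ones(n, dtype=complex)

# Incomplete-LU factorization used as the preconditioner.
ilu = spla.spilu(A)
M = spla.LinearOperator(A.shape, ilu.solve, dtype=complex)

x, info = spla.bicgstab(A, b, M=M)  # info == 0 means the iteration converged
print(info, np.linalg.norm(A @ x - b))
```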
proofpile-shard-0030-104
{ "provenance": "003.jsonl.gz:105" }
# Help for Voltage regulator LM338 with current limitation of 5A #### romain514 Joined Dec 16, 2015 18 Hi all, I want to do a voltage regulator with a transformer that gives me 24VAC@6A and a LM338. But since the LM338 can support 5A I would like to limit the current to 5A. I did a little circuit here and placed the resistor R4 (0.24Ohm) where i think it would limit the current. I also have a potentiometer to regulate the voltage from 1.25V to 30V. Can anybody help me and tell me if my circuit is good and if it would function as it is? By the way: I'm a noob in electronics so please be patient we me! Thanks all #### dl324 Joined Mar 30, 2015 12,871 Putting a current sense resistor there will affect voltage regulation. Better to put it up stream of the regulator and clamp the output to 1.2V for over current. Or just let the regulator protect itself. #### Dodgydave Joined Jun 22, 2012 9,956 No it wont work, Your transformer is drawn wrong, also the series resistor wont limit the current it will just drop the voltage. #### alfacliff Joined Dec 13, 2013 2,458 the transformer rating of 6 amps is no problem, the voltage regulator has internal regulation. and current limiting. most problems are caused by insuficient heatsink or too high input voltage causing overheating. also, the 24 volt transformer should be changed to a 24 volt center tapped transformer with the center tap grounded instead of the four diode bridge shown, the voltage to the regulator will be less and the heat generated will be less too. Last edited: #### romain514 Joined Dec 16, 2015 18 @ DogyDave: I know that the drawing for the transformer is not good, as i didnt find the good part in eagle for it. But my transformer is a Triad Magnetics F-260U. Here you can see it on Digikey: https://www.digikey.ca/product-detail/en/F-260U/237-1950-ND/5032185 Can somebody tell me how should I do this circuit? Or like you guys say the LM338 as an internal current limiter? I'm not sure of that but I just want to be sure that the current will never be more than 5A. On the LM338 Datasheet it says that it can peek to 7A so thats why im asking questions. Thanks #### AnalogKid Joined Aug 1, 2013 9,260 A single 338 can be configured as a constant voltage regulator or a constant current regulator, but not both. To do what you want, have one 338 set up as a traditional adjustable voltage regulator. Then, in front of that, put a current limiter circuit. This can be another 338 configured differently, or a power transistor with some parts around it. ak #### ScottWang Joined Aug 23, 2012 7,025 Choosing an AC 0-12V-20V transformer, using a switch to set two ranges of power source: 1. Vo_Lo = 12Vac*1.414 = 17 Vdc, used for output voltages 0~12V 2. Vo_Hi = 20V*1.414 = 28.3 Vdc, used for output voltages 0~24V #### romain514 Joined Dec 16, 2015 18 will that work? @AnalogKid I will try to make a circuit with what you said and put it here to ask you again if its good. Thanks! #### #12 Joined Nov 30, 2010 18,222 AnalogKid hit where I was aiming. You can't regulate voltage and current at the same time, with only one active component. They will fight. You must choose. Is current limiting so almighty important that you can't trust the chip to be accurate enough? Usually, it isn't. Besides that, the current regulator in the chip is sensitive to temperature so most people run into problems with heat. That is why ScottWang suggested a tapped transformer. 
Any LM338 that can have 30 volts across it can not pass 5 amps because it will overheat and shut down on internal safety. Good start kid, but you came here for fine tuning, and it's coming at you. Probably harder than you would expect, but you showed enough quality to get good, solid answers. We've seen so many amateurs completely flummoxed that we wouldn't treat them that way. We would be tippy-toeing around some VERY basic concepts because that's as fast as they could learn. You scored about 98%. Excellent for a first try! #### #12 Joined Nov 30, 2010 18,222 VR1 R5 R2 You forgot to connect your control loop back to the adjust pin. The control loop is dominant. Q2 only comes into the equation when you hit maximum current. Only then does Q2 dump the signal from the control loop. #### AnalogKid Joined Aug 1, 2013 9,260 Actually, I like Dave's circuit in post #6. The only advantage to having two power devices as I described is that the peak regulator power dissipation that happens when current limiting kicks in is spread out across two parts. ak #### romain514 Joined Dec 16, 2015 18 @AnalogKid here is my new design following what you've said can you tell me if that works? thanks View attachment 96686 #### MikeML Joined Oct 2, 2009 5,444 @romain514, have you done a heatsink calculation. Unless you lower the input voltage to the regulator by closely matching the transformer secondary voltage to the drop-out voltage of the regulator, your LM338 will be dissipating some where between 25 and 100W, which requires either a huge heatsink and/or forced air cooling with a blower/fan! The current limiting intrinsic inside the LM338 is described on the Data Sheet: here. #### #12 Joined Nov 30, 2010 18,222 Add another D2 across LM1 by connecting from C2 to C5 and change VR1 to 2200 ohms. Somebody is going to have to do the transformer section for you because you look lost. Edit: MikeML did it better in the next post. Last edited: #### MikeML Joined Oct 2, 2009 5,444 The two cascaded LM338 (one for current limiting, other for voltage regulation) is a bad way to go. It more than doubles the power dissipation because each regulator must have enough head room not to drop-out, plus 1.25V drop across the current-sensing resistor. The better way is right off the TI LM338 datasheet, which shows the 2N2222 reducing the V(adj) as a function of the current in the 0.2Ω sensing resistor. Obviously, its value would be adjusted to get 5A. Rs = 0.6/5 = 0.12Ω. Ps = 0.6*5 = 3W This sensing resistor only drops ~0.6V, so wastes less power, and the input voltage, at the minimum of the filter capacitor ripple only has to be 24V + V(do) (Vdropout), which is about 2.7V @ 5A. Last edited: #### romain514 Joined Dec 16, 2015 18 @MikeML is there anything to add on that circuit? can you verify it for me please? thank you very much. And by the way how can i will regulate the voltage here? Anybody else has anything to add or suggest? thank you everyone! #### dl324 Joined Mar 30, 2015 12,871 Ground needs to be between R3 and R4. R3 needs to be a pot. You may need a small additional load, minimum current for LM317 is 10mA and LM338 may also have one. #### AnalogKid Joined Aug 1, 2013 9,260 The two cascaded LM338 (one for current limiting, other for voltage regulation) is a bad way to go. It more than doubles the power dissipation. Don't think so. 
For a given transformer output voltage, load voltage, and load current, the power dissipated in the entire regulator system is a constant no matter how it is divided up (pass devices, sense resistors, whatever). This assumes that there is enough headroom for two series devices, but that was pretty clear in post #1.

The better way is right off the TI LM338 datasheet, which shows the 2N2222 reducing the V(adj) as a function of the current in the 0.2Ω sensing resistor.

That's essentially the same as Dave's circuit in #6.

ak

#### #12 Joined Nov 30, 2010 18,222

And here we have 3 ways to do the transformer. Notice the different wattage rating of the two diode design. The four diode design uses the transformer more efficiently and that translates into . There is about 1/2 volt difference in the voltage at the first capacitor. Well within safe range.
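As a rough back-of-the-envelope check of the numbers discussed in this thread (a sketch with assumed values, not a substitute for the datasheet's thermal calculations):

```python
# Sense resistor for a ~5 A limit using the 2N2222 clamp approach described above.
v_be, i_limit = 0.6, 5.0        # assumed transistor B-E threshold (V) and current limit (A)
r_sense = v_be / i_limit        # 0.12 ohm
p_sense = v_be * i_limit        # 3 W dissipated in the sense resistor
print(r_sense, p_sense)

# Linear-regulator dissipation is (Vin - Vout) * Iload, so it depends strongly on how far
# the output is set below the rectified input (assumed here to be ~34 V dc from 24 VAC peak).
v_in, i_load = 24 * 1.414, 5.0
for v_out in (24.0, 12.0, 5.0):
    print(f"Vout={v_out:5.1f} V -> regulator dissipates {(v_in - v_out) * i_load:6.1f} W")
```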
proofpile-shard-0030-105
{ "provenance": "003.jsonl.gz:106" }
Math Help - integrating factor 1. integrating factor dy/dx + 2y = sin x In this equation what is the integrating factor? Is it e^2x? 2. Originally Posted by geton dy/dx + 2y = sin x In this equation what is the integrating factor? Is it e^2x? Finally I’ve done & this equation is really awesome. Problem resolved 3. Originally Posted by geton Finally I’ve done & this equation is really awesome. Problem resolved
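For reference, a worked sketch of the standard integrating-factor method for this equation (the usual textbook recipe, not necessarily the poster's own working): the integrating factor is $\mu(x) = e^{\int 2\,dx} = e^{2x}$, so
$$\frac{d}{dx}\left(e^{2x} y\right) = e^{2x}\sin x \quad\Rightarrow\quad e^{2x} y = \int e^{2x}\sin x\,dx = \frac{e^{2x}}{5}\left(2\sin x - \cos x\right) + C,$$
giving $y = \frac{1}{5}\left(2\sin x - \cos x\right) + Ce^{-2x}$; so yes, the integrating factor is indeed $e^{2x}$.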
proofpile-shard-0030-106
{ "provenance": "003.jsonl.gz:107" }
# Time as a function of distance for constant a 1. May 8, 2007 ### DaveC426913 I want to make a little calculator that will - take a distance (such as, say, 20.7 light years) and, - with a given acceleration (say, 1g) followed by deceleration, spit out the time elapsed. I don't know how to fold Lorentz' velocity function into the formula for t as a function of d and a. 2. May 8, 2007 3. May 8, 2007 ### Meir Achuz $$d=[\sqrt{1+a^2t^2}-1]/a].$$ Solve for t. 4. May 8, 2007 ### JesseM Do you want the time elapsed by clocks on the ship, or in the frame where the ship was at rest at the beginning and end of the journey? 5. May 9, 2007 ### DaveC426913 Yes! (Both) And nertz to me for neglecting to say so up front... 6. May 9, 2007 ### DaveC426913 As Jesse wisely asks, is that ship time? or Earth time? 7. May 9, 2007 ### JesseM Just look at the page robphy pointed you to, all the terms are defined, and the equations you need are there. They say: So you want to find the equations on that page which give you t or T if you already know d and a. Those would be: t = sqrt[(d/c)^2 + 2d/a] and T = (c/a) ch-1 [ad/c^2 + 1] These equations are for continuous uniform acceleration, so if you want to accelerate for half the trip and then decelerate for the second half, that's equivalent to double the time it would take to accelerate uniformly for half the distance. So the equations would become: t = 2 * sqrt[([d/2]/c)^2 + 2d/a] and T = 2 * (c/a) ch-1 [a(d/2)/c^2 + 1] ch-1 is the inverse hyperbolic cosine function, acosh. Here is an online calculator which understands the function acosh(x), but if you want to program your own calculator, this page says that $$acosh(x) = log[x + \sqrt{x^2 - 1}]$$. Last edited: May 10, 2007 8. May 9, 2007 ### DaveC426913 OK, so I kind suck at advanced math, making me the world's worst physics geek. So I made a connect-the-dots graph from the data they provided. See attached. Four whole data poiints... But it gets me a number within 10%. So, a 20.7ly journey is actually two symmetrical 10.35ly journeys, and a 10.35ly journey at 1g shows 3.2 years elapsed shiptime. Thus, the whole journey lasts 6.4 years shiptime, while by Earth-time the journey lasts just a little more than 21 years. Cool. File size: 23.1 KB Views: 93 9. May 9, 2007 ### DaveC426913 On an loosely-related note, that page clears up some things I was never quite sure about in relativistiic travel. It's a staple of sci stories that have long space journeys where they return to an Earth that's a zillion years older than when they left. I'd never quite gotten the idea of whether something weird is happening to Earth-time, or ship-time or both. As if, by virtue of their trip, Earth-time had gone into overdrive and they had "missed" all those years. But I'm beginning to see that even the ship's occupants can be sure that Earth's clock is "the right one" (forgive me Einstein). A spaceship is scheduled for Star X, which everyone agrees is 83ly away. The spaceship occupants, once they get up to speed, see that their measurement of the distance to Star X is not fixed - it is not a journey of 83 light years, it is a journey of merely 5 years. Yet when they get to Star X, they see that Sol is once again 83ly away. So, even *they* agree that the journey should - and did - take 83 years from a PoV stationary to both Sol and Star X. So I can lay to rest my confusion. A journey of 100ly takes ~100 years - by Sol time AND by Star X time. 
The ship occupants acknowledge that it was they who slowed to a crawl while 83 years elapsed normally. Or looking at it the other way, when the Niven's Rammer did his loop around the galactic core and came back "two million years in Earth's future", he really did make a trip (and one that he could plot and measure on a galactic map) whose duration should have been - and was - 2 million years. Last edited: May 9, 2007 10. May 10, 2007 ### Meir Achuz In the equation I gave, t is the Earth time. 11. May 10, 2007 ### MeJennifer It's all in the proper quantities Good thing! The proper distance, which is the actual distance traveled by a traveler, is always smaller than the distance as measured by a reflecting light beam. So in other words, if the distance between two planets is X then any traveler must record a distance smaller than X. Gives an interesting spin on Zeno's paradox doesn't it? Proper acceleration changes proper distance, which is the reason that travelers traveling between two events can record a different proper time. Last edited: May 10, 2007 12. May 10, 2007 ### JesseM I got 6.1 years and 22.6 years...one thing to remember is that, as noted on the relativistic rocket page, if you're using units of years for time and light-years for distance, then 1G acceleration is 1.03 ly/y^2. Last edited: May 10, 2007 13. May 10, 2007 ### DaveC426913 Wow thanks! That first one works perfectly. The second one I'm trying to massage. This is what I get: n = a*(d/2)/c^2 + 1 m = Math.log(n+Math.sqrt(n*n-1)) Te = 2 * (c/a) * m Unfortunately, that spits out a number in the millions. Last edited: May 10, 2007 14. May 10, 2007 ### JesseM Hmm, try having your program print out n and m as well as Te to give you a better idea of where it's going wrong. If I use your example of d = 20.7, then with a=1.03 and c = 1 I get: n = 11.6605 m = 3.1475 (BTW, make sure you're using the natural log function 'ln' rather than log base 10) Te = 6.11 15. May 10, 2007 ### DaveC426913 Math.log(x) returns the natural log (base E) of x What units is c in? I set it to 300000. a = 1.03 d = 20 c = 300000; n = a*(d/2)/c^2 + 1 ch1 = Math.log(n+Math.sqrt(n*n-1)) Te = 2 * (c/a) * ch1 alert(" Te:" + Te + " n:"+n +" ch1:"+ ch1 ); yields Te:1026843.0140033511 n:3 1.7627417 Last edited: May 10, 2007 16. May 11, 2007 ### JesseM You can use whatever units you want, you just have to be consistent. If you want 1G acceleration, the a = 1.03 figure assumes you're using units of years for time and light-years for distance, in which case c = 1. If you want to use c = 300000, that means you're using units of kilometers for distance and seconds for time, in which case 1G acceleration would mean a = 0.0098 kilometers/second^2. This would mean a distance of 20 kilometers and an acceleration of 1.03 km/s^2 = 105G, which is not what you wanted. But even with these numbers, something seems to be going wrong with your program's math: If n = a*(d/2)/c^2 + 1, then you can just plug this into your calculator to see a*(d/2)/c^2 = 1.03*(10)/(300000)^2 = 1.144 * 10^-10, so n should be very close to 1. Have you checked the programming language's rules for entering equations? Maybe you need to have [d/2] instead of (d/2) of something minor like that. 
Anyway, I'd suggest entering different simple numbers (like a=1, d=1, c=1) to try to figure out what equation it's actually calculating, or break the equation up into more intermediate variables (like x = d/2, y = a*x, z = y/c^2) which it can print out so you can get a better idea of just where it's going wrong by double-checking the results with a calculator. Last edited: May 11, 2007 17. May 12, 2007 ### DaveC426913 I wish to verify something: This: T = 2 * (c/a) ch-1 [a(d/2)/c^2 + 1] is actually T = 2 * (c/a) * ch-1[a(d/2)/c^2 + 1] Right? 18. May 12, 2007 ### DaveC426913 Nope.I can't reconcile these two statements: T = 2 * (c/a) h(a(d/2)/c^2 + 1) where h(x) = log(x+x^2-1) 19. May 13, 2007 ### JesseM That's right. h(x) is supposed to be log(x + sqrt(x^2 - 1)). So just calculate the value of a(d/2)/c^2 + 1, and let that be x in the second equation. For example, if d = 20.7, a=1.03 and c = 1, then a(d/2)/c^2 + 1 = 1.03*10.35 + 1 = 10.6605 + 1 = 11.6605, so according to this calculator acosh(11.6605) = 3.14751047236, and log(11.6605 + sqrt(11.6605^2 - 1)) = log(11.6605 + 11.6175) = log(23.278) = 3.1475. Last edited: May 13, 2007
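Pulling together the corrected formulas from this thread, a small calculator along the lines Dave was building might look like this (a sketch assuming units of years and light-years with $c = 1$ and $a = 1.03$ ly/yr² for 1 g, accelerating for the first half of the distance and decelerating for the second):

```python
from math import sqrt, acosh

def trip_times(d, a=1.03, c=1.0):
    """Return (Earth-frame time, ship proper time) in years for a trip of d light-years,
    accelerating at a for the first half and decelerating for the second half."""
    half = d / 2.0
    t_earth = 2.0 * sqrt((half / c) ** 2 + 2.0 * half / a)
    t_ship = 2.0 * (c / a) * acosh(a * half / c ** 2 + 1.0)
    return t_earth, t_ship

print(trip_times(20.7))  # ~ (22.6, 6.1), matching JesseM's figures above
```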
proofpile-shard-0030-107
{ "provenance": "003.jsonl.gz:108" }
Does Redbull give you wings?

The text of this report is primarily based on the work of Catherine Fan, Sarah Langlois, Molly Meade, Indee Ratnayake, Justine Colvin and Jessica Mackay. They are students majoring in Medical Science on the Cambridge Tradition program.

Introduction

Caffeine, a bitter, white crystalline xanthine alkaloid, acts as a central nervous system stimulant drug in the human body. It is commonly used in humans as a way to temporarily ward off drowsiness and restore alertness. Caffeine is found in the seeds, leaves and fruits of some plants. Caffeine serves as a natural pesticide for plants because of the effect it has on insects, inhibiting their ability to feed on the plant. In the body, adenosine binds to adenosine receptors, decreasing cell activity. Caffeine competitively binds to adenosine receptors, which prevents the adenosine from binding, and in turn leads to increased neuron firing. Caffeine also constricts blood vessels in the brain. In response to increased neuron firing, the pituitary gland releases adrenaline, which in turn increases the heart rate. Previous studies suggest caffeine reaches peak plasma concentration between 35-45 minutes after ingestion. This trial examines whether the caffeine content of Redbull has any clinical value in decreasing reaction times.

Aim

The aim of this experiment is to explore the effects of a caffeine challenge equal to the dose present in a Redbull on the reaction times, heart rates and blood pressure of Cambridge Tradition students aged 16-17 years, in a randomised controlled trial.

Methods

The class roster for the 2014 Medical Science class was randomised into two equally sized groups. The intervention group received 150 ml of orange juice with a caffeine concentration of 53.3 mg/100ml. The placebo group received 150 ml of orange juice with 80 mg of glycolate. Glycolate is a tasteless, odorless disintegrant, which is available sterile and leaves a similar residue to dissolved caffeine. All students took baseline heart rates, and reaction tests at baseline and around 45 minutes after intervention. For the reaction test, each student held their fingers over the 0 cm mark on a 30 cm ruler, and measured where they caught it when released by another student. The test was taken in triplicate and the mean recorded. Reaction test results need to be converted to a temporal measure of reaction speed. We can convert the metres the ruler fell to time using $Time = \sqrt{\frac{2*Distance}{Acceleration}}$, which equates to $Time = \sqrt{\frac{2*Distance}{9.8}}$. A subsample of 16 students also collected systolic and diastolic blood pressure using an automated cuff at baseline and around 45 minutes after intervention.

Tutor's note: To enable the teaching of both cross-over and randomised medical trial design, students received both the placebo and intervention with a 24 hour washout period between each challenge. Repeated measures within individuals were not accounted for in the results.

Results

46 students completed the trial, providing 75 observations. There were no losses to follow-up. For 8 observations the student had taken a caffeinated beverage earlier that day.

Table 1. Change in the two arms

Outcome            | Placebo group | Caffeine group
Reaction time (s)  | -0.07         | -0.22
Heart rate (bpm)   | -1.2          | 2.9
Systolic (mmHg)    | -2.5          | 6.4
Diastolic (mmHg)   | 1.8           | 5.1

Table 1 indicates that changes in HR, reaction time and BP occurred in both the placebo and caffeine groups.
Reaction time decreased to a greater degree in the caffeine group than in the placebo group, while HR and BP increased in the caffeine group.

Figure 1. Change in reaction time

Figure 1 shows the raw changes in reaction time for the two arms. The spread of observations was much greater in the caffeine group, and there was a stronger tendency towards a decrease in reaction time. Figure 1 also suggests that the raw change may not be an appropriate measure, and if we include the 95% confidence intervals of these values we may be better placed to understand whether a result is statistically significant.

Figure 2. Change in reaction time with 95% confidence intervals.

Figure 2 shows the spread with grey boxes which represent the 95% CI, the interval that we believe, with 95% confidence, includes the true mean for each arm.
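For completeness, the Methods conversion from ruler-drop distance to reaction time can be sketched as follows (assuming the drop is recorded in metres and g = 9.8 m/s²):

```python
from math import sqrt

def reaction_time(distance_m, g=9.8):
    """Convert how far the ruler fell (metres) into a reaction time (seconds)."""
    return sqrt(2 * distance_m / g)

print(reaction_time(0.15))  # catching the ruler at 15 cm ~ 0.175 s
```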
proofpile-shard-0030-108
{ "provenance": "003.jsonl.gz:109" }
# Automate mouse clicks with Mathematica

I need to automate mouse clicks with Mathematica. However, a similar question was closed on this forum, without any answer. Consider the following steps:

```mathematica
Dynamic[MousePosition[]]
```

In this first step I dynamically show the mouse position.

```mathematica
DynamicModule[{col = Green},
 EventHandler[
  Style["text", FontColor -> Dynamic[col]],
  {"MouseClicked" :> (col = col /. {Red -> Green, Green -> Red})}]]
```

In this second step I create a "clickable" text: when you click it, it changes its color between green and red. My objective here is twofold:

1) automatically move the cursor position - this has already been answered here. So, following the original answer I can write

```mathematica
Needs["JLink`"];
ReinstallJava[];
robotclass = JavaNew["java.awt.Robot"];
robotclass@mouseMove[90, 196];
```

2) automatically click on the clickable text (and, of course, I want to see its color changing). Here is the code I'm trying to use:

Although I believe I'm using the correct functions (which belong to the Java Robot class), I'm getting the following error message:

Java::argx1: Method named mouseRelease defined in class java.awt.Robot was called with an incorrect number or type of arguments. The argument was InputEvent.BUTTON1_MASK.

Can anybody give me a hint to solve this problem?

You can use the raw integer value 16, but the correct J/Link syntax for the symbolic name is InputEvent`BUTTON1UMASK, so your code would look like this:

```mathematica
(* Call LoadJavaClass when needing to refer to static members. *)
LoadJavaClass["java.awt.event.InputEvent"];
robotclass@mousePress[InputEvent`BUTTON1UMASK];
robotclass@mouseRelease[InputEvent`BUTTON1UMASK];
```
proofpile-shard-0030-109
{ "provenance": "003.jsonl.gz:110" }
# Riemann zeta function form for Dirichlet series of lcm $[m,n]^{-s}$

Define $f(k) = |\{(m, n) \in \mathbb{N}^2 : [m,n] = k\}|$ and $F(s) = \sum_{i=1}^\infty \frac{f(i)}{i^s}$ where $s = \sigma + it \in \mathbb{C}$. Write $F$ in Riemann zeta function form and determine its half-plane of absolute convergence.

I would like to study this problem. First, I notice that
$$F(s) = \sum_{i=1}^\infty \frac{f(i)}{i^s} = \sum_{m = 1}^\infty \sum_{n=1}^\infty \frac{1}{[m, n]^{s}} = \sum_{m>1}\sum_{n=1}^\infty [m, n]^{-s} + \sum_{i=1}^\infty \frac{1}{i^s} = \sum_{m>1,\ n= 1,2,3,\dots} [m, n]^{-s} + \zeta(s).$$
For $m = 2$,
$$\sum_{n=1}^\infty \frac{1}{[m, n]^s} = \frac{1}{2^s} + \frac{1}{2^s} + \frac{1}{6^s} + \dots = \frac{1}{2^s} + \sum_{k=1}^\infty \frac{1}{(2k)^s} + \sum_{k=3}^\infty \frac{1}{(2k)^s} = 2 \sum_{k=1}^\infty \frac{1}{(2k)^s} - \frac{1}{4}.$$
Not sure what the Riemann zeta form of that is. I also think that this is not a good idea to achieve the formula (if it has a pattern, it might suggest the formula.)

## 1 Answer

We first claim the following:

**Lemma.** For any $n\in\mathbf Z_+$ it holds that $f(n)=d(n^2)$, where $d(\cdot)$ is the divisor function.

*Proof.* We prove it by induction on the number of factors of $k$. If $k$ is a power of a prime then the result is plainly true, so suppose that it holds when $k$ has $N$ distinct prime factors, and we will show that $f(k\cdot p^\gamma)=f(k)f(p^\gamma)$ for any prime $p$ that does not divide $k$. Indeed, a pair $(a,b)$ satisfies $[a,b]=kp^\gamma$ iff either $p^\gamma\mid a$, $p^j\mid b$ ($j=0,\dots,\gamma$) and $[a/p^\gamma,b/p^j]=k$, or $p^j\mid a$ ($j=0,\dots,\gamma-1$), $p^\gamma\mid b$ and $[a/p^j,b/p^\gamma]=k$. Since there are $f(k)\cdot (\gamma+1)$ possibilities in the former case, and $f(k)\cdot \gamma$ in the latter, we have that the number of possible pairs is
$$f(k)[\gamma+1+\gamma]=f(k)(2\gamma+1)=d(k^2)d(p^{2\gamma})=d(k^2p^{2\gamma})=f(kp^\gamma).$$
This proves the lemma. $\blacksquare$

From the lemma, we see then that
$$\sum_{n\geq1}\frac{f(n)}{n^s}=\sum_{n\geq1}\frac{d(n^2)}{n^s}.$$
Since $d(n^2)$ is a multiplicative function, we know that
$$\sum_{n\geq1}\frac{d(n^2)}{n^s}= \prod_{p}\left\{1+\frac{d(p^2)}{p^s}+\frac{d(p^4)}{p^{2s}}+\dots\right\}$$
where the product is extended over all the primes. This is,
$$\begin{align}\sum_{n\geq1}\frac{d(n^2)}{n^s} &=\prod_{p}\left\{1+\frac{3}{p^s}+\frac{5}{p^{2s}}+\dots\right\}\\ &=\prod_p\sum_{l=0}^\infty\frac{2l+1}{p^{sl}}\\ &=\prod_p \frac{p^s \left(p^s+1\right)}{\left(p^s-1\right)^2}\\ &=\prod_p\frac1{1-p^{-s}}\prod_p(1+p^{-s})\prod_p\frac1{1-p^{-s}}\\ &=\zeta(s)\frac{\zeta(s)}{\zeta(2s)}\zeta(s)\\ &=\frac{\zeta(s)^3}{\zeta(2s)}. \end{align}$$
Hence
$$\sum_{n\geq1}\frac{f(n)}{n^s}=\frac{\zeta(s)^3}{\zeta(2s)},\qquad\text{whenever }\Re(s)>1.$$
I would start with: if $\gcd(k,k') = 1$ and $\operatorname{lcm}(m,n) = k$, $\operatorname{lcm}(m',n') = k'$ then $\operatorname{lcm}(mm',nn') = kk'$, and conversely, if $\operatorname{lcm}(a,b) = kk'$ then $a = mm'$, $b = nn'$, $\operatorname{lcm}(m,n)=k$, $\operatorname{lcm}(m',n')=k'$; thus $f(k)$ is multiplicative. – user1952009 Oct 20 at 19:06

@iqcd I try to convince myself about your Lemma, but I am not sure if it is correct. If I do not calculate anything wrong, $f(6) = |\{(1,6), (2,6), (3,6), (6,6), (6,1), (6,2), (6,3)\}| = 7$ but $d(36) = d(2^2)d(3^2) = 9$? – Both Htob Oct 20 at 19:16

Oh, okay. It is just silly that I forgot $(2, 3)$ and $(3, 2)$. Okay, so it seems that your Lemma is perfectly fine. Could you tell me the intuition? I mean, how can you recognize that these two functions are the same? I really had no idea that the number of ordered pairs for $[m, n]$ is exactly the number of positive divisors of a square. – Both Htob Oct 20 at 19:21

@user1952009 Definitively a quicker and much better solution. I don't know why I didn't start trying that out first – iqcd Oct 20 at 19:22

@BothHtob I thought "If this function is multiplicative, then it is easy". After proving it, I noticed that $f(p^\gamma)=2\gamma+1$ for each prime, and recalled that $d(p^\gamma)=\gamma+1$, so the identification became evident. – iqcd Oct 20 at 19:24
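As a quick numerical sanity check of the lemma $f(n) = d(n^2)$ (a brute-force sketch, independent of the proof above):

```python
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

def f(k):
    # ordered pairs (m, n) with lcm(m, n) = k; both m and n must divide k
    divisors = [d for d in range(1, k + 1) if k % d == 0]
    return sum(1 for m in divisors for n in divisors if lcm(m, n) == k)

def num_divisors(k):
    return sum(1 for i in range(1, k + 1) if k % i == 0)

assert all(f(k) == num_divisors(k * k) for k in range(1, 61))
print("f(k) = d(k^2) holds for k = 1..60")
```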
proofpile-shard-0030-110
{ "provenance": "003.jsonl.gz:111" }
# Why churn prediction ≠ churn reduction, and what to do instead

On the inherent shortcomings of churn prediction, and how customer retention can be improved with uplift modeling and dynamic point of cancellation offers. Rob Moore

## 1   Reducing customer churn is the best way to boost growth

The very unfortunate truth about customer churn is that it scales with your business. Meaning, the more your business grows, the more customers you'll have cancelling every month. In order to keep pace with your expanding churn numbers, you'll either need to find magical exponential growth, reduce your churn, or risk hitting an early growth ceiling.

Let's say, for example, our software-as-a-service (SaaS) product has an average monthly revenue of $50/customer. When we first start up, we're adding 100 customers a month, growing our MRR by $5,000 each month. In the second month, 7 of those customers cancel their subscriptions (a 7% churn rate - the industry average for B2C SaaS companies). Not a big deal - we're still netting 93 new customers! In the following month, another 7% churn - 13 of our 193 customers entering the month. But, we've added another 100 new customers, still netting 87 new subscriptions and a healthy $4,350 uptick in MRR. So we continue on our merry way adding to our top line revenue each month. Then, seemingly out of nowhere, churn rears its ugly head. Midway through our second year, we've reached 1400 customers, and guess what? Thanks to our 7% churn rate, we're only breaking even with the 100 new customers that sign up each month. Our customer base (and revenue) is no longer growing. We have officially hit our growth ceiling - 1,400 customers and $71,000 MRR.

Unfortunately, this is far from a hypothetical example for me. Churn became a very real problem very quickly for Wavve.co. Luckily for all of us, a small reduction in churn can yield a massive boost in the growth ceiling. To continue with the numbers above, let's cut the churn rate from 7% to 5%. With this reduction, the growth ceiling suddenly jumps up from $71,000 MRR to $100,000 MRR - an increase of more than 40%, or $350K in ARR. Below, you can play around with how a 30% reduction in churn rate can impact future growth.

[Interactive calculator: current customer count, new customers per month, monthly revenue per user, and current churn % per month. With the default inputs it reports $301,500 more revenue over 24 months with a 30% churn reduction - MRR growth in 24 months of $25,950 with the reduction, versus $5,302 at the previous pace.]

I would argue that reducing customer churn is the single best thing that most SaaS companies can be doing right now. As fun as growth is, keeping the customers you do have is far cheaper than acquiring new ones. With this in mind, let's take an in depth look at what can be done to reduce churn. First, we're going to be diving into churn prediction, where companies try to figure out which customers are going to cancel their subscriptions, and try to do something about it before that happens. This is often one of the first steps that a company will take once they realize their churn rate is becoming the limiting factor with their future growth.

## 2   How churn prediction works in practice

At its core, churn prediction attempts to put customers into two buckets:

1. Customers who will churn
2. Customers who will not churn.

Simple enough, right? As far as the data science is concerned, churn prediction can be treated as a standard classification problem.
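For concreteness, a minimal sketch of such a classifier (hypothetical feature names and toy data; scikit-learn assumed - this illustrates the general setup, not Wavve's or Churnkey's actual model):

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical per-customer features; any real feature set is product-specific.
df = pd.DataFrame({
    "months_subscribed":  [1, 14, 3, 22, 2, 8],
    "days_since_active":  [30, 2, 12, 1, 25, 4],
    "monthly_spend":      [10, 50, 10, 100, 10, 50],
    "used_new_feature":   [0, 1, 0, 1, 0, 1],
    "churned_next_month": [1, 0, 1, 0, 1, 0],   # label
})

X, y = df.drop(columns="churned_next_month"), df["churned_next_month"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)

model = LogisticRegression().fit(X_train, y_train)

# Predicted probability of churning next month, used to rank customers by risk.
churn_risk = model.predict_proba(X_test)[:, 1]
print(churn_risk)
```

Customers whose predicted risk exceeds some threshold (such as the 80% cutoff discussed below) would then be flagged for a retention campaign.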
We can look at a user’s behavior – how long they’ve been a customer, when they were last active, how much they’ve been paying per month, whether they’ve used your new feature, etc. – and train a machine learning model to predict how likely that user is to cancel their subscription in the next month. There are a variety of machine learning models that can predict churn reasonably well. In practice, decision trees and logistic regression tend to be commonly used. I full-heartedly predict that we will see a shift towards recurrent neural net (RNN) models in the near future, as they provide a much more natural model for this problem - and I’ll be writing on that soon, but for now, let’s put aside which ML model is used, and turn our focus to how they are used. ### 2.1 What is done with churn predictions Once we’ve trained a model to predict how likely a customer is to churn in the next month - how can we use that information to improve our bottom line? In particular, how can we use it to prevent cancellations? Below, I've plotted a hypothetical distribution of our customers and their likelihood to churn in the next month following what a typical distribution may look like for a SaaS company. You'll note that I've also marked a cutoff at 80%, highlighting the customers that our model predicted had at least an 80% probability to churn in the coming month. In developing our churn prediction model, the general idea is that we pick out the customers that are deemed “likely to churn” and hit them with a retention campaign, hoping to reinvigorate their love of the product, and, with a bit of luck on our side, prevent them from cancelling their subscription. ### 2.2 The retention campaign A good retention campaign is product specific, but in general they tend to follow a lot of the same patterns. The objectives here are to (i) re-engage customers so they get more value out of your product and/or (ii) present a customer offer to get them to stick around. Common retention tactics can include: 1. An email newsletter to bring your product to the front of customer’s minds 1. A new product feature announcement 2. Showcase how other customers are using your product 2. Initiating “customer success” calls, or sending direct emails to customers you want to re-engage 3. Reminding users to use the credits left in their account (ex. Audible) 4. Offer a one-time discount for users (ex. Uber) 5. Offer to put a customer’s subscription on pause (ex. Audible) 6. Offer to switch (downgrade) a customer’s plan Depending on your product, these campaigns should see varying (but typically pretty strong) engagement rates. At Wavve.co, we’ve had nearly 35% of customers accept an offer to temporarily pause their account. So, we’re getting after the right customers using our churn prediction model, and the retention campaigns are sufficiently engaging – all is well, right? ## 3   Where churn prediction goes wrong Let’s briefly back up and refocus on what our end goal is. We set out to reduce churn. In an attempt to do this, we designed a machine learning model which sifted through customer data and found the customers who were most likely to cancel their accounts. We then targeted these high risk customers with retention campaigns, hoping to prevent them from churning. Unfortunately, it isn’t so simple. 
In this section, I’ll dig into what can go wrong with this pattern of churn prediction and customer re-engagement, including disturbing the “do-not-disturb” customers, sending discounts to people who wouldn’t have churned anyway, and the self-biasing nature of churn prediction once we start using its predictions to decide who to target with retention campaigns. ### 3.1 Churn prevention ≠ churn minimization When we marked a customer to be targeted with a retention campaign, we took action. We reached out to these people ― to ask if they wanted to check out our new feature, or if they wanted more credits or a discount, etc. And some of these customers, who would have otherwise cancelled their subscription, decided to stick around. Great! But, at the same time, by knocking on these user's doors, we’ve also prompted some customers, who would not have otherwise churned, to cancel their subscriptions. Just a few weeks ago, Audible sent me a reminder that I had 2 credits to use. As luck would have it, that was just the reminder I needed to cancel my subscription! Clearly, this is exactly the opposite result Audible had hoped for with their churn prevention campaign. #### 3.1.1 Four customers To make things concrete, let’s categorize our customers into four groups based on their churn behavior with and without us taking action on them by using a retention campaign. We’ll get into the details of using different retention campaigns later, but for now let’s look at simply either "targeted" and "not targeted" to represent customers who were treated with a retention campaign or not, respectively. Breaking these down: 1. Do-not-disturbs. Customers who won’t churn unless we use a retention campaign. This is me cancelling my Audible subscription when they told me I had credits to use. Otherwise, I wouldn’t have thought about it and would have stayed a subscriber. 2. Sure things. These are your adamant subscribers – the customers who will be with you next month whether or not we use a retention campaign on them. Love this group. 3. Lost causes. This is just the opposite. The customers in this group are going to cancel their subscription no matter what we do. 4. Persuadables. This is the group we want to go after with retention campaigns. These customers are going to churn if we don’t do anything but will stick around if we use a retention campaign on them. In an ideal world, our churn prevention techniques would strictly target the fourth group, the persuadables. In reality, our predictive models are far from perfect and, regrettably, we are actually losing profits every time we target a customer who belongs to any of the other three groups: 1. Do not disturbs. Worst case scenario – by intervening, we lose customers who would otherwise would have continued as subscribers. 2. Sure things. When we allocate retention campaigns to this group, we are lowering profits by giving credits, discounts, and other costly perks to customers who would have kept paying us at full cost. 3. Lost causes. For these customers, we are losing the cost of running the retention campaign. This can vary largely between companies. If we’re just sending an email, then not much lost. If we’re a B2B SaaS company doing manual outreach and customer calls, this can get quite costly. ### 3.2 Churn prediction is self-biasing Ok, so we’ve seen how important it is to nail down specifically the segment of customers who are planning to churn unless we do something about it - the persuadables. 
Unfortunately, when it comes to segmenting customers into these four groups, we will never have complete information. We can, at best, narrow down each customer into one of two groups. For example, if we don’t use a retention campaign and a customer churns, we can never be sure if they would have cancelled had we targeted them with a campaign – we’ll never know if they were a lost cause or a persuadable. As a result, if a machine learning model is continuously trained on live churn data, it will create a self-biasing feedback loop. The retention campaigns that we initiate against customers who we believe are high churn risks directly biases our results. We can, of course, create a control group where we withhold any retention efforts, but this naturally goes against our incentives to reduce churn as much as possible. ## 4   What we should do instead of predicting churn Churn prediction can provide a step in the right direction, but it is hardly a solution in itself. Fundamentally, we don’t really care which customers are likely to churn nearly as much as whether or not we can do something about it, and what that something might be. ### 4.1 Adopt the churn prediction model to a customer uplift model Recent research efforts have turned attention to customer uplift modeling in lieu of churn prediction. In short, customer uplift modeling changes the question from “will this customer churn” to “will a retention campaign prevent this customer from churning”. That is, how likely is it that this customer belongs to our persuadables group? Concretely, uplift modeling looks at the predicted behavior after treatment (treatment, in our case, being a retention campaign). This is an important distinction in comparison to churn prediction. Instead of trying to generally predict if a customer will cancel their subscription, we are aiming to predict expected behavior after different treatments. Because uplift modeling takes into account which treatment is used and observes the results after that treatment has been applied, it eliminates the self-biasing aspect inherent in churn prediction. Furthermore, we can not just answer whether or not a retention campaign will work, but we can start to answer which retention campaign would be most effective for our bottom line. For example, finding the smallest discount that we can offer to prevent a customer from churning. Uplift modeling, as a whole, has applications outside of SaaS business. It is being integrated into personalized medicine to optimize treatment for each individual. It also has other applications inside the SaaS world in targeting customers with up-selling or cross-selling offers. ### 4.2 A more direct solution: point of cancellation offers Customer uplift modeling is a very powerful tool. Alas, it takes a lot of time, developer resources, and data, to do well. A more practical, and nearly as effective, method is placing relevant offers at your customers' point of cancellation. That is, just as your customer is about to cancel, make them an offer they can’t refuse. Since we’re presenting offers only when a customer is right on the verge of cancelling, we can rest assured we’re not bothering our “do not disturb” customers. We also know that we’re not handing out unnecessary discounts to customers who don’t need any extra incentive to stay subscribed (our second group, the “sure things”). #### 4.2.1 Making the right offer The offers that resonate best with your customers is, in large part, business dependent. 
Some businesses have seasonality attached to them, where offering a “pause” instead of outright cancellation may be very effective. In other businesses, temporary discounts can prove very effective. These offers can be optimized by dynamically generating them based on customer attributes such as: • The monthly cost of the customer’s subscription • How long the customer has been subscribed • The billing interval of the subscription (monthly, annual, etc.) Moreover, a cancellation survey can be used to directly ask a customer why they are cancelling. This very direct method is often the most effective. Once you know exactly why a customer is cancelling, it’s a lot easier to make them an offer that appeals to them. In testing out which offers are most effective at retaining customers, we are essentially doing uplift modeling. We see how each offer is received by different customer profiles and can optimize our offers appropriately. ## 5   Implementing a churn solution with Churnkey To recap, we’ve looked at the four types of customers (i) the do-not-disturbs, (ii) the lost causes, (iii) the sure things, and (iv) the persuadables. We discussed how we lose money on targeting any of the first three groups with retention campaigns and offers. And then we outlined how churn prediction fundamentally cannot find just the fourth group, the persuadables, due to the biasing intervention of the retention campaigns. We followed this up with a discussion on changing the question from “who will churn” to “whose churn can we prevent, and how can we prevent it”. Two ways of taking this into practice were summarized: (i) implementing a full-blown customer uplift model and (ii) using point-of-cancellation surveys + dynamic offers. About a year and half ago, the team at Wavve.co (myself, Nick Fogle, Baird Hall) turned our full focus to reducing churn. Most effectively, this included adding a point of cancellation survey and dynamically generated offers for customers who were about to churn. The results have been way better than we imagined. We’ve retained 30% of customers who click “Cancel Subscription”. Roughly 70% of those customers decide to pause their account instead of cancelling outright, with the remainder continuing their subscription after accepting a temporary discount or after a live chat with our customer support (typically to answer some technical question). For the last 6 months, the three of us have joined forces with the talented Scott Hurff to build out Churnkey, a drop-in solution to do the same for other SaaS companies. Below are the real results from the last 30 days of using Churnkey on Wavve: 895 customers clicked “Cancel Subscription”, and Churnkey has managed to prevent a remarkable 37% (331) from cancelling. I hate to be salesy, but it’s hard to understate the impact of reducing churn on your bottom line, and we’d love nothing more than to help you continue to grow your business. Our results with early customers are overwhelmingly positive, and we're onboarding more companies each week. Below, I’ll briefly outline the current feature set of Churnkey and leave it to you whether or not you want to get a head start on churn prevention 😉 ### 5.1 Custom cancellation survey One of the best things you can do for your business is to keep a pulse on why people are cancelling their subscriptions. These are people that at one point or another decided to give you money. Now, something has changed. 
Knowing what that something is not only helps you figure out how you can improve your product, it can also help you figure out how you can get that potential churner to stick around with a custom offer. ### 5.2 Dynamic offers at the point of cancellation Based on a customer’s response to the cancellation survey, you can choose which offers you present. This means you can targeting customers with the offer most relevant to them (and the offer that’s most likely to keep them subscribed). At this stage, relevance is essential, because your customer is signalling that their relationship with your product isn't what it used to be. Here is your chance to modify that relationship for the better. As more customers experience your cancellation flow, you can continually tweak and improve it to figure out what copy and what offers are most effective at preventing cancellations. Since Churnkey handles implementation, this means you can instantly make updates to your cancellation experience without needing to touch your own codebase. In our experience, continuous optimization helped us improve customer retention from around 20-25% to rate consistently above 30%. This additional 40% retention improvement has had a huge effect on MRR and long-term revenue potential. With Churnkey’s built-in dashboard, keeping your eye on what offers are converting—and which aren't—is easier than ever. ### 5.4 Direct integration with Stripe (with others on the way!) As great as Stripe’s documentation is, it still takes significant dev time to roll out dynamic discounts, subscription pauses, and track their performance. We’ve got your back on this one. Simply connect your Stripe account to Churnkey and we can do the heavy lifting for you while using Stripe’s best practices. And if you do require a bit more of a custom solution, you can hook into our event callbacks to easily handle your unique business logic. We’re excited to be leveraging the latest churn prevention patterns to help you growth your SaaS businesses! Don’t hesitate to reach out and talk about how our product can work for you, especially if: • Your current monthly customer churn rate is greater than 5% • You have more than 100 active subscriptions (if you have more than one cancellation a day, Churnkey will almost certainly give your growth ceiling a significant bump) • You use Stripe as a payment provider (support for more payment providers coming soon)
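As a closing technical aside, the simplest "two-model" flavour of the uplift modeling described in section 4.1 can be sketched roughly as follows (synthetic data and hypothetical column names; scikit-learn assumed - a sketch of the general idea, not Churnkey's implementation):

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in for data from a past retention experiment: per-customer features,
# whether they received an offer ("got_offer"), and whether they churned afterwards.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "months_subscribed": rng.integers(1, 36, n),
    "days_since_active": rng.integers(0, 60, n),
    "monthly_spend": rng.choice([10, 50, 100], n),
    "got_offer": rng.integers(0, 2, n),
})
df["churned"] = (rng.random(n) < 0.3).astype(int)  # toy labels

features = ["months_subscribed", "days_since_active", "monthly_spend"]
treated, control = df[df["got_offer"] == 1], df[df["got_offer"] == 0]

# Model churn probability separately for treated and untreated customers ("two-model" uplift).
m_treated = GradientBoostingClassifier().fit(treated[features], treated["churned"])
m_control = GradientBoostingClassifier().fit(control[features], control["churned"])

# Uplift = how much the offer is expected to lower each customer's churn probability.
# Strongly positive ~ persuadables; near zero ~ sure things / lost causes; negative ~ do-not-disturbs.
uplift = m_control.predict_proba(df[features])[:, 1] - m_treated.predict_proba(df[features])[:, 1]
print(uplift[:5])
```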
proofpile-shard-0030-111
{ "provenance": "003.jsonl.gz:112" }
# Tracing fake news footprints: characterizing social media messages by how they propagate This week we’ll be looking at some of the papers from WSDM’18. To kick things off I’ve chosen a paper tackling the problem of detecting fake news on social media. One of the challenges here is that fake news messages (the better ones anyway), are crafted to look just like real news. So classifying messages based on their content can be difficult. The big idea in ‘Tracking fake news footprints’ is that the way a message spreads through a network gives a strong indication of the kind of information it contains. A key driving force behind the diffusion of information is its spreaders. People tend to spread information that caters to their interests and/or fits their system of belief. Hence, similar messages usually lead to similar traces of information diffusion: they are more likely to be spread from similar sources, by similar people, and in similar sequences. Since the diffusion information is pervasively available on social networks, in this work, we aim to investigate how the traces of information diffusion in terms of spreaders can be exploited to categorize a message. In a demonstration of the power of this idea, the authors ignore the content of the message altogether in their current work, and still manage to classify fake news more accurately than previous state-of-the-art systems. Future work will look at adding content clues back into the mix to if the results can be improved even further. TraceMiner consists of two main phases. Since information spreads between nodes in the network, we need a way to represent a node. Instead of 1-of-n (one hot) encoding, in the first phase a lower-dimensionality embedding is learned to represent a node. In other words, we seek to capture the essential characteristics of a node (based on friendship and community memberships). In the second phase the diffusion trace of a message is modelled as a sequence of its spreader nodes. A sequence classifier built using LSTM-RNNs is used to model the sequence and its final output is aggregated using softmax to produce a predicted class label. The first step utilizes network structures to embed social media users into space of low dimensionality, which alleviates the data sparsity of utilizing social media users as features. The second step represents user sequences of information diffusion, which allows for the classification of propagation pathways. Let’s take a closer look at each of these two steps, and then we’ll wrap up by seeing how well the method works compared to previous systems. ### Learning an embedding for social media users Social networks of interest for fake news spreading tend to be those with the largest reach – i.e., they have a lot of users. Hence there is comparatively little information about the majority of users in any given trace. Just as most words appear infrequently, and a few words very frequently: So most users appear infrequently in information diffusion traces, and a small number of users appear much more frequently: (Both of these plots are log-log). [The user and word frequency plots] both follow a power-law distribution, which motivates us to embed users into low dimensional vectors, as how embedding vectors of words are used in natural language processing. Two state of the art approaches for user embedding in social graphs are LINE and DeepWalk. 
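As a rough illustration of the random-walk style of node embedding (a DeepWalk-flavoured sketch on a toy graph, assuming networkx and gensim; this is not the paper's implementation):

```python
import random
import networkx as nx
from gensim.models import Word2Vec

# Toy graph standing in for the follower network; a real social graph is far larger and sparser.
G = nx.karate_club_graph()

def random_walk(g, start, length=10):
    walk = [start]
    while len(walk) < length:
        nbrs = list(g.neighbors(walk[-1]))
        if not nbrs:
            break
        walk.append(random.choice(nbrs))
    return [str(n) for n in walk]

# Several walks per node play the role of "sentences"; nodes play the role of "words".
walks = [random_walk(G, n) for n in G.nodes() for _ in range(10)]
model = Word2Vec(walks, vector_size=32, window=5, min_count=0, sg=1, epochs=5, workers=1)

user_embedding = model.wv["0"]  # low-dimensional vector for user/node 0
```

The learned vectors would then stand in for one-hot user IDs when the diffusion traces are fed to the sequence classifier.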
LINE models first and second degree proximity, while DeepWalk captures node proximity using a random walk (nodes sampled together in one random walk preserve similarity in the latent space). These both capture the microscopic structure of networks. There is also larger mesoscopic structural information such as social dimensions and community membership we would like to capture (as is done for example by SocDim). … the ideal embedding method should be able to capture both local proximity and community structures… we propose a principled framework that directly models both kinds of information. The model combines an adjacency matrix and a two auxiliary community matrices, one capturing community membership and the other capturing representations of communities themselves as embeddings. Full details are in section 3.2 of the paper. ### Diffusion traces and sequence modeling The topology of information spreading is a tree rooted with the initial spreader (or perhaps a forest). Dealing directly with this tree structure leads to a state space explosion though: with n nodes (users), there can be $n^{n-2}$ different trees according to Cayley’s formula. The problem can be simplified by flattening the trees into a linear sequence of spreaders. For example, $\{(n_{35},t_1),(n_{12}, t_2), ...\}$ indicating that the message originate with node 35 at time $t_1$, was spread by node 12 at time $t_2$, and so on. The only requirement is that for any two nodes i and j in the sequence if i comes before j then i spread the message before j did. This reduces the number of possible diffusions networks to $n!$ (you know it’s a bad problem when we’re talking about reducing it to a factorial!). When the trees are flattened like this, some of the immediate causality is lost. For example, consider this tree and it’s flattened representation: The problem here is that the direct dependencies we’ve lost are important. “For example, the information flow from a controller account to botnet followers is a key signal in detecting crowdturfing.” They should still be close to each other in the sequence though. That’s something an RNN can cope with… We propose to use an RNN to sequentially accept each spreader of a message and recurrently project it into a latent space with the contextual information from previous spreaders in the sequence… In order to better encode the distant and separated dependencies, we further incorporate Long Short-Term Memory cells into the model, i.e., the LSTM-RNN. Now it’s likely that the account which first originated a message is a better predictor of its class than the last accounts to spread it. So the diffusion trace is sent through the LSTM-RNN in reverse so that the originator is seen as close as possible to when the final prediction is made. ### TraceMiner in action TraceMiner is evaluated alongside an SVM classifier trained on message content, and XGBoost fed with pre-processed content produced by the Stanford CoreNLP toolkit. “XGBoost presents the best results among all the content-based algorithms we tested.” Variants of TraceMiner that use only microscopic network structure for node embeddings (i.e., DeepWalk and LINE) are also tested, to assess the impact of the community embeddings. Experiments are run on two Twitter datasets: With the first dataset the challenge is to determine the news category, which is one of business; science and technology; entertainment; and medical. (On this task, you would expect the message-content based approaches to do well). 
The second dataset contains a 50:50 mix of genuine news and fake news, and the task is to tell them apart. For the first news categorisation task, a variety of models are tested, using differing percentages of the training data. Two different accuracy measures are reported: Macro-F1 is just the average F1 score across the four categories, and Micro-F1 is the harmonic mean of the precision and recall scores. (The point of the Micro-F1 score is to be less sensitive to class imbalances). On the Micro-F1 measure, TraceMiner does best across the board, and outperforms all other approaches on the Macro-F1 score until the amount of data used for training goes above the 80% threshold. XGBoost (which looks at message content) does best on this measure at the 80% mark. With fake news, as we would hope, the advantage of the TraceMiner approach is even more distinct (based on the hypothesis that content is less distinguishing between real and fake news): Unlike posts related to news where the content information is more self-explanatory, content of posts about fake news is less descriptive. Intentional spreaders of fake news may manipulate the content to make it look more similar to non-rumor information. Hence, TraceMiner can be useful for many emerging tasks in social media where adversarial attacks are present, such as detecting rumors and crowdturfing. Now, in the case of fake news it is perhaps of some utility to be able to say after a piece of news has spread that “yes, that was fake news.” But by then it will have done its work and exposed its message to many people. So it’s notable in the table above that TraceMiner still performs well even with little training data. … optimal performance with very little training information is of crucial significance for tasks which emphasise earliness. For example, detecting fake news at an early stage is way more meaningful than detecting it when 90% of its information is known. ## 8 thoughts on “Tracing fake news footprints: characterizing social media messages by how they propagate” 1. Missing image on latest iPhone safari? After the paragraph “When the trees are flattened like this, some of the immediate causality is lost. For example, consider this tree and it’s flattened representation:” there is no diagram for me, just the next text section. 1. Oops, yes. That sketch is missing from the email and the blog post. I’ll add it to the online blog post asap… Thanks, A.
# Comparison of the densities of water, potassium mercury iodide, and mercury

Comparing the densities of mercury, potassium tetraiodomercurate(II) solution, and water illustrates that barometers made from the three liquids would have very different heights. These three containers hold approximately equal masses (about 300 grams) of water, potassium mercury iodide solution, and mercury. A mercury barometer is about 2.5 feet tall. Potassium mercury iodide [potassium tetraiodomercurate(II) solution] has a different density and volume and can be used to make an eleven- or twelve-foot barometer. Water, the least dense of the three, would make a barometer roughly 34 feet tall.

Credits:

• Design and Demonstration
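The comparison rests on hydrostatic balance, h = P_atm / (ρ g). A small sketch of that calculation, using rough handbook densities (the values below are assumptions for illustration, not measurements from the demonstration):

```python
# Barometer column height h = P_atm / (rho * g); densities are approximate.
P_ATM = 101_325        # Pa
G = 9.81               # m/s^2
FT_PER_M = 3.281

densities = {          # kg/m^3; the K2HgI4 figure assumes a concentrated solution
    "mercury": 13_534,
    "potassium tetraiodomercurate(II) solution": 3_100,
    "water": 998,
}

for liquid, rho in densities.items():
    h = P_ATM / (rho * G)
    print(f"{liquid:45s} {h:5.2f} m  ({h * FT_PER_M:4.1f} ft)")
# mercury ~0.76 m (~2.5 ft), the iodide solution ~3.3 m (~11 ft), water ~10.3 m (~34 ft)
```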
# Calculate this Spring's Potential Energy Homework Statement: If we have object with m = 1 hanging on a spring and elongation is h = 0.02. What is the potential energy of the spring after it's being stretched? Relevant Equations: E = mgh I know that gravitational potential energy is decreased by E = -m g h = -1 10 0.02 = -0.2. So, the spring potential energy must be E=0.2 (Joule). However, in the answer's sheet I have E=0.1 What mistake do I make? Last edited: PhanthomJay Homework Helper Gold Member The elongation is apparently 0.02 m when the mass is held by your hand and slowly allowed to reach its rest position when you release your hand. Your hand does work in this case which you have neglected. Or alternatively, if you release the mass at h = 0 , it will have kinetic energy as well as it passes h= 0.02. Look at another approach. I'm going to assume m = 1 kg, h = 0.02 m, and g = 10 m/s2. The force that the spring exerts on the weight is not constant. It depends on how far the spring is stretched. kuruman Homework Helper Gold Member The question is ill posed. The zero of potential energy can be chosen at will. Without specifying where the zero is, the question cannot be answered. To be specific about the elastic energy, in a horizontal spring-mass system the zero is normally chosen at the relaxed position of the spring; in a vertical spring-mass system the zero is normally chosen at the equilibrium position where the net force on the hanging mass is zero. In this question we are asked to find the potential energy of the spring, not the mass, so mgh is not part of the answer. archaic Elastic potential energy is not the opposite of gravitational potential energy. In the case of vertical springs, however, the weight of the object and the elastic force are. Recall that ##|\vec F_s|=kx=mg=|\vec F_g|##, that ##U_s=\frac{1}{2}kx^2## and that ##U_g=-mgx## where ##x## is the elongation of the spring. Try to find a relation between both potential energies. Hint : ##\frac{U_g}{U_s} = ?## NB : here we choose ##U_g=-mg\int_{h_i}^{h_f}\mathrm{dh}##. I think you are making mistake here. If hand doesnt move - displacement is 0. A = F S = F * 0 = 0 I'm going to assume m = 1 kg, h = 0.02 m, and g = 10 m/s2. The force that the spring exerts on the weight is not constant. It depends on how far the spring is stretched. Spring is stretched by h = 0.02 or ## \Delta x = 0.02 ## The question is ill posed. The zero of potential energy can be chosen at will. Without specifying where the zero is, the question cannot be answered. To be specific about the elastic energy, in a horizontal spring-mass system the zero is normally chosen at the relaxed position of the spring; in a vertical spring-mass system the zero is normally chosen at the equilibrium position where the net force on the hanging mass is zero. In this question we are asked to find the potential energy of the spring, not the mass, so mgh is not part of the answer. Maybe I translated poorly. I'm asking what is the potential energy of the spring After it's being stretched (I will change it in the problem statement also, thanks). 
So, I'm just calculating the change of potential energy - so it doesn't really matter where I take zero Guys it's Introductory physics homework - it should be easy, don't overthink please :) I'm just missing something really easy Guys it's Introductory physics homework - it should be easy, don't overthink please :) I'm just missing something really easy If you follow the advice I gave you, you'll find that ##U_g=-2U_s##, you only need to plug in what you have now. Try to do it yourself, though. Another hint : ##kx^2=kxx##, ##|\vec F_g|=|\vec F_s|##. Elastic potential energy is not the opposite of gravitational potential energy. In the case of vertical springs, however, the weight of the object and the elastic force are. Recall that ##|\vec F_s|=kx=mg=|\vec F_g|##, that ##U_s=\frac{1}{2}kx^2## and that ##U_g=-mgx## where ##x## is the elongation of the spring. Try to find a relation between both potential energies. Hint : ##\frac{U_g}{U_s} = ?## NB : here we choose ##U_g=-mg\int_{h_i}^{h_f}\mathrm{dh}##. EEristavi PhanthomJay Homework Helper Gold Member I think you are making mistake here. If hand doesnt move - displacement is 0. A = F S = F * 0 = 0 As noted by kuruman, the question is ill posed. I am assuming that when the mass is first placed onto the hanging spring , it is held there by your hand and slowly lowered to the point at 0.02 m where when you release your hand, the mass doesn’t move and the spring stops stretching. Your hand does work because it is displaced 0.02 m in this process. So you can’t say that spring final PE is equal to initial gravitational PE because other work is done. You should explore the fact that if the mass is suddenly released by your hand at the very start and you immediately let go without doing work, that the spring will displace 0.04 m before it stops and then rise up again and down again until ultimately it is damped out and settles at the 0.02 m mark. Your hand does work because it is displaced 0.02 m in this process. So you can’t say that spring final PE is equal to initial gravitational PE because other work is done. This is the situation when you move your reference point or potential's zero with mass. My reference point was not changing with respect to "Earth". These are 2 different points of view and both are correct. Hope you understood what i'm saying. So, This doesn't mean that my approach isn't correct. ##Ug=−2Us## Ok. Now let me think: why do we have times 2 with Spring potential If you follow the advice I gave you, you'll find that ##U_g=-2U_s##, you only need to plug in what you have now. Try to do it yourself, though. Another hint : ##kx^2=kxx##, ##|\vec F_g|=|\vec F_s|##. I now that if my answer is divided by 2 - it's correct. I cant understand why there is 2.... archaic Our variables : ##|\vec F_s|=kx=mg=|\vec F_g|##, ##U_s=\frac{1}{2}kx^2## and ##U_g=-mgx##. $$\frac{U_g}{U_s}=\frac{-mgx}{\frac{1}{2}kx^2}=\frac{-mgx}{\frac{1}{2}kxx}$$ From the first equation we have ##kx=mg##. $$\frac{U_g}{U_s}=\frac{-mgx}{\frac{1}{2}(kx)x}=\frac{-mgx}{\frac{1}{2}mgx}=-\frac{1}{\frac{1}{2}}=-2$$ Or, in other words, gravitational potential energy ##U_g=-2U_s## elastic (spring's) potential energy. Last edited: EEristavi Woow... now I get it!!!!! Very nice! Now I have to think what's the physics behind it (this I will manage on my own).. :D Thank you very much! kuruman Homework Helper Gold Member Woow... now I get it!!!!! Very nice! Now I have to think what's the physics behind it (this I will manage on my own).. :D Thank you very much! 
You have to understand that the ratio of -2 is not always the case but only if the zero of potential gravitational energy is taken at the same point as the zero of elastic energy. In general, $$\frac{U_g}{U_s}=-\frac{mg(x-x_{0g})}{\frac{1}{2}k{\left(x^2-x_{0s}^2\right)}}$$where ##x_{0g}## and ##x_{0s}## are, respectively, the points where the gravitational and spring potential energy are zero. The ratio is -2 only if ##x_{0g}=x_{0s}=0##. Last edited: archaic and EEristavi You have to understand that the ratio of -2 is not always the case but only if the zero of potential gravitational energy is taken at the same point as the zero of elastic energy. In general, $$\frac{U_g}{U_s}=\frac{mg(x-x_{0g})}{\frac{1}{2}k{\left(x^2-x_{0s}^2\right)}}$$where ##x_{0g}## and ##x_{0s}## are, respectively, the points where the gravitational and spring potential energy are zero. The ratio is -2 only if ##x_{0g}=x_{0s}=0##. In my example, ##x## is the elongation, I should've written it as You have to understand that the ratio of -2 is not always the case but only if the zero of potential gravitational energy is taken at the same point as the zero of elastic energy. In general, $$\frac{U_g}{U_s}=\frac{mg(x-x_{0g})}{\frac{1}{2}k{\left(x^2-x_{0s}^2\right)}}$$where ##x_{0g}## and ##x_{0s}## are, respectively, the points where the gravitational and spring potential energy are zero. The ratio is -2 only if ##x_{0g}=x_{0s}=0##. Isn't the general formula for EPE ##U_s=\int_{l_0}^lk(x-l_0)dx=\frac{k(l^2-l_0^2)}{2}-kl_0(l-l_0)## if we are taking points not displacement? I think you should specify that you're taking the spring's ##l_0## point as ##x=0##. jbriggs444 Homework Helper I cant understand why there is 2.... That 2 is the same as the 2 in the formula for the area of a triangle: a = 1/2 base * height -- average width times total height. The work done against the spring as it elongates to its final extension varies from zero to the full force. The average force is equal to 1/2 the full force. The work done against the spring is equal to the average force times the distance extended. It is also the same as the 2 in ##\int x\ dx = \frac{1}{2}x^2## Which is, by no coincidence, the same as the 2 in the formula for potential energy of a spring, ##E=\frac{1}{2}kx^2## Guys I understood it mathematically. However, I cant figure it out as "Change of energy concept" - If the change of potential energy is ## mg \Delta x##, why the spring's potential energy isn't increased the same amount. As I understand we have a closed system (Or Isn't it?.....). This is the root of my mistake kuruman Homework Helper Gold Member Are you asking why mechanical energy is not conserved? To examine mechanical energy conservation, you need to have mass ##m## move from point A to point B. What are points A and B in this case, how does the mass move from A to B and what is the mechanical energy at each of the two points? I have 2 Questions: 1. is mechanical energy conserved? 2. If it's not, why? Note: As I see it's not conserved. Am I right? kuruman Homework Helper Gold Member I have 2 Questions: 1. is mechanical energy conserved? 2. If it's not, why? Note: As I see it's not conserved. Am I right? What are points A and B in this case point A - where spring isn't stretched yet. Point B - where spring is stretched by ## \Delta x ## how does the mass move from A to B It moves because of the gravity. 
The other force involved is the spring force, F = kx.

what is the mechanical energy at each of the two points

If we take the "starting point", i.e. zero, to be point A: mechanical energy at point A – 0; mechanical energy at point B – ## mg \Delta x##.

P.S. One way or another, the change of energy will always be ## mg \Delta x##.

kuruman
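To make the bookkeeping in this thread concrete, here is a quick numerical check using the thread's numbers (m = 1 kg, g = 10 m/s², x = 0.02 m) and the equilibrium condition kx = mg; the 0.1 J shortfall is the part removed by the hand (slow lowering) or present as kinetic energy (sudden release):

```python
m, g, x = 1.0, 10.0, 0.02      # kg, m/s^2, m
k = m * g / x                  # equilibrium condition k*x = m*g  ->  k = 500 N/m

U_spring = 0.5 * k * x**2      # elastic PE stored at the stretched position
dU_grav = -m * g * x           # change in gravitational PE of the mass

print(round(U_spring, 3))             # 0.1 J, the answer sheet's value
print(round(dU_grav, 3))              # -0.2 J
print(round(dU_grav + U_spring, 3))   # -0.1 J: removed by the hand when lowering slowly,
                                      # or carried as kinetic energy if released suddenly
```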
plottools - Maple Programming Help Home : Support : Online Help : Graphics : Packages : Plot Tools : plottools/curve plottools curve generate 2-D or 3-D plot object for a curve Calling Sequence curve([[x1,y1], [x2, y2], ...], options) curve([[x1, y1, z1], [x2, y2, z2], ...], options) Parameters [x1, y1], [x2, y2], ... - list of points in 2-D [x1, y1, z1], [x2, y2, z2], ... - list of points in 3-D options - (optional) equations of the form option=value. For a complete list, see plot/options and plot3d/options. Description • The curve command creates a two- or three-dimensional plot data object, which when displayed is a curve joining points in the specified list. The first argument to curve must be a list of points.  They can be either 2-D or 3-D. • The plot data object produced by the curve command can be used in a PLOT or PLOT3D data structure, or displayed using the plots[display] command. • Remaining arguments are interpreted as options, which are specified as equations of the form option = value.  For more information, see plottools, plot/options and plot3d/options. Examples > $\mathrm{with}\left(\mathrm{plottools}\right):$ > $\mathrm{with}\left(\mathrm{plots}\right):$ > $\mathrm{display}\left(\mathrm{curve}\left(\left[\left[0,0\right],\left[3,4\right]\right],\mathrm{color}=\mathrm{red},\mathrm{linestyle}=\mathrm{dash},\mathrm{thickness}=2\right)\right)$ > $\mathrm{display}\left(\mathrm{curve}\left(\left[\left[0,0,0\right],\left[1,1,1\right],\left[1,1,0\right],\left[1,2,1\right],\left[0,0,0\right]\right]\right),\mathrm{axes}=\mathrm{frame},\mathrm{color}=\mathrm{green},\mathrm{orientation}=\left[-70,40\right],\mathrm{thickness}=3\right)$
# What are the resistors bridging connections between devices generally implemented for?

Every time I see this type of implementation of a resistor I tend to question its use (the 4.7k in this case). Consider a case where the RFM module is not connected to the 4.7k resistors. Clearly we have a voltage divider configuration that gives 0 V at the upper terminal of the 10k resistor when the output of the MCU is LOW. However, when the output of the MCU is HIGH, we would have approx. 3.4 V at the output (which would generally be considered a logic HIGH as well). However, how does the connection of the RFM module affect this configuration? Am I looking at this all wrong, and the purpose of the two sets of resistors is not a voltage divider? Are the pins of an IC generally high-Z, so that they do not affect the voltage-divider configuration shown above? In general, what are these types of "bridging" resistors used for? I can see that in certain applications they are used for current limiting (like the case of a resistor connecting the base of a transistor), but I have seen other instances where I am unsure.

• 10kΩ || high-Z ≈ 10kΩ Connecting the voltage divider to a high-Z input should have a negligible effect on the voltage divider voltage. Feb 15, 2014 at 4:47
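The comment's point can be checked with a short calculation: loading the divider with a high-Z input barely changes the divided voltage. The 4.7 k / 10 k values and the roughly 3.4 V HIGH level are from the question (implying a 5 V drive); the 1 MΩ input resistance is an assumed, typical order of magnitude for a CMOS input, not a value from the schematic.

```python
# Unloaded divider: 5 V through 4.7 k into 10 k to ground -> ~3.4 V (as in the question).
# Loaded divider: the module pin modelled as a resistance R_IN to ground in
# parallel with the 10 k.  R_IN = 1 Mohm is an assumed, typical high-Z input.
def divider(v_in, r_top, r_bottom):
    return v_in * r_bottom / (r_top + r_bottom)

def parallel(r1, r2):
    return r1 * r2 / (r1 + r2)

V_HIGH, R_TOP, R_BOT, R_IN = 5.0, 4.7e3, 10e3, 1e6

print(round(divider(V_HIGH, R_TOP, R_BOT), 2))                   # ~3.40 V unloaded
print(round(divider(V_HIGH, R_TOP, parallel(R_BOT, R_IN)), 2))   # ~3.39 V with the pin attached
```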
OpenCV  4.7.0-dev Open Source Computer Vision Graph API: Pixelwise operations ## Functions GMat cv::gapi::bitwise_and (const GMat &src1, const GMat &src2) computes bitwise conjunction of the two matrixes (src1 & src2) Calculates the per-element bit-wise logical conjunction of two matrices of the same size. More... GMat cv::gapi::bitwise_and (const GMat &src1, const GScalar &src2) GMat cv::gapi::bitwise_not (const GMat &src) Inverts every bit of an array. More... GMat cv::gapi::bitwise_or (const GMat &src1, const GMat &src2) computes bitwise disjunction of the two matrixes (src1 | src2) Calculates the per-element bit-wise logical disjunction of two matrices of the same size. More... GMat cv::gapi::bitwise_or (const GMat &src1, const GScalar &src2) GMat cv::gapi::bitwise_xor (const GMat &src1, const GMat &src2) computes bitwise logical "exclusive or" of the two matrixes (src1 ^ src2) Calculates the per-element bit-wise logical "exclusive or" of two matrices of the same size. More... GMat cv::gapi::bitwise_xor (const GMat &src1, const GScalar &src2) GMat cv::gapi::cmpEQ (const GMat &src1, const GMat &src2) Performs the per-element comparison of two matrices checking if elements from first matrix are equal to elements in second. More... GMat cv::gapi::cmpEQ (const GMat &src1, const GScalar &src2) GMat cv::gapi::cmpGE (const GMat &src1, const GMat &src2) Performs the per-element comparison of two matrices checking if elements from first matrix are greater or equal compare to elements in second. More... GMat cv::gapi::cmpGE (const GMat &src1, const GScalar &src2) GMat cv::gapi::cmpGT (const GMat &src1, const GMat &src2) Performs the per-element comparison of two matrices checking if elements from first matrix are greater compare to elements in second. More... GMat cv::gapi::cmpGT (const GMat &src1, const GScalar &src2) GMat cv::gapi::cmpLE (const GMat &src1, const GMat &src2) Performs the per-element comparison of two matrices checking if elements from first matrix are less or equal compare to elements in second. More... GMat cv::gapi::cmpLE (const GMat &src1, const GScalar &src2) GMat cv::gapi::cmpLT (const GMat &src1, const GMat &src2) Performs the per-element comparison of two matrices checking if elements from first matrix are less than elements in second. More... GMat cv::gapi::cmpLT (const GMat &src1, const GScalar &src2) GMat cv::gapi::cmpNE (const GMat &src1, const GMat &src2) Performs the per-element comparison of two matrices checking if elements from first matrix are not equal to elements in second. More... GMat cv::gapi::cmpNE (const GMat &src1, const GScalar &src2) GMat cv::gapi::select (const GMat &src1, const GMat &src2, const GMat &mask) Select values from either first or second of input matrices by given mask. The function set to the output matrix either the value from the first input matrix if corresponding value of mask matrix is 255, or value from the second input matrix (if value of mask matrix set to 0). More... gapi_math ## ◆ bitwise_and() [1/2] GMat cv::gapi::bitwise_and ( const GMat & src1, const GMat & src2 ) Python: cv.gapi.bitwise_and(src1, src2) -> retval #include <opencv2/gapi/core.hpp> computes bitwise conjunction of the two matrixes (src1 & src2) Calculates the per-element bit-wise logical conjunction of two matrices of the same size. In case of floating-point matrices, their machine-specific bit representations (usually IEEE754-compliant) are used for the operation. In case of multi-channel matrices, each channel is processed independently. 
Output matrix must have the same size and depth as the input matrices. Supported matrix data types are CV_8UC1, CV_8UC3, CV_16UC1, CV_16SC1, CV_32FC1. Note Function textual ID is "org.opencv.core.pixelwise.bitwise_and" Parameters src1 first input matrix. src2 second input matrix. ## ◆ bitwise_and() [2/2] GMat cv::gapi::bitwise_and ( const GMat & src1, const GScalar & src2 ) Python: cv.gapi.bitwise_and(src1, src2) -> retval #include <opencv2/gapi/core.hpp> This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. Note Function textual ID is "org.opencv.core.pixelwise.bitwise_andS" Parameters src1 first input matrix. src2 scalar, which will be per-lemenetly conjuncted with elements of src1. ## ◆ bitwise_not() GMat cv::gapi::bitwise_not ( const GMat & src ) Python: cv.gapi.bitwise_not(src) -> retval #include <opencv2/gapi/core.hpp> Inverts every bit of an array. The function bitwise_not calculates per-element bit-wise inversion of the input matrix: $\texttt{dst} (I) = \neg \texttt{src} (I)$ In case of floating-point matrices, their machine-specific bit representations (usually IEEE754-compliant) are used for the operation. In case of multi-channel matrices, each channel is processed independently. Output matrix must have the same size and depth as the input matrix. Supported matrix data types are CV_8UC1, CV_8UC3, CV_16UC1, CV_16SC1, CV_32FC1. Note Function textual ID is "org.opencv.core.pixelwise.bitwise_not" Parameters src input matrix. ## ◆ bitwise_or() [1/2] GMat cv::gapi::bitwise_or ( const GMat & src1, const GMat & src2 ) Python: cv.gapi.bitwise_or(src1, src2) -> retval #include <opencv2/gapi/core.hpp> computes bitwise disjunction of the two matrixes (src1 | src2) Calculates the per-element bit-wise logical disjunction of two matrices of the same size. In case of floating-point matrices, their machine-specific bit representations (usually IEEE754-compliant) are used for the operation. In case of multi-channel matrices, each channel is processed independently. Output matrix must have the same size and depth as the input matrices. Supported matrix data types are CV_8UC1, CV_8UC3, CV_16UC1, CV_16SC1, CV_32FC1. Note Function textual ID is "org.opencv.core.pixelwise.bitwise_or" Parameters src1 first input matrix. src2 second input matrix. ## ◆ bitwise_or() [2/2] GMat cv::gapi::bitwise_or ( const GMat & src1, const GScalar & src2 ) Python: cv.gapi.bitwise_or(src1, src2) -> retval #include <opencv2/gapi/core.hpp> This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. Note Function textual ID is "org.opencv.core.pixelwise.bitwise_orS" Parameters src1 first input matrix. src2 scalar, which will be per-lemenetly disjuncted with elements of src1. ## ◆ bitwise_xor() [1/2] GMat cv::gapi::bitwise_xor ( const GMat & src1, const GMat & src2 ) Python: cv.gapi.bitwise_xor(src1, src2) -> retval #include <opencv2/gapi/core.hpp> computes bitwise logical "exclusive or" of the two matrixes (src1 ^ src2) Calculates the per-element bit-wise logical "exclusive or" of two matrices of the same size. In case of floating-point matrices, their machine-specific bit representations (usually IEEE754-compliant) are used for the operation. In case of multi-channel matrices, each channel is processed independently. Output matrix must have the same size and depth as the input matrices. Supported matrix data types are CV_8UC1, CV_8UC3, CV_16UC1, CV_16SC1, CV_32FC1. 
Note Function textual ID is "org.opencv.core.pixelwise.bitwise_xor" Parameters src1 first input matrix. src2 second input matrix. ## ◆ bitwise_xor() [2/2] GMat cv::gapi::bitwise_xor ( const GMat & src1, const GScalar & src2 ) Python: cv.gapi.bitwise_xor(src1, src2) -> retval #include <opencv2/gapi/core.hpp> This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. Note Function textual ID is "org.opencv.core.pixelwise.bitwise_xorS" Parameters src1 first input matrix. src2 scalar, for which per-lemenet "logical or" operation on elements of src1 will be performed. ## ◆ cmpEQ() [1/2] GMat cv::gapi::cmpEQ ( const GMat & src1, const GMat & src2 ) Python: cv.gapi.cmpEQ(src1, src2) -> retval #include <opencv2/gapi/core.hpp> Performs the per-element comparison of two matrices checking if elements from first matrix are equal to elements in second. The function compares elements of two matrices src1 and src2 of the same size: $\texttt{dst} (I) = \texttt{src1} (I) == \texttt{src2} (I)$ When the comparison result is true, the corresponding element of output array is set to 255. The comparison operations can be replaced with the equivalent matrix expressions: $\texttt{dst} = \texttt{src1} == \texttt{src2}$ Output matrix of depth CV_8U must have the same size and the same number of channels as the input matrices. Supported input matrix data types are CV_8UC1, CV_8UC3, CV_16UC1, CV_16SC1, CV_32FC1. Note Function textual ID is "org.opencv.core.pixelwise.compare.cmpEQ" Parameters src1 first input matrix. src2 second input matrix/scalar of the same depth as first input matrix. min, max, threshold, cmpNE ## ◆ cmpEQ() [2/2] GMat cv::gapi::cmpEQ ( const GMat & src1, const GScalar & src2 ) Python: cv.gapi.cmpEQ(src1, src2) -> retval #include <opencv2/gapi/core.hpp> This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. Note Function textual ID is "org.opencv.core.pixelwise.compare.cmpEQScalar" ## ◆ cmpGE() [1/2] GMat cv::gapi::cmpGE ( const GMat & src1, const GMat & src2 ) Python: cv.gapi.cmpGE(src1, src2) -> retval #include <opencv2/gapi/core.hpp> Performs the per-element comparison of two matrices checking if elements from first matrix are greater or equal compare to elements in second. The function compares elements of two matrices src1 and src2 of the same size: $\texttt{dst} (I) = \texttt{src1} (I) >= \texttt{src2} (I)$ When the comparison result is true, the corresponding element of output array is set to 255. The comparison operations can be replaced with the equivalent matrix expressions: $\texttt{dst} = \texttt{src1} >= \texttt{src2}$ Output matrix of depth CV_8U must have the same size and the same number of channels as the input matrices. Supported input matrix data types are CV_8UC1, CV_8UC3, CV_16UC1, CV_16SC1, CV_32FC1. Note Function textual ID is "org.opencv.core.pixelwise.compare.cmpGE" Parameters src1 first input matrix. src2 second input matrix/scalar of the same depth as first input matrix. min, max, threshold, cmpLE, cmpGT, cmpLT ## ◆ cmpGE() [2/2] GMat cv::gapi::cmpGE ( const GMat & src1, const GScalar & src2 ) Python: cv.gapi.cmpGE(src1, src2) -> retval #include <opencv2/gapi/core.hpp> This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. 
Note Function textual ID is "org.opencv.core.pixelwise.compare.cmpLGEcalar" ## ◆ cmpGT() [1/2] GMat cv::gapi::cmpGT ( const GMat & src1, const GMat & src2 ) Python: cv.gapi.cmpGT(src1, src2) -> retval #include <opencv2/gapi/core.hpp> Performs the per-element comparison of two matrices checking if elements from first matrix are greater compare to elements in second. The function compares elements of two matrices src1 and src2 of the same size: $\texttt{dst} (I) = \texttt{src1} (I) > \texttt{src2} (I)$ When the comparison result is true, the corresponding element of output array is set to 255. The comparison operations can be replaced with the equivalent matrix expressions: $\texttt{dst} = \texttt{src1} > \texttt{src2}$ Output matrix of depth CV_8U must have the same size and the same number of channels as the input matrices/matrix. Supported input matrix data types are CV_8UC1, CV_16UC1, CV_16SC1, CV_32FC1. Note Function textual ID is "org.opencv.core.pixelwise.compare.cmpGT" Parameters src1 first input matrix. src2 second input matrix/scalar of the same depth as first input matrix. min, max, threshold, cmpLE, cmpGE, cmpLT ## ◆ cmpGT() [2/2] GMat cv::gapi::cmpGT ( const GMat & src1, const GScalar & src2 ) Python: cv.gapi.cmpGT(src1, src2) -> retval #include <opencv2/gapi/core.hpp> This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. Note Function textual ID is "org.opencv.core.pixelwise.compare.cmpGTScalar" ## ◆ cmpLE() [1/2] GMat cv::gapi::cmpLE ( const GMat & src1, const GMat & src2 ) Python: cv.gapi.cmpLE(src1, src2) -> retval #include <opencv2/gapi/core.hpp> Performs the per-element comparison of two matrices checking if elements from first matrix are less or equal compare to elements in second. The function compares elements of two matrices src1 and src2 of the same size: $\texttt{dst} (I) = \texttt{src1} (I) <= \texttt{src2} (I)$ When the comparison result is true, the corresponding element of output array is set to 255. The comparison operations can be replaced with the equivalent matrix expressions: $\texttt{dst} = \texttt{src1} <= \texttt{src2}$ Output matrix of depth CV_8U must have the same size and the same number of channels as the input matrices. Supported input matrix data types are CV_8UC1, CV_8UC3, CV_16UC1, CV_16SC1, CV_32FC1. Note Function textual ID is "org.opencv.core.pixelwise.compare.cmpLE" Parameters src1 first input matrix. src2 second input matrix/scalar of the same depth as first input matrix. min, max, threshold, cmpGT, cmpGE, cmpLT ## ◆ cmpLE() [2/2] GMat cv::gapi::cmpLE ( const GMat & src1, const GScalar & src2 ) Python: cv.gapi.cmpLE(src1, src2) -> retval #include <opencv2/gapi/core.hpp> This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. Note Function textual ID is "org.opencv.core.pixelwise.compare.cmpLEScalar" ## ◆ cmpLT() [1/2] GMat cv::gapi::cmpLT ( const GMat & src1, const GMat & src2 ) Python: cv.gapi.cmpLT(src1, src2) -> retval #include <opencv2/gapi/core.hpp> Performs the per-element comparison of two matrices checking if elements from first matrix are less than elements in second. The function compares elements of two matrices src1 and src2 of the same size: $\texttt{dst} (I) = \texttt{src1} (I) < \texttt{src2} (I)$ When the comparison result is true, the corresponding element of output array is set to 255. 
The comparison operations can be replaced with the equivalent matrix expressions: $\texttt{dst} = \texttt{src1} < \texttt{src2}$ Output matrix of depth CV_8U must have the same size and the same number of channels as the input matrices/matrix. Supported input matrix data types are CV_8UC1, CV_8UC3, CV_16UC1, CV_16SC1, CV_32FC1. Note Function textual ID is "org.opencv.core.pixelwise.compare.cmpLT" Parameters src1 first input matrix. src2 second input matrix/scalar of the same depth as first input matrix. min, max, threshold, cmpLE, cmpGE, cmpGT ## ◆ cmpLT() [2/2] GMat cv::gapi::cmpLT ( const GMat & src1, const GScalar & src2 ) Python: cv.gapi.cmpLT(src1, src2) -> retval #include <opencv2/gapi/core.hpp> This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. Note Function textual ID is "org.opencv.core.pixelwise.compare.cmpLTScalar" ## ◆ cmpNE() [1/2] GMat cv::gapi::cmpNE ( const GMat & src1, const GMat & src2 ) Python: cv.gapi.cmpNE(src1, src2) -> retval #include <opencv2/gapi/core.hpp> Performs the per-element comparison of two matrices checking if elements from first matrix are not equal to elements in second. The function compares elements of two matrices src1 and src2 of the same size: $\texttt{dst} (I) = \texttt{src1} (I) != \texttt{src2} (I)$ When the comparison result is true, the corresponding element of output array is set to 255. The comparison operations can be replaced with the equivalent matrix expressions: $\texttt{dst} = \texttt{src1} != \texttt{src2}$ Output matrix of depth CV_8U must have the same size and the same number of channels as the input matrices. Supported input matrix data types are CV_8UC1, CV_8UC3, CV_16UC1, CV_16SC1, CV_32FC1. Note Function textual ID is "org.opencv.core.pixelwise.compare.cmpNE" Parameters src1 first input matrix. src2 second input matrix/scalar of the same depth as first input matrix. min, max, threshold, cmpEQ ## ◆ cmpNE() [2/2] GMat cv::gapi::cmpNE ( const GMat & src1, const GScalar & src2 ) Python: cv.gapi.cmpNE(src1, src2) -> retval #include <opencv2/gapi/core.hpp> This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. Note Function textual ID is "org.opencv.core.pixelwise.compare.cmpNEScalar" ## ◆ select() GMat cv::gapi::select ( const GMat & src1, const GMat & src2, const GMat & mask ) Python: #include <opencv2/gapi/core.hpp> Select values from either first or second of input matrices by given mask. The function set to the output matrix either the value from the first input matrix if corresponding value of mask matrix is 255, or value from the second input matrix (if value of mask matrix set to 0). Input mask matrix must be of CV_8UC1 type, two other inout matrices and output matrix should be of the same type. The size should be the same for all input and output matrices. Supported input matrix data types are CV_8UC1, CV_8UC3, CV_16UC1, CV_16SC1, CV_32FC1. Note Function textual ID is "org.opencv.core.pixelwise.select" Parameters src1 first input matrix. src2 second input matrix. mask mask input matrix.
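A short usage sketch of the operations documented above, using the Python signatures listed (cv.gapi.cmpGT, cv.gapi.select). The graph-construction helpers, cv.GMat, cv.GComputation, cv.GIn/cv.GOut and cv.gin, are assumed from OpenCV's G-API Python bindings and are not part of this excerpt, so treat the scaffolding as illustrative rather than canonical.

```python
import cv2 as cv
import numpy as np

# Build a small G-API graph: per-element compare, then select.
g1 = cv.GMat()
g2 = cv.GMat()
g_mask = cv.gapi.cmpGT(g1, g2)           # 255 where g1 > g2, else 0 (CV_8UC1 mask)
g_out = cv.gapi.select(g1, g2, g_mask)   # take g1 where mask == 255, g2 elsewhere

comp = cv.GComputation(cv.GIn(g1, g2), cv.GOut(g_out))

a = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
b = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
out = comp.apply(cv.gin(a, b))           # for a single graph output, apply() returns that array
print(np.array_equal(np.asarray(out), np.maximum(a, b)))   # per-element maximum of a and b
```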
# Hausdorff space not completely Hausdorff On the set $\mathbbmss{Z}^{+}$ of strictly positive integers, let $a$ and $b$ be two different integers $b\neq 0$ and consider the set $S(a,b)=\{a+kb\in\mathbbmss{Z}^{+}\colon k\in\mathbbmss{Z}\}$ such set is the infinite arithmetic progression of positive integers with difference $b$ and containing $a$. The collection of all $S(a,b)$ sets is a basis for a topology on $\mathbbmss{Z}^{+}$. We will use a coarser topology induced by the following basis: $\mathbbmss{B}=\{S(a,b):\gcd(a,b)=1\}$ ## The collection $\mathbbmss{B}$ is basis for a topology on $\mathbbmss{Z}^{+}$ We first prove such collection is a basis. Suppose $x\in S(a,b)\cap S(c,d)$. By Euclid’s algorithm we have $S(a,b)=S(x,b)$ and $S(c,d)=S(x,d)$ and $x\in S(x,bd)\subset S(x,d)\cap S(c,d)$ besides, since $\gcd(x,b)=1$ and $\gcd(x,d)=1$ then $\gcd(x,bd)=1$ so $x$ and $bd$ are coprimes and $S(x,bd)\in\mathbbmss{B}$. This concludes the proof that $\mathbbmss{B}$ is indeed a basis for a topology on $\mathbbmss{Z}^{+}$. ## The topology on $\mathbbmss{Z}^{+}$ induced by $\mathbbmss{B}$ is Hausdorff Let $m,n$ integers two different integers. We need to show that there are open disjoint neighborhoods $U_{m}$ and $U_{n}$ such that $m\in U_{m}$ and $n\in U_{n}$, but it suffices to show the existence of disjoint basic open sets containing $m$ and $n$. Taking $d=|m-n|$, we can find an integer $t$ such that $t>d$ and such that $\gcd(m,t)=\gcd(n,t)=1$. A way to accomplish this is to take any multiple of $mn$ greater than $d$ and add $1$. The basic open sets $S(m,t)$ and $S(n,t)$ are disjoint, because they have common elements if and only if the diophantine equation $m+tx=n+ty$ has solutions. But it cannot have since $t(x-y)=n-m$ implies that $t$ divides $n-m$ but $t>|n-m|$ makes it impossible. We conclude that $S(m,t)\cap S(n,t)=\emptyset$ and this means that $\mathbbmss{Z}^{+}$ becomes a Hausdorff space with the given topology. ## Some properties of $\overline{S(a,b)}$ We need to determine first some facts about $\overline{S(a,b)}$. in order to take an example, consider $S(3,5)$ first. Notice that if we had considered the former topology (where in $S(a,b)$, $a$ and $b$ didn’t have to be coprime) the complement of $S(3,5)$ would have been $S(4,5)\cup S(5,5)\cup S(6,5)\cup S(7,5)$ which is open, and so $S(3,5)$ would have been closed. In general, in the finer topology, all basic sets were both open and closed. However, this is not true in our coarser topology (for instance $S(5,5)$ is not open). The key fact to prove $\mathbbmss{Z}^{+}$ is not a completely Hausdorff space is: given any $S(a,b)$, then $b\mathbbmss{Z}^{+}=\{n\in\mathbbmss{Z}^{+}:b\mbox{ divides }n\}$ is a subset of $\overline{S(a,b)}$. Indeed, any basic open set containing $bk$ is of the form $S(bk,t)$ with $t,bk$ coprimes. This means $\gcd(t,b)=1$. Now $S(bk,t)$ and $S(a,b)$ have common terms if an only if $bk+tx=a+by$ for some integers $x,y$. But that diophantine equation can be rewritten as $tx-by=a-bk$ and it always has solutions because $1=\gcd(t,b)$ divides $a-bk$. This also proves $S(a,b)\neq\overline{S(a,b)}$, because $b$ is not in $S(a,b)$ but it is on the closure. ## The topology on $\mathbbmss{Z}^{+}$ induced by $\mathbbmss{B}$ is not completely Hausdorff We will use the closed-neighborhood sense for completely Hausdorff, which will also imply the topology is not completely Hausdorff in the functional sense. Let $m,n$ different positive integers. 
Since $\mathbbmss{B}$ is a basis, for any two disjoint neighborhoods $U_{m},U_{n}$ we can find basic sets $S(m,a)$ and $S(n,b)$ such that $m\in S(m,a)\subseteq U_{m},\qquad n\in S(n,b)\subseteq U_{n}$ and thus $S(m,a)\cap S(n,b)=\emptyset.$ But then $g=ab$ is both a multiple of $a$ and $b$ so it must be in $\overline{S(m,a)}$ and $\overline{S(n,b)}$. This means $\overline{S(m,a)}\cap\overline{S(n,b)}\neq\emptyset$ and thus $\overline{U_{m}}\cap\overline{U_{n}}\neq\emptyset$. This proves the topology under consideration is not completely Hausdorff (under both usual meanings). Title Hausdorff space not completely Hausdorff Canonical name HausdorffSpaceNotCompletelyHausdorff Date of creation 2013-03-22 14:16:05 Last modified on 2013-03-22 14:16:05 Owner drini (3) Last modified by drini (3) Numerical id 21 Author drini (3) Entry type Example Classification msc 54D10 Synonym $T_{2}$ space not $T_{2\frac{1}{2}}$ Synonym example of a Hausdorff space that is not completely Hausdorff Related topic CompletelyHausdorff Related topic SeparationAxioms Related topic FrechetSpace Related topic RegularSpace Related topic FurstenbergsProofOfTheInfinitudeOfPrimes Related topic SeparationAxioms Related topic T2Space
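Returning to the proof above, the key fact (every multiple of $b$ lies in the closure of $S(a,b)$) can be illustrated numerically on a finite range. This is only a sanity check of the diophantine argument, not a proof; the cutoff N and the sampled values of t are arbitrary choices.

```python
from math import gcd

N = 10**4
def S(a, b):
    """Positive members of the arithmetic progression a + b*Z, up to N."""
    return {x for x in range(1, N) if (x - a) % b == 0}

a, b = 3, 5                      # basic open set S(3, 5); gcd(3, 5) = 1
S_ab = S(a, b)
for k in (1, 2, 7):              # a few multiples of b
    bk = b * k
    for t in (2, 3, 7, 11):      # candidate steps for basic neighbourhoods of bk
        if gcd(bk, t) != 1:      # S(bk, t) is a basic set only when gcd(bk, t) = 1
            continue
        assert S(bk, t) & S_ab, (bk, t)
print("every tested basic neighbourhood of a multiple of b meets S(a, b)")
```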
## richyw Group Title question about critical points... one year ago one year ago 1. richyw Group Title Hi, I have been unable to find this in my textbook. So say I have $$f(x,y)$$ at the point $$(a,b)$$ and $\frac{\partial f}{\partial x}=\frac{\partial f}{\partial y}=0$ 2. richyw Group Title If I say $\Delta(x,y)=\frac{\partial^2 f}{\partial x^2}\cdot\frac{\partial^2 f}{\partial y^2}-\left(\frac{\partial^2f}{\partial x\partial y}\right)^2$ 3. richyw Group Title then if $\Delta (a,b) > 0\quad \text{and}\quad \frac{\partial^2f}{\partial x\partial y}>0$ I have a relative maximum. And if$\Delta (a,b) > 0\quad \text{and}\quad \frac{\partial^2f}{\partial x\partial y}<0$I have a relative minimum. 4. richyw Group Title If $$\Delta (a,b) < 0$$ I have a saddle point. And If $$\Delta (a,b) = 0$$ I can't draw any conclusions. So I have two questions. The first one (most important) is what if $\frac{\partial^2f}{\partial x\partial y}=0$ Then how do I know if this is a maximum or a minimum? The second question (less important for now), is why does this work!?!? 5. richyw Group Title sorry the mixed partial derivatives are also evaluated at (a,b)
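For reference alongside this thread, the usual textbook statement of the second-derivative test puts the unmixed partial $f_{xx}$, not the mixed partial, in the sign conditions; it is quoted here only as a comparison point.

```latex
\[
D(a,b) \;=\; f_{xx}(a,b)\,f_{yy}(a,b) \;-\; \bigl(f_{xy}(a,b)\bigr)^{2}
\]
\[
\begin{cases}
D(a,b) > 0 \text{ and } f_{xx}(a,b) > 0 & \Longrightarrow \text{ relative minimum,}\\[2pt]
D(a,b) > 0 \text{ and } f_{xx}(a,b) < 0 & \Longrightarrow \text{ relative maximum,}\\[2pt]
D(a,b) < 0 & \Longrightarrow \text{ saddle point,}\\[2pt]
D(a,b) = 0 & \Longrightarrow \text{ no conclusion.}
\end{cases}
\]
```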
# A bar is subjected to a combination of a steady load of 60 kN and a load fluctuating between -10 kN and 90 kN. The corrected endurance limit of the bar is 150 MPa, the yield strength of the material is 480 MPa and the ultimate strength of the material is 600 MPa. The bar cross-section is square with side a. If the factor of safety is 2, the value of a (in mm), according to the modified Goodman’s criterion, is ________ (correct to two decimal places).

This question was previously asked in PY 6: GATE ME 2018 Official Paper: Shift 2

## Answer (Detailed Solution Below)

31.60 - 31.65

## Detailed Solution

Given: fluctuating load between -10 kN and 90 kN; steady load 60 kN; corrected endurance limit σe = 150 MPa; yield strength σy = 480 MPa; ultimate strength σut = 600 MPa; square cross-section of side a; FOS = 2. Find a (in mm) from the modified Goodman criterion.

Goodman criterion: $$\frac{\sigma_m}{\sigma_{ut}} + \frac{\sigma_a}{\sigma_e} = \frac{1}{FOS}$$

where the mean and amplitude stresses are $$\sigma_m = \frac{\sigma_{max} + \sigma_{min}}{2}, \qquad \sigma_a = \frac{\sigma_{max} - \sigma_{min}}{2}$$

Since both the static and the fluctuating loads act together,

Pmax = Pstatic + Pfluct,max = 60 + 90 = 150 kN

Pmin = Pstatic + Pfluct,min = 60 − 10 = 50 kN

$$\sigma_m = \frac{\sigma_{max} + \sigma_{min}}{2} = \frac{1}{2}\left(\frac{150}{a^2} + \frac{50}{a^2}\right) = \frac{100}{a^2}\ \mathrm{kN/mm^2}$$

$$\sigma_a = \frac{\sigma_{max} - \sigma_{min}}{2} = \frac{1}{2}\left(\frac{150}{a^2} - \frac{50}{a^2}\right) = \frac{50}{a^2}\ \mathrm{kN/mm^2}$$

Substituting into the Goodman criterion (converting kN/mm² to MPa):

$$\frac{100 \times 10^{3}}{a^2 \times 600} + \frac{50 \times 10^{3}}{a^2 \times 150} = \frac{1}{2}$$

$$a^2 = 2 \times 10^{3}\left(\frac{100}{600} + \frac{50}{150}\right) = 1000\ \mathrm{mm^2}$$

a = 31.62 mm
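The algebra can be double-checked numerically; the snippet below just re-evaluates the same Goodman expression, with loads in newtons and stresses in MPa so the unit conversion is explicit:

```python
# Modified Goodman sizing check: sigma_m/sigma_ut + sigma_a/sigma_e = 1/FOS.
P_static = 60e3                    # N
P_fl_min, P_fl_max = -10e3, 90e3   # N
sigma_ut, sigma_e, FOS = 600.0, 150.0, 2.0   # MPa, MPa, -

P_max = P_static + P_fl_max        # 150 kN
P_min = P_static + P_fl_min        #  50 kN

# sigma_m = (P_max + P_min) / (2 a^2), sigma_a = (P_max - P_min) / (2 a^2)  [N/mm^2]
# Substituting into Goodman and solving for a^2 (in mm^2):
a_squared = FOS * ((P_max + P_min) / (2 * sigma_ut) + (P_max - P_min) / (2 * sigma_e))
print(round(a_squared ** 0.5, 2), "mm")    # 31.62 mm
```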
# All Questions 3,817 questions Filter by Sorted by Tagged with 2answers 46 views ### Can an end-fed be fed with an Isolation transformer? Typical half-wave end-fed antenna kits include either a 9:1 or 49:1 toroidal balun or unun, which is a transformer for impedance matching, but with one primary winding endpoint (for the coax shield) ... 0answers 24 views ### Isolated linearly loaded antenna? Is there a name or common descriptive term for a linearly loaded dipole or end-fed, where a (near) parallel loading element or elements are not connected (galvanically isolated) to the fed element or ... 1answer 97 views ### Obtaining input/out impedance from S-parameters for a L network This L network intend to transform Z1 to 50ohms. The design was made using a smith chart: The s-parameters plotted (S11 red - S22 green) in the smith chart is done sweeping the frequency from 200MHz ... 2answers 78 views ### Can timing information be used to compress data for LoRa transmission? I have a course challenge. It involves calculating an FFT of an audio signal of 2 combined frequencies (up to 20KHz) then sending it over LoRa through to a fixed receiver. There will be an interferer. ... 1answer 91 views 6answers 229 views ### QRN: How to chase down a solid S9 noise floor on 40m only I am new to this, having just set up my station. I have "very bad" QRN on 40m: S9. Here is the setup: High density suburb with flats and houses Yaesu FT450D with fan dipole at 6m-9m above ... 3answers 2k views ### How can I decode SSTV with only macOS software? Have you used an SSTV decoding application on an up-to-date version of macOS? Which one, and if you built from source, can you share the steps you took to do so? I do not know of a single working ... 4answers 119 views ### What comes first, Voltage or Current? Impedance can be defined as cause / effect or Z = E / I. This means that a difference in voltage potential causes a flow of current, the rate of current flow depending on the impedance. In a dipole ... 3answers 93 views ### Is it possible to boost WIFI signal? [closed] Whilst strictly not radio ham question, it does fall in to the RF area so hopefully this is not too far off topic... I have BT broadband account which entitles me to use BT-WIFI hotspots. The trouble ... 2answers 73k views ### Which HF bands are best during the day and which are better at night? I am rather new to HF operation and have recently purchased my first HF rig. I have read about bands being "open" and "closed" based on various conditions including time of day. As I have been ... 2answers 73 views ### How to figure out my car battery's capacity? I was participating in an emergency radio training and I was asked how long I could operate. I'm using an old car battery, which was sometimes not capable of starting the car in winter, which is why I ... 0answers 65 views ### Legal unlicensed transmitters permitted where? [closed] In what English-speaking countries is it specifically allowed by law or regulation to operate non-commercial unlicensed RF transmitters within certain bands at certain power levels? (such as within ... 1answer 83 views ### Does SWR change along the length of a transmission line? Wikipedia says "SWR is defined as the ratio of the partial standing wave's amplitude at an antinode (maximum) to the amplitude at a node (minimum) along the line". Standing wave voltage ... 3answers 151 views ### Aren't all antennas technically dipoles? 
Since all antennas require a ground, reference or otherwise, there are two poles for their operation. For a mag mount, the car body is the reference ground. A vertical cannot operate without a ground ... 5answers 283 views ### (When) does a transceiver re-reflect 100% of reflected power? There has been a long discussion of one question here. Phil, W8II and I agreed that it's better to post a separate question. There are several sources (some editions of "The ARRL Handbook", &... 2answers 99 views ### Can a smartphone with single antenna receive signals modulated by different schemes I am working on a system whereby a smartphone may receive signals simultaneously from different sources. As we know, the fading and pathloss will make the transmitters send their traffic with ... 0answers 36 views ### For a resonant half wave dipole, is the phase of the applied RF voltage at the feedpoint in phase with the current reflected from the ends? For a resonant half wave dipole, the voltage and current of the standing waves on the antenna are 90 degrees out of phase with each other. At resonance, is the phase of the current reflected back from ... 1answer 39 views ### The amplifier does not work for hack rf one Good day! I ask for help with repairing the hackrf one amplifier. Doesn't work at all. Accepts signals without amplifier, and LNA and VGA adjustment in SDR sharp works too! But when I put a tick on &... 2answers 99 views ### How much dB gain do you need from an LNA in front of an antenna for great low-signal reception from satellites in the 2m and 70cm bands? We are building a multi-stage LNA for a TR module we are building in the 144MHz and 440MHz range. This LNA we are considering (Qorvo QPL9547) provides about 22dB gain in the 2m and about 27dB in the ... 1answer 100 views ### Choosing a Relay for HF Applications As part of my 20m transceiver build I'm looking for a relay to by-pass an RF amplifier in the front-end board. So, to be clear, that's after the 20m broad bandpass filter and before the RF mixer (it'... 4answers 16k views ### 2m or 70cm FM mobile radio for digital mode operation Are there any 2m and/or 440cm FM mobile radios that have audio inputs for supporting digital modes, like PSK31 or SSTV? I have a Rigblaster that I use with my HF radio so I could get a mike cable and ... 2answers 64 views ### At near-QRP levels should I include a balun/unun in the antenna system? [duplicate] To my comprehension a BALUN/UNUN is an RF Transformer. If this is correct and since a BALUN/UNUN is part of the antenna system ... how much power does one expect to lose there? Specifically, how much ... 0answers 26 views ### How does the inductive reversal process of the emission of EM radiation at the receiving antenna work? The emission of EM radiation is based on an induction process in which a periodic electrical voltage creates a magnetic field that is offset by 90° in time and space, also known as the starting near ... 0answers 29 views ### I'm trying to configure HRD for IC7300 but cannot connect / detect frequency Got the firmware updated. Got the Driver updated. Got the software updated. Went through instructions several times. 2answers 10k views ### How to detect common-mode currents or “RF in the shack”? Let's say we already know that if your antenna is not suitably constructed or connected then you can get current on the shield of your coaxial feed line which will flow through the shielding/grounding ... 
1answer 136 views ### Using SW radio to detect Jupiter I read here (in the ideas section) that an old shortwave radio can be used as a receiver for listening to Jupiter but am unsure about how to do it. My radio has both a whip and ferrite antenna, ...
Question

# In the figure, there are four arcs carrying positive and negative charges. All of them have the same charge density $$\lambda$$. Pick the incorrect statement(s).

A The net dipole moment for the given charge distribution is (45)λR2.
B The resultant electric field at the center is zero.
C If a uniform E is switched on perpendicular to the plane, the charge distribution starts rotating about the X-axis.
D Potential at the centre of the given charge distribution is non-zero.

Solution

## The correct options are
B The resultant electric field at the center is zero.
C If a uniform $$\overrightarrow{E}$$ is switched on perpendicular to the plane, the charge distribution starts rotating about the X-axis.
D Potential at the centre of the given charge distribution is non-zero.
Chemistry

How many moles of potassium iodide are there in 50 grams?

Wiki User

50 g of potassium iodide is equivalent to 0.3 moles.

Related Questions

How many grams are in 0.02 moles of beryllium iodide? 0.02 moles of beryllium diiodide = 5.256 grams

How many moles of iodine are needed to react with 5 moles of potassium to create potassium iodide? Since molecules of potassium contain only single potassium atoms, molecules of iodine contain two atoms, and moles of potassium iodide contain one atom of each element, 2.5 moles of iodine are needed to react completely with 5 moles of potassium.

How many grams of oxygen will be in 2 moles of potassium dichromate? The formula for potassium dichromate is K2Cr2O7. In one mole of potassium dichromate there are seven moles of oxygen, so in two moles of K2Cr2O7 there are 14 moles of O, or 7 moles of O2, which equals 224 grams.

How many moles in potassium bicarbonate? You did not describe the amount of potassium bicarbonate in your question. But if you are asking about 1 gram of potassium bicarbonate, there are 0.0099 moles in one gram of potassium bicarbonate, and 0.0199 moles in 2 grams of potassium bicarbonate.

How many moles of potassium are in 156.4 grams? Atomic mass of potassium, K = 39.1. Amount of potassium in a 156.4 g sample = 156.4/39.1 = 4.00 mol. There are 4 moles of potassium in 156.4 grams of potassium.

How many grams are there in 3.3 moles of potassium sulfide? 3.3 moles of potassium sulfide is equal to 363.86 g.

How many moles are in K? There are 39.0983 grams in one mole of K (potassium). A mole is a number; you cannot ask how many moles are in potassium, but you may ask how many moles of a certain substance are in a given amount of potassium.

How many grams of potassium sulfate are there in 25.3 moles? 25.3 moles of potassium sulfate have a mass of 4.4409 kg.

How many moles of potassium chloride KCl are there in 50 grams? 50 g of potassium chloride is equivalent to 0.67 moles.

How many moles in 284 grams of potassium? The atomic mass of potassium is 39.1. Amount of K in a 284 g sample = 284/39.1 = 7.26 mol. There are 7.26 moles of potassium in a 284 g sample.
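All of these conversions are the same calculation, n = m / M (or m = n M). A quick check using rounded standard atomic masses (assumed values):

```python
# n = m / M, with molar masses built from rounded standard atomic masses (g/mol).
M = {
    "KI":    39.10 + 126.90,                  # potassium iodide
    "K":     39.10,                           # potassium
    "KCl":   39.10 + 35.45,                   # potassium chloride
    "K2SO4": 2 * 39.10 + 32.06 + 4 * 16.00,   # potassium sulfate
}

print(round(50 / M["KI"], 2))               # 50 g KI      -> ~0.30 mol
print(round(156.4 / M["K"], 2))             # 156.4 g K    ->  4.0  mol
print(round(284 / M["K"], 2))               # 284 g K      -> ~7.26 mol
print(round(50 / M["KCl"], 2))              # 50 g KCl     -> ~0.67 mol
print(round(25.3 * M["K2SO4"] / 1000, 2))   # 25.3 mol K2SO4 -> ~4.41 kg
```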
# Orbits in elementary, power-law galaxy bars – 1. Occurrence and role of single loops Orbits in elementary, power-law galaxy bars – 1. Occurrence and role of single loops Abstract Orbits in galaxy bars are generally complex, but simple closed loop orbits play an important role in our conceptual understanding of bars. Such orbits are found in some well-studied potentials, provide a simple model of the bar in themselves, and may generate complex orbit families. The precessing, power ellipse (p-ellipse) orbit approximation provides accurate analytic orbit fits in symmetric galaxy potentials. It remains useful for finding and fitting simple loop orbits in the frame of a rotating bar with bar-like and symmetric power-law potentials. Second-order perturbation theory yields two or fewer simple loop solutions in these potentials. Numerical integrations in the parameter space neighbourhood of perturbation solutions reveal zero or one actual loops in a range of such potentials with rising rotation curves. These loops are embedded in a small parameter region of similar, but librating orbits, which have a subharmonic frequency superimposed on the basic loop. These loops and their librating companions support annular bars. Solid bars can be produced in more complex potentials, as shown by an example with power-law indices varying with radius. The power-law potentials can be viewed as the elementary constituents of more complex potentials. Numerical integrations also reveal interesting classes of orbits with multiple loops. In two-dimensional, self-gravitating bars, with power-law potentials, single-loop orbits are very rare. This result suggests that gas bars or oval distortions are unlikely to be long-lived, and that complex orbits or three-dimensional structure must support self-gravitating stellar bars. galaxies: kinematics and dynamics 1 INTRODUCTION The idea that galaxy bars are built on a skeleton of simple closed orbits, elongated along the bar (i.e. the x1 family), and fleshed out by similar orbits with constrained librations (Lynden-Bell 1979), is conceptually simple, and popular. Athanassoula (2013) states, ‘The bar can then be considered as a superposition of such orbits,..., which will thus be the backbone of the bar’. Images of nested sets of such orbits derived from numerical models reinforce that idea (see Athanassoula 1992; Binney & Tremaine 2008). Nonetheless, there are very few analytic models of stellar bars and oval distortions in galaxies, composed of simple, nested orbits (see Contopoulos 2002; Binney & Tremaine 2008). This is unfortunate since such models could facilitate the study of bars, and advance our understanding of them. Furthermore, much of the study of orbits in numerical simulations has focused on the special cases of fixed potentials with bars of Ferrers (e.g. Athanassoula 1992) or Freeman type (Freeman 1966a,b,c). Williams & Evans (2017) point out that, because of their homogeneous density profiles, these models are not especially realistic. These latter authors study a family of very different models where the bar components are represented by thin, dense needles. Indeed, they opine that ‘models of bars... remain rather primitive’ and ‘there is ample scope for the development of new models...’. In their models of weak bars, Williams & Evans (2017) do find that, as in the classic picture, simple, loop (type x1, x4) dominate. However, their models of strong bars are dominated by much more complex ‘propeller’ orbits. 
Families of complex and chaotic orbits are found in many numerical simulations with either fixed or self-consistent potentials (e.g. Sellwood & Wilkinson 1993; Ernst & Peters 2014; Manos & Machado 2014; Jung & Zotos 2015; Valluri et al. 2015; Gajda, Łokas & Athanassoula 2016), and those families are likely to be just as important a constituent of bars as simple loop orbits. Thus, the question arises of whether the classic picture of bars as nested loop orbits has any great relevance beyond special cases or illustrative toy models? On the other hand, gas clouds in bars cannot pursue complex orbits without generating shocks and strong dissipation. Gas may be quickly expelled from strong bars dominated by complex orbits, but may play an important role in weak or forming bars. Thus, beyond generalizing the Williams & Evans (2017) models, it would be useful to know when, and in what potentials simple, nested, loop orbits can dominate the bar. Galaxies, or even limited radial regions in galaxy discs, have a wide range of potential forms. It would be useful to find relationships between the structure of potentials (symmetric and asymmetric) and the orbit types, especially simple orbit types, that they support. This is another area where our knowledge ‘remain(s) rather primitive’. In this paper I will undertake a modest exploration of this territory by studying the simplest orbits in simple power-law potentials. More realistic potentials may be decomposed into sums of power-law potential approximations, and we may expect that individual terms in these sums will bring their corresponding orbits into regions where they dominate. There are several ways to study closed orbits in bars (see Bertin 2000; Binney & Tremaine 2008). Perhaps, the most direct method is to seek them in numerical models (e.g, Contopoulos & Grosbol 1989; Athanassoula 1992; Miwa & Noguchi 1998). A second method leverages action-angle variables in a perturbation formalism, which in limiting cases fits well with the epicyclic orbit approximation (e.g. Lynden-Bell 1979; Sellwood & Wilkinson 1993; Binney & Tremaine 2008; Sellwood 2014). In this paper we will use a related method, analytic (p-ellipse) orbit approximations in a perturbation approach. In Struck (2006) it was shown that a precessing power-law ellipse (p-ellipse) approximation is quite accurate up to moderate eccentricities in a wide range of power-law potentials. There are other good approximations available, e.g. the Lambert W function discussed in (Valluri et al. 2012), but p-ellipses are especially simple. In a later work (Struck 2015a) it was found that with simple modifications, i.e. to the precession frequencies, p-ellipse approximations can also approximate high-eccentricity orbits remarkably well. Because of this frequency modification there is a continuum of Lindblad resonances parametrized by eccentricity for highly eccentric orbits. Ensembles of eccentric resonant orbits of different sizes, excited impulsively, could have equal precession periods and make up the backbone of kinematic bars or spiral arms with constant pattern speeds in symmetric halo potentials (2015b). This idea motivated the work below, but we will see that in many power-law potentials with a bar component, nearly radial, single-loop orbits in the bar frame either do not exist or are very small. The (Struck 2015b) paper did not address the question of whether these or other simple closed orbits also exist in potentials with a fixed non-axisymmetric component, e.g. 
due to a prolonged tidal component or an oval or bar-like halo. To use approximate p-ellipse orbits to investigate this, it must first be shown when, or under what conditions, p-ellipses can approximate orbits in non-axisymmetric gravitational potentials. It will be demonstrated below that in the case of simple, closed loop orbits the answer is the same as in the case of symmetric potentials – the p-ellipse approximation is again quite accurate up to moderate eccentricities in a wide range of potentials (Section 4). It will also be shown by example that in the immediate neighbourhood of resonant loop orbits, there exist other orbits that are very similar, but modestly librating (Section 4). On average, these orbits can also be described by p-ellipses, and ultimately may be more completely approximated by p-ellipse with added frequencies to represent the libration (see Section 3.2). The parameter space near the simple resonant loop is evidently densely populated with closed versions of these librating orbits, and they might be used to form the backbone of a model bar (Section 5). This suggestion is much as proposed by Lynden-Bell (1979). see also Contopoulos & Mertzanides (1977), and Lynden-Bell (1996) for discussions of resonant orbits and bar formation. (Lynden-Bell 1979, in an appendix, also describes a wider range of orbits that would fit into his formalism.) We will see in Sections 4 and 5 that many potentials with symmetric and barred components represented by single power laws do not have more than one closed (m = 2) loop orbit. Even when accompanied by their librating family of nearby orbits, we would only expect hollow, annular stellar bars to exist in these cases. A potential consisting of multiple power-law parts, each dominating in successive annular ranges, can produce a nested series of closed orbits, and their librating companions. This can make a more robust bar (see Section 5 and Fig. 8). The excitation of resonant orbits by tidal disturbances or asymmetric haloes may generate self-gravitating bars or waves (Noguchi 1987, 1988; Barnes & Hernquist 1991). It is not clear, however, that as the bars grow to non-linearity, and acquire significant self-gravity, whether the simple, closed orbits will continue to exist, or whether they can be arranged to form a stable, self-gravitating bar, i.e. whether the Poisson equations, as well as the equations of motion, can be approximately solved by ensembles of simple, loop orbits and modestly librating orbits. In fact, we will see in Section 6 that the simple planar, p-ellipse approximation at second order has very few solutions with the additional Poisson constraints. As discussed in the final two sections, these results imply that long-lived, self-consistent bars or oval distortions cannot have a substantial gas component, because there would be strong dissipation. When such bars are made of stars, essentially all orbits must librate, or have complex multi-loop forms, as seen in published simulations. 2 BASIC EQUATIONS AND P-ELLIPSE APPROXIMATIONS 2.1 Basic equations In this work we only consider orbits in the two-dimensional central plane of a galaxy disc, and generally adopt a symmetric, power-law, halo potential of the form,   \begin{eqnarray} \Phi = \frac{-GM_{\epsilon }}{2{\delta }\epsilon } \left(\frac{\epsilon }{r} \right) ^{2\delta }. 
\end{eqnarray} (1)In addition, we will include a non-axisymmetric (bar) part of the potential of the simple form,   \begin{eqnarray} \Phi _b = \frac{-GM_{\epsilon }}{\epsilon } \left(\frac{\epsilon }{r} \right) ^{2\delta _b} {e_b} \text{cos} \left( 2(\phi - \phi _o) + \Omega _b t \right), \end{eqnarray} (2)where r and ϕ are the radial and azimuthal coordinates, respectively, in the disc, eb is an amplitude parameter of the asymmetric potential, δ and δb give the radial dependence of the symmetric and non-axisymmetric potentials, and Ωb is the rotation frequency of the latter. The scale length is ε and Mε is the halo mass contained between the radius r = ε and some minimum radius. The above is a very simple form for a bar potential, with relatively few parameters, and no characteristic length (e.g. cut-off radius). Then the equations of motion for stars orbiting in the disc with the adopted potentials are   \begin{eqnarray*} \ddot{r} &=& \frac{-GM_{\epsilon }}{\epsilon ^2} \left( \frac{\epsilon }{r} \right) ^{1+2\delta } - 2{\delta _b}e_b \frac{GM_{\epsilon }}{\epsilon ^2} \left( \frac{\epsilon }{r} \right) ^{1+2\delta _b}\nonumber\\ &&\times \cos {\left( 2(\phi - \phi _o) + \Omega _b t \right)} + r \dot{\phi }^2, \end{eqnarray*}   \begin{eqnarray} \ddot{\phi } + \frac{2\dot{r}\dot{\phi }}{r} &=& -\frac{1}{r^2} \frac{\mathrm{\partial} \Phi }{\mathrm{\partial} \phi } = -2e_b \frac{GM_{\epsilon }\Omega _b}{\epsilon ^3} \left( \frac{\epsilon }{r} \right)^{2+2\delta _b}\nonumber\\ &&\times \sin {\left( 2(\phi - \phi _o) + \Omega _b t \right)}. \end{eqnarray} (3) Next, we derive dimensionless forms of these equations by substituting the dimensionless (overbar) variables and dimensionless constants defined as   \begin{eqnarray} \bar{r} = r/\epsilon , \ \bar{t} = t/\tau , \ c = \frac{GM_\epsilon \tau ^2}{ \epsilon ^3}, \ c_b = c e_b. \end{eqnarray} (4)For additional simplification we will set the value of the time-scale to $$\tau ^{-2} = \frac{GM_\epsilon }{\epsilon ^3}$$, so that c = 1.0. Despite this choice, we will carry the c factor through much of the analysis below for clarity. Then the dimensionless equations of motion are   \begin{eqnarray*} \ddot{\bar{r}} = -c \bar{r}^{-\left( {1+2\delta }\right)} - 2{\delta _b}c_b \bar{r}^{-\left( {1+2\delta _b} \right)} \cos {\left( 2(\bar{\phi } - \phi _o) + \Omega _b t \right)} + \bar{r} \dot{\bar{\phi }}^2, \end{eqnarray*}   \begin{eqnarray} \ddot{\bar{\phi }} + \frac{2\dot{\bar{r}}\dot{\bar{\phi }}}{\bar{r}} = -2c_b \Omega _b \bar{r}^{-\left( {2+2\delta _b} \right)} \sin {(\left( 2(\bar{\phi } - \phi _o) + \Omega _b t \right)}. \end{eqnarray} (5)Henceforth we will omit the overbars and assume all variables are dimensionless. We will also assume that the initial value of the azimuth (ϕo) is zero. The next step towards a more workable set of equations is to go into a reference frame rotating with the bar or pattern speed, Ωb. In this frame the dimensionless radii are the same, and in terms of the previous values, the azimuthal coordinates are ϕ΄ = ϕ − Ωbt. We will henceforth drop the primed notation, so that the equations of motion in the rotating frame are   \begin{eqnarray*} \ddot{r} = -c r^{-\left( {1+2\delta }\right)} - 2{\delta _b}c_b r^{-\left( {1+2\delta _b} \right)} \cos {(2\phi )} + r \left( \dot{\phi } + \Omega _b \right)^2, \end{eqnarray*}   \begin{eqnarray} \ddot{\phi } + \frac{2\dot{r} \left( \dot{\phi } + \Omega _b \right)}{r} = -2c_b r^{-\left( {2+2\delta _b} \right)} \sin (2\phi ) \end{eqnarray} (6)(see e.g. 
Binney & Tremaine 2008, equations 3.135a,b). 2.2 p-ellipse approximations As described in the Introduction section, we seek approximate solutions of these equations, of the form,   \begin{eqnarray} \frac{1}{r} = \frac{1}{p} \left[ 1 + e \cos \left( m{\phi } \right) \right]^{\frac{1}{2} + \delta }, \end{eqnarray} (7)which were studied in Struck (2006, Paper 1), named precessing, power-law ellipses, or ‘p-ellipses’, and found to be quite accurate despite their simplicity (for other approximations, see Valluri et al. 2012). Here the orbital scale is given by the semi-latus rectum p, m is a frequency ratio, and e is the eccentricity parameter. Note that while the form of equation (7) is that same as in Struck (2006), and subsequent p-ellipse papers, the physical meaning of the m parameter is different in the rotating coordinate system, though still a function of the ratio of precession and orbital frequencies. In the following we will focus on the case where this solution is in resonance with the bar driving force, i.e. with m = 2. If such solutions can provide accurate approximations, as in the case of symmetric potentials, then they demonstrate continuity with orbits of the purely symmetric part of the potential (since parameters from the bar potential are not included). They might also provide a useful tool for studying orbit transformation in the process of bar formation. However, it is not a priori clear how well the p-ellipse approximation will work for orbits that change their angular momenta over orbital segments (conserving it only over the whole period in the case of closed resonant orbits). Generally, equation (7) only yields closed or open-precessing loop forms except at high eccentricity. As detailed in Struck (2015a), in the case of nearly radial orbits, the addition of a harmonic term (in cos(2mϕ)) to equation (7) significantly improves the accuracy of the orbit approximation. We will not pursue this refinement in this paper, and to keep the algebra manageable, will generally neglect harmonic terms in the perturbation analyses below. However, it has been clear since the early work of Lynden-Bell (1979) that classes of orbits in bars can be described as liberating ovals. Analytic approximation of these forms requires more than a single frequency. Thus, we will explore the equations with a second frequency term (m) to get an approximate solution of the form,   \begin{eqnarray} \frac{1}{r} &=& \frac{1}{p} \left[ 1 + e \cos \left( m{\phi } \right) + c_2 e \cos \left( 2{\phi } \right) + c_x e^2 \cos \left( (2-m){\phi } \right) \right]^{\frac{1}{2} + \delta }\!\!\!\!\!\!\!\!\!,\nonumber\\ \end{eqnarray} (8)The new frequency is m (here redefined and ≠ 2), and the final term in square brackets must be included since such factors will be generated by cross terms in the equations of motion, so the solution must contain terms to balance them. The value of the frequency m may be close to 2. In such cases, the frequency 2 − m will generally have a small value, and can approximate a subharmonic of the driving frequency. This can generate liberating, near resonant loop approximations to numerical orbits, as well as more complex forms. 3 PERTURBATION ANALYSES In this section we develop the p-ellipse approximations to the orbits satisfying equations (6), and derive the corresponding relations between the orbital parameters. Assuming that the orbits are not radial, so the eccentricity e is a relatively small parameter, we can expand in that parameter. 
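Before carrying out the expansion, it may help to make the p-ellipse forms of equations (7) and (8) concrete. The following minimal Python sketch (illustrative only; the function names are mine, and the example parameter values are those quoted later in Section 4.2.1) evaluates the two forms and returns points in the bar frame.

```python
import numpy as np

def p_ellipse_radius(phi, p, e, m, delta):
    """Single-frequency p-ellipse, equation (7):
    1/r = (1/p) * [1 + e*cos(m*phi)]**(1/2 + delta)."""
    return p * (1.0 + e * np.cos(m * phi)) ** (-(0.5 + delta))

def p_ellipse_radius_two_freq(phi, p, e, m, delta, c2, cx):
    """Two-frequency form, equation (8), with the extra cos(2*phi) term
    and the cross term in cos((2 - m)*phi)."""
    bracket = (1.0 + e * np.cos(m * phi)
               + c2 * e * np.cos(2.0 * phi)
               + cx * e ** 2 * np.cos((2.0 - m) * phi))
    return p * bracket ** (-(0.5 + delta))

# Trace one closed m = 2 loop in the bar frame, using the first example
# of Section 4.2.1 (p = 9.56, e = 0.61, delta = -0.3).
phi = np.linspace(0.0, 2.0 * np.pi, 721)
r = p_ellipse_radius(phi, p=9.56, e=0.61, m=2, delta=-0.3)
x, y = r * np.cos(phi), r * np.sin(phi)
```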
While the radial equation above has zeroth-order terms, all the terms in the azimuthal equation are of first order and higher. For reasonable accuracy up to moderate eccentricities, we carry out the expansion to second order. We consider the two approximate solutions given by equations (7) and (8) separately in the following two subsections. Readers not interested in the details of these calculations may proceed to Section 4. 3.1 Single-frequency case In this subsection we consider the perturbation expansion of the resonant solution equation (7). The second-order approximations to that p-ellipse solution and its first two time derivatives are   \begin{eqnarray} &&{\frac{r}{p} \simeq \ 1 - \left(\frac{1}{2}+\delta \right) e\ \text{cos}(2\phi)} \nonumber\\ &&\quad \quad \quad + \,\frac{1}{2} \left(\frac{1}{2}+\delta \right) \left(\frac{3}{2}+\delta \right) e^2 \text{cos}^2(2\phi ), \end{eqnarray} (9)  \begin{eqnarray} \frac{1}{p} \frac{\text{d}r}{\text{d}t} &=& \frac{\dot{r}}{p} \simeq\nonumber\\ &&\left[ \left(\frac{1}{2} + \delta \right) 2e\ \text{sin}(2\phi ) \left[ 1 - \left(\frac{3}{2}+\delta \right) e\ \text{cos}(2\phi ) \right] \right] \dot{\phi },\nonumber\\ \end{eqnarray} (10)  \begin{eqnarray} \frac{\ddot{r}}{p} &\simeq& \left\lbrace \left(\frac{1}{2} + \delta \right) \left(\frac{3}{2} + \delta \right) 4 e^2 + \left(\frac{1}{2} + \delta \right) 4 e \text{cos}(2\phi ) \right.\nonumber\\ &&\left. -\, \left(\frac{1}{2} + \delta \right) \left(3 + \delta \right) 4 e^2 \text{cos}^2(2\phi ) \right\rbrace \dot{\phi }^2 \nonumber\\ &&+\, \left(\frac{1}{2} + \delta \right) 2 e\ \text{sin}(2\phi ) \ddot{\phi }. \end{eqnarray} (11)In the last of these equations, an additional term in $$e^2 \ddot{\phi }$$ was dropped on the assumption that $$\ddot{\phi }$$ is itself of first or higher order. This assumption will be confirmed below. This equation still contains one term in $$\ddot{\phi }$$. Substituting the expressions above for r and $$\dot{r}$$ into the second of equations (6), we obtain a first-order approximation for $$\ddot{\phi }$$,   \begin{eqnarray} \ddot{\phi } \simeq -4 \left(\frac{1}{2} + \delta \right) \left( \dot{\phi } + \Omega _b \right) \dot{\phi } e\ \text{sin}(2\phi ) - \frac{2c_b}{p^{2(1+\delta _b)}} \text{sin}(2\phi ), \end{eqnarray} (12)where we assume that cb is comparable to or less than e. This equation can then be substituted into equation (11), and the resulting form substituted for in the first of equations (6). The following approximations of the power-law terms can also be substituted.   \begin{eqnarray} \left(\frac{p}{r} \right)^{1+2\delta } \simeq 1 &+& 2 \left(\frac{1}{2} + \delta \right)^2 e\ \text{cos}(2\phi )\nonumber\\ &+& \left(\frac{1}{2} + \delta \right)^2 \left[ 2\left(\frac{1}{2} + \delta \right)^2 - 1 \right] \ e^2\ \text{cos}^2(2\phi ),\nonumber \\ \end{eqnarray} (13)  \begin{eqnarray} &&{\left( \frac{p}{r} \right)^{2(1+\delta _b)} \simeq 1 + 2 \left(\frac{1}{2} + \delta \right) \left(\frac{1}{2} + \delta _b \right) e\ \text{cos}(2\phi )}\nonumber\\ &&+ \,\left(\frac{1}{2} + \delta \right) \left(\frac{1}{2} + \delta _b \right) \left[ 2\left(\frac{1}{2} + \delta \right) \left(\frac{1}{2} + \delta _b \right) - 1 \right]\nonumber\\ && \times \,\ e^2\ \text{cos}^2(2\phi), \end{eqnarray} (14) After substituting equations (9)–(14), the radial equation in equation (6) yields a second-order expression for $$\dot{\phi }^2$$, which is equivalent to that obtained from angular momentum conservation in symmetric potentials. 
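As a check on the bookkeeping, the expansions in equations (9) and (13) can be reproduced with a computer algebra system. The short sympy sketch below (mine, not from the paper) expands the single-frequency p-ellipse and the (p/r)^(1+2δ) term to second order in e; the printed expressions are algebraically equivalent to the quoted forms, though sympy may arrange them differently.

```python
import sympy as sp

e, delta, phi = sp.symbols('e delta phi', real=True)
u = sp.cos(2 * phi)
half_plus_d = sp.Rational(1, 2) + delta

# r/p from equation (7) with m = 2:  [1 + e*cos(2*phi)]**(-(1/2 + delta))
r_over_p = (1 + e * u) ** (-half_plus_d)
print(sp.expand(sp.series(r_over_p, e, 0, 3).removeO()))
# equation (9): 1 - (1/2+delta)*e*cos(2phi) + (1/2)(1/2+delta)(3/2+delta)*e^2*cos^2(2phi)

# (p/r)**(1 + 2*delta), the symmetric power-law term in the radial equation
p_over_r_pow = (1 + e * u) ** (half_plus_d * (1 + 2 * delta))
print(sp.expand(sp.series(p_over_r_pow, e, 0, 3).removeO()))
# equation (13): 1 + 2(1/2+delta)^2*e*cos(2phi)
#                  + (1/2+delta)^2*[2(1/2+delta)^2 - 1]*e^2*cos^2(2phi)
```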
We can generally approximate this variable, like the radius, in powers of e cos(2ϕ),   \begin{eqnarray} \dot{\phi } \simeq f_o + f_1 e \text{cos}(2\phi ) + f_2 e^2 \text{cos}^2 (2\phi ), \end{eqnarray} (15)where fi are constant coefficients. With this final substitution, the radial equation yields a constraint equation at each order of ecos(2ϕ). The equation derived from the constant terms is   \begin{eqnarray} &&{4 \left( \frac{1}{4} - \delta ^2 \right) e^2 f_o^2 -8 \left( \frac{1}{2} + \delta \right) ^2 e^2 \Omega _b f_o} \nonumber\\ &-& 4 e e_b \left( \frac{1}{2} + \delta \right) q_b = -q + \left( f_o + \Omega _b \right)^2. \end{eqnarray} (16)where we use the following, simplifying change of variables,   \begin{eqnarray} q = c p^{-2(1+\delta )}, \ \ q_b = c p^{-2(1+\delta _b)}. \end{eqnarray} (17)Equation (16) can be viewed as a quadratic in the coefficient fo. Then the equation derived from first-order terms, i.e. terms in ecos(2ϕ), can be solved for the coefficient f1. It is,   \begin{eqnarray} 2 \left( f_o + \Omega _b \right) f_1 &=& 4 \left( \frac{1}{2} + \delta \right) f_o^2 + 2 \left( \frac{1}{2} + \delta \right)^2 q\nonumber\\ &&+\, 2 \delta _b \frac{e_b}{e} q_b + \left( \frac{1}{2} + \delta \right) \left( f_o + \Omega _b \right)^2. \end{eqnarray} (18)The equation in terms in e2cos2(2ϕ) can be solved for the coefficient f2,   \begin{eqnarray} 2 \frac{\left( f_o + \Omega _b \right)}{\left( \frac{1}{2} + \delta \right)} f_2 &=& 8f_o^2 + 10f_o f_1 + 8\left( \frac{1}{2} + \delta \right) f_o \Omega _b\nonumber\\ &&+\, 2f_1 \Omega _b - \frac{1}{2} \left( \frac{3}{2} + \delta \right) \left( f_o + \Omega _b \right)^2 - \frac{f_1^2}{\left( \frac{1}{2} + \delta \right)} \nonumber\\ && +\, \left( \frac{1}{2} + \delta \right) \left[ 2 \left( \frac{1}{2} + \delta \right)^2 - 1 \right] q\nonumber\\ && +\, 4 \left[ 1 + \frac{1}{2}\delta _b + \delta _b^2 \right] \frac{e_b}{e} q_b. \end{eqnarray} (19)This completes the set of three equations for the fi coefficients. However, we still need to use the azimuthal equation (6) to solve for the p-ellipse variables, p and e. To begin, equation (15) can be differentiated to obtain an expression for the second derivative, $$\ddot{\phi }$$, which is   \begin{eqnarray} \ddot{\phi } \simeq -2 f_o f_1 e\ \text{sin}(2\phi ) -2 \left( f_1^2 + 2 f_o f_2 \right) e^2 \text{sin}(2\phi ) \text{cos}(2\phi ). \end{eqnarray} (20)This equation and equations (9), (10), (14), and (15) can then be substituted into the azimuthal equation of motion (6) to obtain the perturbation constraint. In this azimuthal equation we retain only terms of first and second orders in e, and after cancellation of a common factor of e sin(eϕ), these appear as terms of zeroth and first orders. This is confusing since the radial equation has true zeroth-order terms giving the balance of gravitational and centrifugal forces when e, eb = 0. Thus, we will continue to refer to these azimuthal equation terms as the first- and second-order conditions. The first-order condition reduces to the following,   \begin{eqnarray} f_o f_1 = 2 \left(\frac{1}{2} + \delta \right) f_o \left( f_o + \Omega _b \right) + \frac{e_b}{e} q_b. \end{eqnarray} (21)The second-order equation (ecos(2ϕ) terms in the azimuthal equation) is   \begin{eqnarray} f_1^2 + 2f_o f_2 &=& 2 \left( \frac{1}{2} + \delta \right)\nonumber\\ &&\times \left[ -\left( f_o + \Omega _b \right) f_o \left( 2f_o + \Omega _b \right) f_1 \right. \nonumber \\ && + \left. \left( 1 + \delta _b \right) \frac{e_b}{e} q_b \right]. 
\end{eqnarray} (22)This completes the set of coefficient equations derived from the equations of motion in this resonant case. The equations are linear in the variables q, qbeb/e, and of quadratic order or less in the fi factors. Thus, to second order in any disc region with fixed values of δ, δb, there are generally zero to two resonant p-ellipse orbits. This important result is evidently due to the fact that the ratio of epicyclic/precession frequency to circular orbit frequency is a constant in power-law potentials. We will consider specific solutions in the following section. 3.2 Two-frequency case In this subsection we consider second-order solutions to the equations of motion (equations (6)) of the form of equation (8). The perturbation expansion procedure is essentially the same as that of the previous subsection. For brevity, we will not give the equations analogous to equations (9)–(14) above. In this case the angular velocity expansion form, analogous to equation (15), is   \begin{eqnarray} \dot{\phi } &\simeq& f_o + f_1 e \text{cos}(m\phi ) + f_2 e \text{cos}(2\phi )\nonumber\\ &&+\, f_3 e^2 \text{cos}^2 (m\phi ) + f_4 e^2 \text{cos}^2 (2\phi ) + f_5 e^2 \text{cos}^2 ((2-m)\phi ). \end{eqnarray} (23) Then, the coefficient equations deriving from the radial equation are analogous to equations (16)–(19), except that there are now six of them. The first is the equation derived from the constant terms,   \begin{eqnarray} &&{\left( \frac{1}{2} + \delta \right) \left( \frac{3}{2} + \delta \right) \left( m^2 + 4c_2^2 \right) e^2 f_o^2 - 4 \left( \frac{1}{2} + \delta \right) c_2 e e_b q_b}\nonumber\\ &-&2 \left( \frac{1}{2} + \delta \right)^2 f_o \left( \Omega _b + f_o \right) \left( m^2 + 4c_2^2 \right) e^2 = -q + \left( f_o + \Omega _b \right)^2, \nonumber\\ \end{eqnarray} (24)The equation from the e cos(mϕ) terms is   \begin{eqnarray} 2 \left( f_o + \Omega _b \right) f_1 &=& \left( \frac{1}{2} + \delta \right) m^2 f_o^2 + 2 \left( \frac{1}{2} + \delta \right)^2 q\nonumber\\ &&+ \,2 \delta _b \frac{e_b}{e} q_b + \left( \frac{1}{2} + \delta \right) \left( f_o + \Omega _b \right)^2. \end{eqnarray} (25)The equation from the e cos(2ϕ) terms is   \begin{eqnarray} 2 \left( f_o + \Omega _b \right) f_2 &=& 4 \left( \frac{1}{2} + \delta \right) c_2 f_o^2 + 2 \left( \frac{1}{2} + \delta \right)^2 c_2 q\nonumber\\ && +\, 2 \delta _b \frac{e_b}{e} q_b + \left( \frac{1}{2} + \delta \right) c_2 \left( f_o + \Omega _b \right)^2. \end{eqnarray} (26)The first second-order equation from the e2 cos2(mϕ) terms is   \begin{eqnarray} 2 \left( f_o + \Omega _b \right) f_3 &=& 2 \left( \frac{1}{2} + \delta \right) m^2 f_o f_1 \nonumber\\ &&-\,2 \left( \frac{1}{2} + \delta \right) \left( \frac{3}{2} + \delta \right)m^2 f_o^2 + 2 \left( \frac{1}{2} + \delta \right)^2\nonumber\\ &&\times\, m^2 f_o \left( f_o + \Omega _b \right)+ \left( \frac{1}{2} + \delta \right)^2\nonumber\\ &&\times\, \left[ 2 \left( \frac{1}{2} + \delta \right)^2 - 1 \right] q - f_1^2\nonumber\\ &&-\, \frac{1}{2} \left( \frac{1}{2} + \delta \right) \left( \frac{3}{2} + \delta \right)\nonumber\\ &&\left( f_o + \Omega _b \right)^2 + 2 \left( \frac{1}{2} + \delta \right) f_1 \left( f_o + \Omega _b \right). 
\end{eqnarray} (27)The equation from the e2 cos2(2ϕ) terms is   \begin{eqnarray} \frac{2}{c_2} \left( f_o + \Omega _b \right) f_4 &=& \left( \frac{1}{2} + \delta \right) f_2 \left( 9 f_o + \Omega _b \right)\nonumber\\ &&-\, 8 \left( \frac{1}{2} + \delta \right) \left( \frac{3}{2} + \delta \right) {c_2} {f_o}^2\nonumber\\ &&+\, 8 \left( \frac{1}{2} + \delta \right)^2 {c_2} f_o \left( f_o + \Omega _b \right)\nonumber\\ &&+\, 4 \left( \frac{1}{2} + \delta \right) \left( \frac{3}{2} + \delta \right) \frac{e_b}{e} q_b\nonumber\\ &&+\, \left( \frac{1}{2} + \delta \right)^2 \left[ 2 \left( \frac{1}{2} + \delta \right)^2 - 1 \right]{c_2} q\nonumber\\ &&-\, \frac{1}{2} \left( \frac{1}{2} + \delta \right) \left( \frac{3}{2} + \delta \right) {c_2} \left( f_o + \Omega _b \right)^2. \end{eqnarray} (28)And the equation from the e2 cos2((2 − m)ϕ) terms is   \begin{eqnarray} &&{2 \left( f_o + \Omega _b \right) f_5}\nonumber\\ &=&- \left( \frac{1}{2} + \delta \right) \left[ \left( \frac{3}{2} + \delta \right) \frac{c_2}{2} - c_x \right] \left( f_o^2 + \left(f_o + \Omega _b \right)^2 \right)\nonumber\\ &&-\, \frac{m}{2} \left( \frac{1}{2} + \delta \right) \left[ 8 \left( \frac{1}{2} + \delta \right) c_2 f_o \left( f_o + \Omega _b \right) + 2 \frac{e_b}{e} q_b \right]\nonumber\\ &&+\, \frac{1}{2} \left( \frac{1}{2} + \delta \right)^2 \left[ \frac{c_2}{2} \left( \left( \frac{1}{2} + \delta \right)^2 - 1 \right) + c_x \right] q\nonumber\\ &&+\, \frac{\delta _b}{2} \left( \frac{1}{2} + \delta \right) \left( \frac{1}{2} + \delta _b \right) \frac{e_b}{e} q_b\nonumber\\ &&+\, \left( \frac{1}{2} + \delta \right) \left( f_2 + c_2 f_1 \right) \left( f_o + \Omega _b \right) + f_1 f_2\!\!\!\!\!\!\!\! . \end{eqnarray} (29)As in the previous case, most of these equations can be used to obtain values of the fi coefficients in equation (23). To proceed, we differentiate the quantity $$\dot{\phi }^2$$, derived from equation (23) to get,   \begin{eqnarray} &-&\ddot{\phi } \simeq f_o \left[ mf_1 e\ \text{sin}(m\phi ) + 2f_2 e\ \text{sin}(2\phi ) \right]\nonumber\\ &+& \left( f_1^2 + 2 f_o f_3 \right) m e^2 \text{sin}(m\phi ) \text{cos}(m\phi )\nonumber\\ &+& 2 \left( f_2^2 + 2 f_o f_4 \right) e^2 \text{sin}(2\phi ) \text{cos}(2\phi )\nonumber\\ &+& \frac{2-m}{2} \left( f_1 f_2 + 2 f_o f_5 \right) e^2 \text{sin}((2-m)\phi ) \text{cos}((2-m)\phi ). \end{eqnarray} (30) This equation can be used to eliminate the $$\ddot{\phi }$$ term in the angular equation of motion, as in the previous case. Then we obtain five coefficient equations by gathering like terms in this equation. The first of these is obtained from the me sin(mϕ) terms,   \begin{eqnarray} f_1 = 2 \left(\frac{1}{2} + \delta \right) \left( f_o + \Omega _b \right). \end{eqnarray} (31)The equation from the 2e sin(2ϕ) terms is   \begin{eqnarray} f_o f_2 = 2 \left(\frac{1}{2} + \delta \right) c_2 f_o \left( f_o + \Omega _b \right) + \frac{e_b}{e} q_b. \end{eqnarray} (32)The equation from the me2 sin(mϕ)cos(mϕ) terms is   \begin{eqnarray} 2f_o f_3 = 2 \left(\frac{1}{2} + \delta \right) \left( f_o + \Omega _b \right) \left( 2f_1 - f_o \right) - f_1^2. \end{eqnarray} (33)The equation from the 2e2 sin(mϕ)cos(mϕ) terms is   \begin{eqnarray} 2f_o f_4 &=& 2 \left(\frac{1}{2} + \delta \right) c_2^2 \left( f_o + \Omega _b \right) \left( 2f_2 - f_o \right) - f_2^2\nonumber\\ &&+\, \frac{c_2}{2} \left(\frac{1}{2} + \delta \right) \left(\frac{1}{2} + \delta _b \right) \frac{e_b}{e} q_b. 
\end{eqnarray} (34)And the equation from the e2 sin((2 − m)ϕ) terms is   \begin{eqnarray} &&{(2-m) f_o f_5 = 2 \left(\frac{1}{2} + \delta \right) \left( f_o + \Omega _b \right)}\nonumber\\ &&{\left[ -mf_2 + 2 c_2 f_1 + (2-m) \left( -\frac{c_2}{2} + c_x \right) f_o \right]}\nonumber\\ &-& \frac{2-m}{2} f_1 f_2 + c_2 \left(\frac{1}{2} + \delta \right) \left(\frac{1}{2} + \delta _b \right) \frac{e_b}{e} q_b. \end{eqnarray} (35) These five equations from the azimuthal equation, together with the six from the radial equation (equations (24)–(29)), complete the set needed to solve for the parameters fo − f5, m, p, e, c2, and cx of the approximate solution given by equations (8) and (23). 4 APPROXIMATE LOOP ORBIT SOLUTIONS In this section we explore solutions to the perturbation equations of Section 3.1 based on the simple p-ellipse of equation (7). These solutions may be parents of families of orbits in non-self-gravitating galaxy bars, which are driven by an external potential, as discussed below. The external potential may due to a prolonged tidal perturbation, or a bar-like dark halo. 4.1 A very simple special case Solutions to the single-frequency cases discussed in Section 3.1 are determined by the coefficient equations (16), (18), (19), (21), and (22). We note that the sum fo + Ωb is a common term in these equations, and in the case where fo = −Ωb the equations are simplified significantly. This is the case we consider in this subsection. We note that this case has no special physical significance. The factor fo is the mean rotation frequency of the star in the pattern frame, and there is no obvious reason for it to equal the opposite of the pattern frequency. However, this simple case suggests a simple analytic solution strategy, which can be generalized. This strategy makes use of the fact that if we assume the value of one of the unknowns (fo), then we can treat the factor (eb/e)qb as an unknown variable, even though it is actually a combination of the variables e, p, and the presumably known potential amplitude eb. We are inverting the direct problem of finding to fo to ask what value of eb is needed to get the assumed value of fo. Then, the solution is obtained via the following procedure. First, use equations (21) and (18) to eliminate the variables f1 and (eb/e)qb, respectively, from equation (19). The latter is a quadratic that gives q in terms of δ, δb, and fo. The value of q yields the value of p and qb, and the remaining solution parameters are obtained directly from the other equations. We require the solution for q to be a positive, real number. In this special case, the quadratic solutions are imaginary or negative for a large range of parameter values. Even when there is a real, positive solution (or two), other physical constraints must be satisfied, e.g. 0 ≤ e ≪ 1. A range of parameter values have been explored, and relatively few physical solutions have been found in this case. 4.2 More general closed loop orbits Fortunately, when we deviate from the special case of the previous subsection (fo = −Ωb), we find more physical solutions to the coefficient equations, i.e. closed loop orbits. We consider several examples in this subsection. Nonetheless, we will follow the same procedure for solving the coefficient equations as in the previous subsection. Specifically, we will adopt a value of the pattern speed, Ωb, and a value of the mean orbital speed of the star, fo, as some multiple of the former. 
In principle, we could adopt values of the bar parameters, Ωb, eb (and δb), and then solve for the solution parameters e, p, fi. However, as discussed above, the solution is easier to obtain if we assume a value of fo and derive the corresponding value of eb. In the following two subsections we consider some specific examples of physically relevant solutions.

4.2.1 Slowly rising rotation curve examples

The first sequence of examples has relatively slowly rising rotation curves appropriate to the inner part of a galaxy disc in both the symmetric part of the potential (with δ = −0.3) and the asymmetric part (with δb = −0.2). We also adopt the (arbitrary) value of Ωb = 0.315, and consider a range of values of fo and the ratio nb = −Ωb/fo. For a first example we take fo = −0.3 and nb = 1.05; the solution of the coefficient equations then yields: p = 9.56, e = 0.61, and eb = 5.68. All of these solution values are relatively large, so we might not expect the perturbation approximation to be very accurate in this case. Fig. 1 compares the p-ellipse approximation with these parameters to a numerically integrated orbit with the same initial conditions in the pattern frame. That is, the initial conditions are ϕ = 0, dr/dt = 0, with r given by the p-ellipse equation and dϕ/dt given by a value like that of equation (15), with c = 1. It is apparent that the two orbits are very similar. A small subharmonic modulation (four times the fundamental period) is visible in the numerical orbit in the lower panel, which presages a trend we will see more of below. This modulation is also responsible for the finite thickness of the numerical orbit curve in Fig. 1.

Figure 1. A sample orbit determined by the parameter value fo = −0.3 and in the potential specified by the values δ = −0.3, δb = −0.2 and pattern speed Ωb = 0.315, in the dimensionless units. In both panels the blue solid curve is the result of numerically integrating the equations of motion with the given initial conditions (see the text for details), and the red-dashed curve is the p-ellipse approximation. The upper panel is the view on to the disc in the pattern frame; the lower panel shows radius versus azimuthal advance, which is negative in this case.

While the fit of the analytic to the numerical curve in Fig. 1 is impressive, there is an important caveat. In the previous paragraph, the initial angular velocity used in the numerical orbit was described as ‘like that of equation (15)’. The value predicted by equation (15) is $$\dot{\phi } = -0.49$$, while the value that yields the good fit is $$\dot{\phi } = -0.87$$. Thus, the analytic equation for $$\dot{\phi }$$ does not yield an accurate approximation for values of e as large as in the present example. This is understandable, since in the present case the analytic prediction is that the terms of equation (15) are (fo, ef1, 0.5e2f2) = ( − 0.3, −0.51, 0.16).
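The orbit in Fig. 1 can be reproduced directly by integrating the dimensionless equations of motion (6) in the rotating frame. A minimal sketch follows (the solver choice and tolerances are illustrative, not from the paper); it uses the parameter values quoted above and the fitted initial angular velocity of −0.87 discussed in the caveat.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Potential and orbit parameters from the first example of Section 4.2.1
delta, delta_b = -0.3, -0.2
Omega_b, e_b = 0.315, 5.68
c = 1.0                      # fixed by the choice of time-scale
c_b = c * e_b
p, e = 9.56, 0.61

def rhs(t, y):
    """Dimensionless equations of motion (6) in the frame rotating with the bar."""
    r, rdot, phi, phidot = y
    rddot = (-c * r ** (-(1 + 2 * delta))
             - 2 * delta_b * c_b * r ** (-(1 + 2 * delta_b)) * np.cos(2 * phi)
             + r * (phidot + Omega_b) ** 2)
    phiddot = (-2 * rdot * (phidot + Omega_b) / r
               - 2 * c_b * r ** (-(2 + 2 * delta_b)) * np.sin(2 * phi))
    return [rdot, rddot, phidot, phiddot]

# Initial conditions: phi = 0, dr/dt = 0, r from the p-ellipse, fitted dphi/dt = -0.87
r0 = p * (1.0 + e) ** (-(0.5 + delta))
sol = solve_ivp(rhs, (0.0, 200.0), [r0, 0.0, 0.0, -0.87], rtol=1e-9, atol=1e-9)
x, y = sol.y[0] * np.cos(sol.y[2]), sol.y[0] * np.sin(sol.y[2])
```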
Clearly, the series on the right-hand side of equation (15) is not converging rapidly with the relatively large value of e. This slow $$\dot{\phi }$$ convergence is a limitation on the p-ellipse approximation, but it is a predictable consequence of large values of e. Note that the more accurate value for the numerical orbit was found by trial and error, and so too in most of the examples below. Nonetheless, this first example shows that a p-ellipse approximation, developed for orbits in symmetric potentials, can also fit rather flattened orbits in two-part, asymmetric potentials quite well. One major difference between the adopted p-ellipse solution and those used for symmetric potentials (see Struck 2006) is that we have changed the frequency ratio to a fixed resonant value, rather than the value representing the precession of the given symmetric potential.

A second orbital example, specified by fo = −0.3029, nb = 1.04, and the same pattern frequency (Ωb = 0.315), and with derived analytic values of p = 5.0076, e = 0.9542, and eb = 3.2069, is shown in Fig. 2. In this high eccentricity case, the predicted value of $$\dot{\phi }$$ was −0.32, and the fitted value, −1.19. Although the value of fo is only slightly changed from the previous orbit, this orbit is much smaller and more elongated. In fact, physical orbit solutions can generally only be found over a small range of fo values.

Figure 2. Like Fig. 1, but for the orbit determined by the parameter value fo = −0.3029, again in the δ = −0.3, δb = −0.2 potential with pattern speed Ωb = 0.315. The blue solid curve shows the numerically integrated orbit in both panels. The red-dashed curve in the upper panel is the analytic orbit. The green-dotted curve in the lower panels is the analytic curve, but corrected as described in the text. The lowest panel shows the azimuthal velocity in the pattern frame as a function of radius. Note that the azimuthal velocity is negative, and the speed is higher at small radii.

Two differences from the previous example are evident in the top panel of Fig. 2. First, the fit is not as good. This is not surprising given the high eccentricity. (Note that the flattening at a given p-ellipse eccentricity differs from that of a simple ellipse; see Struck 2006.) Secondly, the numerical orbit is thicker, a result of stronger subharmonic modulation, which is evident in the lower panel of Fig. 2. As Lynden-Bell (2010) found for elliptical orbits, and Struck (2015a, see equation 12) confirmed for p-ellipses, a given orbit can be approximated much more accurately by using the eccentricity derived from the values of its inner and outer radii, rather than that derived as above. In the present case, once we have found a closed numerical orbit that best agrees with the p-ellipse approximation, we can use its extremal radii to get a better estimate of the eccentricity (here e = 0.825).
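The extremal-radii correction mentioned above is straightforward to apply. For the m = 2 p-ellipse of equation (7) the minimum and maximum radii occur where cos(2ϕ) = ±1, so e and p can be recovered from the inner and outer radii of the closed numerical orbit. The short sketch below is my own reconstruction of that idea (it simply inverts equation 7), not code from the paper.

```python
def e_p_from_extremal_radii(r_min, r_max, delta):
    """Invert the m = 2 p-ellipse of equation (7), for which
    r_min = p*(1 + e)**(-(1/2 + delta)) and r_max = p*(1 - e)**(-(1/2 + delta)).
    Valid as written for delta > -1/2 (positive exponent 1/2 + delta)."""
    k = (r_max / r_min) ** (1.0 / (0.5 + delta))   # k = (1 + e) / (1 - e)
    e = (k - 1.0) / (k + 1.0)
    p = r_min * (1.0 + e) ** (0.5 + delta)
    return e, p

# For the Fig. 2 orbit (delta = -0.3), feeding in the inner and outer radii of the
# closed numerical orbit gives the corrected eccentricity (about 0.825 in the text).
```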
This latter approach is used in the lower panel of Fig. 2, and the result is an excellent fit despite the high eccentricity (except for the subharmonic modulation). The difference between these two examples was a slight increase of the parameter fo in the second case. If we increase fo further (but still slightly), we find the trends described above continue. That is, the numerical orbits tend to get a little smaller and more eccentric, but the p-ellipse approximation tends to exaggerate the eccentricity by greater amounts, unless corrected as just described. If instead we decrease the (negative) value of fo from the value used in the first example, then the trends reverse. That is, we get bigger orbits that are less eccentric, but also tend to have higher subharmonic modulation; they are thicker. This last trend continues up to the point that the (numerical) orbits change their shape altogether.

A third example shows the nature of this shape change with the present potentials. This case, shown in Fig. 3, has fo = −0.2990, nb = 1.0535, and the same pattern frequency as the previous examples. The derived analytic values are p = 15.79, e = 0.4301, and eb = 8.8691. The predicted value of $$\dot{\phi }$$ was −0.4973, and the fitted value, −0.7142. In this case, the resonant orbit has three distinct loops, which cannot be fit by a single-loop p-ellipse. However, the p-ellipse approximation appears as a low-radius boundary, and provides a fit only in the vicinity of the lowest radial excursion. That is a general result for such multi-loop orbits. The basic peak and trough phases of the numerical model are also captured by the analytic model. This example suggests that the subharmonic component is becoming much stronger as we decrease the value of fo.

Figure 3. Like Fig. 1, but for the orbit determined by the parameter value fo = −0.2990, again in the δ = −0.3, δb = −0.2 potential with pattern speed Ωb = 0.315. In both panels, the blue solid curve shows the numerically integrated orbit and the red dashed curve is the analytic orbit. The sub-harmonic modulation is clear in the lower panel.

Slight variations in the initial angular velocity yield a family of orbits that are similar, but with thicker loops. With enough initial velocity deviation the three-loop form disappears, and the orbits are better described as a filled annulus between the p-ellipse and the outer loop. It is also true that initial velocity variations produce thicker loops in the previous examples. Thus, each closed, resonant orbit has a family of such offspring, extending over a finite interval of the initial value of $$\dot{\phi }$$. Fig. 4 shows a fourth example, with a still larger value of fo = −0.2986 (nb = 1.0549), and the same pattern frequency. In this case, the derived analytic values are p = 22.90, e = 0.3305, and eb = 12.32. The predicted value of $$\dot{\phi }$$ was −0.4788, and the fitted value, −0.6120. Again, we see a three loop pattern, with the analytic orbit serving as a low radius boundary to the numerical orbit.
The subharmonic period is longer relative to the analytic period than in the previous example, and their combined width is greater. It is clear that the centre of each loop is offset from the origin, in alternating directions along the x-axis. If we increase the value of fo to −0.2984, the analytic approximation breaks down entirely, yielding negative values of p, the radial scale. Between the current value of fo and that critical value, the overall orbit size and the spread between loops continue to get larger. The analytic eccentricity, and that of the loops, decreases. The offset of loop centres also increases.

Figure 4. Like Figs 1 and 3, but for the orbit with fo = −0.2986. The orbit is similar to that in Fig. 3, but larger and broader. The lower panel shows that the subharmonic frequency is becoming dominant.

Given the large values of eb in these last two or three examples, a quantity assumed to be of order e in the perturbation expansion, it is not surprising that the analytic approximation breaks down. It is more surprising that it continues to provide some information as it breaks down. In sum, all of these sample orbits have the same pattern speed, so they, and their less regular offspring, could be combined to make a model bar. However, this model would require increasing bar strength (eb) with increasing radius. We might not expect the bar to extend beyond the radius where the orbit breaks into multiple loops, because the pattern becomes less distinct and more circular. Gaseous components would experience dissipation and circularization at loop meeting points. This radius apparently differs from the co-rotation radius usually thought to give the outer extent of bars. Alternately, if the needed bar strength occurs only over a limited range of radii, then a hollow annular stellar bar would be possible. In this latter case, if the value of eb changed with time the bar structure would evolve. For example, it could grow larger and wider as eb increases.

4.2.2 More steeply rising rotation curve examples

In this subsection we consider a second set of examples drawn from a potential with a more steeply rising rotation curve, indeed close to a solid-body potential. Specifically, we take δ = −0.8 in the symmetric part of the potential and δb = −0.7 in the asymmetric part. The case with δ = −0.5 is a singular one, where the perturbation approach above breaks down. The character of the orbits is rather different for δ values on either side of this critical value. One difference is that the resonant, closed orbits generally only exist at higher pattern speeds than in the previous case. For the examples in this section, we take Ωb = 1.05. We will consider three orbits in this bar pattern. The first, shown in Fig. 5, is a large, but low eccentricity one, specified by the parameter value fo = −0.77. The numerical orbit has a small, but finite width, and some subharmonic modulation is visible in the lower panel. Not surprisingly, the analytic fit to this nearly circular orbit is very good.

Figure 5. A sample orbit determined by the parameter value fo = −0.77 (f1 = 1.025, f2 = 0.464) and in the potential specified by the values δ = −0.8, δb = −0.7 and pattern speed Ωb = 1.05, in the dimensionless units.
In both panels the blue solid curve is the numerically integrated orbit, and the red dashed curve is the p-ellipse approximation. The upper panel is the view on to the disc in the pattern frame; the lower panel shows radius versus the negative azimuthal advance. Note the large size, and near circularity of the orbit.

We skip over a range of large, low eccentricity orbits to the much smaller, and visibly flatter one in Fig. 6. Though flatter, this orbit does not pinch inward like the analytic approximation. The fit is not good, but we can improve it using the maximum and minimum radii of the numerical orbit to derive new values of e and p, as described for the Fig. 2 orbit. The revised fit is good and shown in the lower panels as a green dotted curve, which basically overwrites the blue numerical curve in the region of overlap. The thickness of the numerical orbit is larger relative to its mean radius than that of the orbit of Fig. 5, but it is still small.

Figure 6. Like Fig. 5, but for the orbit with fo = −0.715 (f1 = 1.007, f2 = 0.540) in the same potential, with the same pattern speed. The blue solid curve shows the numerically integrated orbit in both panels. The red-dashed curve in the upper panel is the analytic orbit, which does not yield a good fit in this case. The green-dotted curve in the lower panels is the analytic curve, corrected with a lower eccentricity and as further described in the text.

The value of the parameter fo is changed only slightly between the cases shown in Figs 6 and 7, but the effect is significant. Specifically, the orbit is beginning to get much thicker (and this trend accelerates for yet larger values of fo). The fit of the analytic orbit is worse than in Fig. 6. In this orbit, and others not shown with lower values of fo, the analytic curve approximates a portion of the inner boundary of the numerical orbit (as in Figs 3 and 4). While the orbits get wider as fo is decreased, their inner boundary gets smaller and flatter only slowly. The lower panel of Fig. 7 shows that the subharmonic modulation is becoming dominant as in the more extreme examples of the previous subsection. In fact, two subharmonics are visible, one at twice the basic frequency, as well as the lower frequency one.

Figure 7. Like Fig. 6, but for the orbit with fo = −0.71 in the same potential, with the same pattern speed. In the lower panel the analytic orbit has been omitted for clarity.
The subharmonic pattern dominates.

4.3 Generalizations

It is interesting to compare the orbit sequences of the last two subsections. The flattest orbits are the smallest in both cases, with larger orbits becoming more nearly circular. In none of the cases above are the orbits even close to the flatness apparent in some observed bars. Interestingly, this statement does not apply to the analytic bars of the case in Section 4.2.1. Fig. 2 hints at how these can get much longer and flatter than the corresponding ‘true’ numerical orbit. This, along with other results described below and in the literature, suggests that if closed orbits play a significant role in flat bars, they are not simple loops. If that is true, then it is also likely that shocks in the gas play an important role in such bars.

The two sequences above also share the characteristic that at one end of the range of allowable fo values the subharmonic modulation becomes very strong, and the relative width of the orbit grows as fast as or faster than the relative change in the mean radius. The two sequences differ, however, in which end of the spectrum shows this phenomenon. In the first sequence (Section 4.2.1), it occurs at the smaller values of fo, where the orbits are large and more circular. In the second sequence (Section 4.2.2), it occurs at larger values of fo, where the orbits are smaller and flatter. This behaviour reversal seems to occur generally across the δ = −0.5 singularity, based on additional cases not presented here.

It should also be noted that there appears to be a large region in the (δ, δb, Ωb) parameter space where simple closed loop orbits do not exist. The cases with low pattern speeds, and values of δ, δb < −0.5, have already been mentioned. This also seems to be true for falling rotation curves with values of δ, δb > 0.0. We have not explored this parameter space extensively, so these conclusions are preliminary. The conclusion that bars are more likely in regions with rising rotation curves, an extrapolation of these loop orbit results, does seem in accord with observational and modelling results (e.g. Sellwood 2014). On the other hand, loop-like orbits are found numerically in falling rotation curve potentials in parameter regions where the perturbation approximation fails to give solutions. This will be explored in a sequel paper.

5 LOOP BARS IN GENERAL POTENTIALS

In the previous section we considered examples of sets of closed loop orbits in symmetric and bar-like power-law potentials. In the case of a single value of the bar potential amplitude eb in the disc, there is generally one such orbit, though zero to two closed loop orbits are allowed by the perturbation theory in Section 3.1. However, the numerical results of the last section indicate that one such orbit is the most common result, and this result extends to quite high eccentricity. If the magnitude of eb increases outward, there can be a number of nested near-loop orbits, which could form the skeleton of a bar. However, the potentials of galaxy discs are not well described by a single power law in radius, but rather have rising rotation curves in the inner parts and flat or falling rotation curves in the outer regions.
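The connection between δ and the shape of the rotation curve invoked here can be made explicit: for the symmetric potential of equation (1), the zeroth-order balance in equation (6) gives a circular speed scaling as vc ∝ r^(−δ), so δ < 0 corresponds to a rising rotation curve, δ = 0 to a flat one, and δ > 0 to a falling one. A small illustrative sketch (plotting choices are mine):

```python
import numpy as np
import matplotlib.pyplot as plt

r = np.linspace(0.5, 20.0, 400)
for d, label in [(-0.8, 'near solid body'), (-0.3, 'slowly rising'), (0.0, 'flat')]:
    v_c = r ** (-d)      # v_c = sqrt(c) * r**(-delta), with c = 1 in dimensionless units
    plt.plot(r, v_c, label=f'delta = {d} ({label})')
plt.xlabel('r (dimensionless)')
plt.ylabel('v_c')
plt.legend()
plt.show()
```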
In this section we consider another series of loop orbits in a potential with varying power-law indices, as an example of a potential approximated by a sum of power laws. The main point of this example is to show that results like those of the previous section can be generalized to potentials that can be approximated by such variable power-law forms. Fig. 8 shows four representative orbits in this example. The assumed rotation curve rises moderately steeply in the inner regions, but transitions to a flat rotation curve in the outer regions. The equipotentials are also assumed to be of about the same shape as the loop orbits. The values of the potential indices from the inner orbit outwards are δ = δb = ( − 0.25, −0.20, −0.15, 0.0). That is, we assume that local power-law potential approximations to the potential have these index values and that δ = δb throughout. The value of the pattern speed is Ωb = 0.315. From the inside out, the values of (p, e, eb) for the analytic approximations are (4.11, 0.826, 1.97), (5.19, 0.495, 2.53), (6.36, 0.306, 3.31), and (7.03, 0.108, 4.89).

Figure 8. A sequence of four closed orbits with the same pattern speed, Ωb = 0.315. The three inner bar-like orbits derived by numerical integration are shown by thick, blue curves, and their analytic approximations are shown by thin, red curves. The fourth, outermost dotted orbit consists of three loops, and no analytic approximation is shown. The initial conditions $(r_o, \dot{\phi _o})$ are (3.53, −1.23), (4.60, −1.01), (5.80, −0.837), and (6.68, −0.657).

Despite their varying eccentricity, the three nested innermost orbits represent a clear bar structure. Between them and the outermost orbit there will be orbits with increasing thicknesses (and decreasing eccentricities), like those of Figs 3 and 4. As the potential index approaches δ = 0.0 with increasing radius, it becomes harder to get single-loop orbits. Multiple (but not necessarily three) looped orbits become the rule at large radius. The one shown in Fig. 8 is particularly interesting because it represents a set derived from a small range of initial radii ro that have an inner loop, which partially overlaps a significant part of one of the simple inner loops. On the other hand, the outer loop of this orbit is nearly circular. A star traversing the relevant part of the inner loop would, in some sense, look like it was pursuing a simple bar orbit. However, on the outer loop the star would look like it was on a circular orbit well outside the bar. Such orbits do not seem to have been studied in the bar literature (see e.g. the reviews of Athanassoula 2013; Sellwood 2014), though orbits with small loops at their ends are common. (However, in a recent paper Christodoulou & Kazanas 2017 find similar orbits in spherical potentials.) We note that it is a natural extension of the analytic approximation (via varying the value of fo) that leads to them.
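The nested inner loops of Fig. 8 can be drawn directly from the quoted analytic parameters. The following minimal script (mine; purely illustrative) plots the three inner p-ellipse approximations using the (p, e) values and local δ indices listed above.

```python
import numpy as np
import matplotlib.pyplot as plt

# (p, e, local delta) for the three inner bar-like loops of Fig. 8 (Omega_b = 0.315)
loops = [(4.11, 0.826, -0.25), (5.19, 0.495, -0.20), (6.36, 0.306, -0.15)]

phi = np.linspace(0.0, 2.0 * np.pi, 721)
for p, e, d in loops:
    r = p * (1.0 + e * np.cos(2.0 * phi)) ** (-(0.5 + d))   # equation (7) with m = 2
    plt.plot(r * np.cos(phi), r * np.sin(phi))
plt.gca().set_aspect('equal')
plt.title('p-ellipse skeleton of the Fig. 8 bar (sketch)')
plt.show()
```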
As in the examples of the previous section, such closed, resonant orbits have a family of nearby (in the space of initial conditions) orbits that do not close. Evidently, their non-circularity would introduce a significant component of apparent velocity dispersion into the outer disc, beyond that of small bar perturbations of near circular orbits of large radii. They seem worthy of further study. In sum, though the example discussed in this section is ad hoc, it shows that a loop orbit skeleton of a model bar can be constructed in a potential more complex than that consisting of monotonic power laws in both symmetric and asymmetric parts. The well-studied Ferriers bars provide more examples (see Athanassoula 1992). It also shows, as mentioned above, such a model bar effectively ends as the rotation curve becomes flat. This does not appear to be directly related to a co-rotation radius. 6 THE POSSIBILITIES FOR SELF-CONSISTENT LOOP ORBIT BARS The gravitational potentials considered in the previous section were fixed, and presumably of external origin. We sought simple, closed, bar-like orbits in those potentials that could be analytically approximated with p-ellipses to second order in the eccentricity parameter. In this section we ask whether the bar potential itself could be constructed from such orbits? We will see that there are complications, and such bars are likely to be rare or non-existent, at least in two dimensions. 6.1 Poisson equation and constraints To construct bar potentials from loop orbits, we use the two-dimensional Poisson equation, which can be written in dimensionless units as   \begin{eqnarray} \triangledown ^2 \Phi = \frac{1}{r} \frac{\mathrm{\partial} }{\mathrm{\partial} r} \left( r \frac{\mathrm{\partial} \Phi }{\mathrm{\partial} r} \right) + \frac{1}{r} \frac{\mathrm{\partial} }{\mathrm{\partial} \phi } \left( \frac{1}{r} \frac{\mathrm{\partial} \Phi }{\mathrm{\partial} \phi } \right) = 4\pi \rho , \end{eqnarray} (36)where the scale factors are as in equation (4), with addition of the following scales for the potential and density,   \begin{eqnarray} \Phi _\epsilon = c \frac{\epsilon ^2}{\tau ^2}, \ \ \rho _\epsilon = \frac{M_\epsilon }{\epsilon ^3}. \end{eqnarray} (37)However, in the limit that bar does not have a significant effect on the halo potential, these two parts of the potential decouple, and the Poisson equation above can be assumed to describe the bar alone. Then we substitute the asymmetric potential term from equation (2) to get the following expression for the mass density,   \begin{eqnarray} 4\pi \rho = \frac{4ce_b}{r^{2(1+\delta _b)}} \left( 1 - \delta _b^2 \right)\text{cos}(2\phi ), \end{eqnarray} (38) For this simple example, we assume a stationary bar, Ωb = 0.0. We assume that we can construct the density field of the bar with an appropriate radial distribution with nested p-ellipse orbits (in the rotating frame). Then we can use the p-ellipse equation to eliminate factors of r in the above. To second order, the expression for ρ becomes   \begin{eqnarray} \rho = a_o\ e\text{cos}(2\phi ) \left[1 + a_1 e \text{cos}(2\phi ) \right], \end{eqnarray} (39)with   \begin{eqnarray} a_o = \frac{c(1 - \delta _b^2)}{\pi p^{2(1+\delta _b)}} \frac{e_b}{e}, \ \ a_1 = 2(1+\delta _b)\left( \frac{1}{2}+\delta \right). \end{eqnarray} (40)If e varies slowly with radius, then the radial dependence of the density is given by the $$p^{2(1+\delta _b)}$$ term. 
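Equation (38) follows from applying the two-dimensional Laplacian of equation (36) to the stationary bar potential of equation (2), and this is easy to verify symbolically. A short sympy check (mine, not from the paper; c is carried through as in the text even though c = 1):

```python
import sympy as sp

r = sp.symbols('r', positive=True)
phi, c, e_b, d_b = sp.symbols('phi c e_b delta_b', real=True)

# Stationary (Omega_b = 0) dimensionless bar potential from equation (2)
Phi_b = -c * e_b * r ** (-2 * d_b) * sp.cos(2 * phi)

# Two-dimensional Laplacian in polar coordinates, as in equation (36)
lap = (sp.diff(r * sp.diff(Phi_b, r), r) / r
       + sp.diff(sp.diff(Phi_b, phi) / r, phi) / r)

# Compare with equation (38): 4*pi*rho = 4*c*e_b*(1 - delta_b**2)*r**(-2*(1+delta_b))*cos(2*phi)
target = 4 * c * e_b * (1 - d_b ** 2) * r ** (-2 * (1 + d_b)) * sp.cos(2 * phi)
print(sp.simplify(lap - target))   # -> 0
```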
To relate the angular dependence of the density contributed by a set of adjacent orbits to the azimuthal velocity on the orbits, we can assume that the more time a star spends on a given part of its p-ellipse orbit the greater its contribution to the density at that azimuth. Specifically, assume that the relative density change compared to that at ϕ = 0 on a given part of the orbit equals the opposite of the relative azimuthal velocity change. That is,   \begin{eqnarray} \frac{\rho (\phi ) - \rho _{\phi = 0}}{\rho _{\phi = 0}} &=& -\left( \frac{\dot{\phi }({\phi }) - \dot{\phi }_{\phi = 0}}{\dot{\phi }_{\phi = 0}} \right),\nonumber\\ or, \ \frac{\rho }{\rho _{\phi = 0}} &=& 2 - \frac{\dot{\phi }}{\dot{\phi }_{\phi = 0}} \end{eqnarray} (41)Now we can substitute equation (15) for $$\dot{\phi }$$, equate our two expressions for the density (equations (39) and (41)), and identify terms of common order in ecos(2ϕ). This yields the following zeroth-, first-, and second-order equations, after some manipulation,   \begin{eqnarray} &&{f_o = 2ef_1 - 2e^2f_2,}\nonumber\\ &&{\frac{c(1 - \delta _b^2)}{\pi p^{2(1+\delta _b)}} \frac{e_b}{e} = \rho _{\phi = 0} \left( \frac{-f_1}{f_o + ef_1 + e^2f_2} \right),}\nonumber\\ &&{ (1+\delta _b)\left( \frac{1}{2}+\delta \right) = \frac{f_2}{2f_1}.} \end{eqnarray} (42)These equations provide strong additional constraints for self-gravitating loop bars. The kinematic orbits, discussed in previous sections, were not so constrained. Free parameters included eb (or fo), δb, δ and the pattern speed Ωb, though closed loops generally only existed for isolated values of fo. As discussed in the next subsection, the extra constraints eliminate most of these solutions.

6.2 Self-consistent loop orbit bars?

Already, in the non-self-gravitating cases above, we found large areas of parameter space with no physical orbit solutions to the perturbation equations, and the lack of closed loops in the parameter space neighbourhood was confirmed with numerical orbit integrations. With the additional constraints from the Poisson equation, the regions of parameter space with loop orbits seem to be very small. This is evident just from the first of equations (42), which provides another relation between the fi coefficients of the azimuthal velocity (and e). For example, we can follow the procedure of the previous sections to determine a value of fo iteratively for given values of δb, δ, and Ωb that yields a loop orbit if it exists. Then the values of f1, f2 and the eccentricity parameter are also determined. However, the odds that these values incidentally satisfy the first of equations (42) will generally be very small. This consideration alone eliminates most of the loop solutions of the previous sections. The second of equations (42) can be viewed as an expression for the density variation along the ϕ = 0 axis, and so does not constrain the solutions. The third equation of the set is constraining in several ways. The first is that it gives a relation between δ and δb, so one free parameter is eliminated. Loop orbits were found previously only for certain values of those two variables (for a given pattern speed), and those solutions which do not happen to also solve this third condition will be eliminated. This constraint is not as stringent as that imposed by the first equation, since a wide range of δ, δb values produce loops. This third equation also provides some more detailed constraints (a small numerical check of the conditions is sketched below). Recall that the physical range for the values of δ and δb is about −1.0 to 0.5.
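As a simple illustration of how these conditions could be tested in practice, the sketch below (not from the paper; the numerical values are hypothetical placeholders) evaluates the residuals of the first and third of equations (42) for a candidate set of loop-orbit coefficients:

```python
# Residuals of the extra Poisson constraints of equations (42) for a candidate
# p-ellipse loop orbit. The values passed in below are hypothetical placeholders,
# not solutions from the paper; the point is only to show how a candidate
# (fo, f1, f2, e, delta, delta_b) set would be tested.

def poisson_residuals(fo, f1, f2, e, delta, delta_b):
    """Residuals of the first and third of equations (42); both should be ~0
    for a loop orbit that is also consistent with the Poisson equation."""
    r1 = fo - (2.0*e*f1 - 2.0*e**2*f2)                   # first of equations (42)
    r3 = (1.0 + delta_b)*(0.5 + delta) - f2/(2.0*f1)     # third of equations (42)
    return r1, r3

# Hypothetical example values (not taken from the paper):
print(poisson_residuals(fo=-0.3, f1=-0.8, f2=0.4, e=0.6, delta=-0.3, delta_b=-0.2))
```

For loop solutions obtained as in the previous sections, both residuals will generally be non-zero, which is the sense in which the Poisson constraints eliminate them.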
The value of the factor 1 + δb in that equation is never negative over this range of δb values. On the other hand, the values of f1 and f2 can have either sign. For example, in the case shown in Figs 1–4, where δ, δb > −0.5, f1 < 0 and f2 > 0. With these values the left side of the third equation is positive and the right negative, so solutions like those of Figs 1–4 cannot satisfy the Poisson constraints. We must have δb < −0.5 when f2/f1 < 0. The orbits shown in Figs 5–7 provide another example with δ, δb < −0.5. Here both f1 and f2 have positive values, so with these values the third equation is again violated, and more orbit solutions are precluded. Although we only have examples, not a rigorous proof, it appears that if the values of δ and δb are close to each other, then the Poisson constraint is violated. A third example is when δb < −0.5. Then the factor (1 + δb) will be small, and since the ratio f2/f1 is often of order unity, the factor f2/(2(1 + δb)f1) is likely to be of order unity or larger. Yet if δ > −0.5, then (0.5 + δ) is less than unity (unless the value of δ is near 0.5), so the Poisson constraint is not satisfied. (We have found cases like this that yield kinematic loop orbits, but did not describe them above.) A more systematic analysis of the various cases could be done, but these several examples make the point that the third of equations (42) is very constraining. Thus, we conjecture that self-consistent bars can be constructed from loop orbits alone, in two dimensions, only in rare cases, or not at all. Evidently, the orbits in self-consistent bars must be more complex in two dimensions. Moreover, it appears likely that the restrictions above would also apply to loop orbits with small librations; these are also not sufficiently complex. The restrictions above may, however, be loosened in a three-dimensional bar. Consider a very simple example of a cylindrical bar consisting of loops parallel to the disc plane, but not all lying within that plane. For example, suppose their vertical distribution was described by an exponential term, $$e^{-|z|/z_o}$$, in the density and potential. The z-derivatives of the three-dimensional Poisson equation would introduce $$z_o^2$$ terms in the last two constraint equations (42). Then the last of these constraints could be viewed as an equation for this new parameter. If zo was a function of radius, then more parameters, describing this variation, would be introduced. These additional parameters would allow a broader range of solutions, and at least in principle, allow for cylindrical loop bars. We will not explore this topic further here.

7 RAMIFICATIONS FOR TIDAL AND GASEOUS BARS

The possibility of generating bars in flyby galaxy collisions is natural because both the tidal force and the bar have the same basic symmetry. Noguchi (1987, 1988) first investigated this with numerical hydrodynamical simulations (also see Barnes & Hernquist 1991). The simulations of Gerin, Combes, & Athanassoula (1990) demonstrated that the process can either strengthen or weaken pre-existing bars (also see Miwa & Noguchi 1998). Moetazedian et al. (2017) and Zana et al. (2018) showed that interactions with small companions can result in delayed bar formation. Berentzen et al. (2004) found that bars regenerated in stellar discs, but not in dissipative discs. Instead the gas was efficiently funnelled to the central regions. Thus, modelling to date suggests that the formation of long-lived stellar bars can be triggered in interactions, but not gaseous bars.
What about gaseous bars in isolated, but bar-unstable discs? Several published high-resolution simulations in the literature partially address this question. For example, the models of Mayer & Wadsley (2004) produce small, eccentric, long-lived, gas bars, contained within larger stellar bars. The images of Fanali et al. (2015) and Spinoso et al. (2017) suggest similar results, i.e. very small and weak gas bars, though even these do not appear as long-lived as in Mayer & Wadsley (2004). Fanali et al. (2015) report that gas is emptied rapidly in a dead zone between co-rotation and inner Lindblad resonances, making it hard to feed nuclear activity at later times. These high-resolution results accord with several of the findings above for bars made of nested loop orbits. Specifically, that nested, non-intersecting orbits can exist in asymmetric external potentials, but that the most eccentric resonant orbits are relatively small, and that the strength of the asymmetric part of the external potential is relatively large. The decay of these model gas bars also agrees with the result that self-gravitating bars are unlikely to be stable and long-lived. The model of Renaud et al. (2015) shows a younger bar than in most previous published simulations. The gaseous part of this bar consists of a thin elliptical annulus, with spiral-like waves on the inside and outside. The inner spirals meet the annular bar near the minor axis of the latter. The morphology of this model gas bar suggests that it might consist of a group of orbits like those in Fig. 9, with crossings between orbits outside a very narrow range of parameters resulting in spiral waves. To return to the case of interaction induced bars, the orbital results of the previous sections provide a basis for the following picture of tidal bar evolution. First, in a prolonged prograde encounter, the large-scale coherence of the perturbing potential may excite nested eccentric resonant orbits with a common pattern speed, like those in Fig. 8. As explained in Section 5, the detailed structure of these orbits will depend on both the shape of the symmetric potential in the disc and the perturbing potential. In the gas, dissipation in nearby, non-resonant orbits may synchronize with the resonant orbit. The dissipation resulting from orbits much different than the resonant ones will drive radial flows, e.g. inside and outside an annular bar. When the companion galaxy leaves, or merges without substantially disrupting the bar, the external asymmetric force disappears, but the stars on resonant orbits may maintain a kinematic bar for some time. On the other hand, if the magnitude of the external potential was substantial, then its disappearance will perturb the orbits. If the bar has significant self-gravity, then it will have more crossing orbits, and dissipation in any remaining gas will aid its dissolution.

8 SUMMARY AND CONCLUSIONS

One major theme of this paper centres on the questions of when, or in what potentials, simple, closed loop orbits exist over a range of radii, as they evidently can in the well-studied Ferrers potentials. When this is the case there exists a dense, nested ensemble of such orbits that could themselves serve as a simple model of the bar. Moreover, these loop orbits generally will be parents of more complex orbit families, as functions of the orbit parameters. A second theme concerns the usefulness of scale-free, power-law potentials, viewed as elementary forms from which more complex potentials could be constructed.
The orbit structure of these simple potentials is less complex than that of many of the potentials used in numerical simulations. In the above, we used p-ellipse functions to find and approximate the simplest orbits in these potentials, in a perturbation limit. The p-ellipse approximation was originally found to provide very good fits to precessing, eccentric orbits in symmetric power-law potentials, as these fits do not drift with time (Struck 2006), since the approximation tracks the orbital precession. It is reasonable to expect that the approximation might also be useful for fitting resonant loop orbits (i.e. closed orbits in the pattern frame) in potentials with a modest asymmetric component, like a bar potential. Since in such orbits the ratio of the precession and pattern frequencies is a rational number, it seems that any resonant orbit should have a modified precession rate that could be captured by a p-ellipse approximation. This conjecture was shown to be correct in Section 4, where closed orbits in the pattern frame were found to be well approximated by p-ellipses (see Figs 1, 2, and 5). Good fits were found in potentials with both shallow and steeply rising rotation curves. Differences and similarities between the orbits in these two cases were discussed in Section 4.3. One of the most interesting results is that p-ellipse, loop (m = 2) orbits are not predicted to exist in (power-law) falling rotation curve potentials by the perturbation analysis. However, loop orbits are found numerically in such potentials. The second-order p-ellipse solutions of Section 3 seem to fail in these potentials. This phenomenon will be explored in a later paper. Even in the case of rising rotation curve potentials, the exploratory calculations of Section 4 show complexities. Fig. 1 shows the closed, resonant orbit in a sample case. Fig. 2 shows that even a small deviation from the initial conditions of Fig. 1 yields a smaller, unclosed, and more eccentric orbit. Although it cannot capture the small librations of this orbit, the p-ellipse approximation is still a good fit to the mean orbit. Figs 3 and 4 show that further small variations in the initial conditions yield progressively larger, and rounder orbits. These orbits are characterized by clear subharmonic frequencies. The p-ellipse approximation can only capture the innermost loops of these orbits. Given the proximity to the resonant orbit, the subharmonics may be the result of ‘beating’ between the precession and pattern frequencies. These subharmonics may also relate to the β frequency in the epicyclic perturbation analysis of Binney & Tremaine (2008, section 3.3). The initial conditions around the simple closed loop are dense with resonances between the subharmonic and primary orbital frequency, which produce the closed multi-loop orbits like those in Figs 3 and 4. Multi-frequency p-ellipse approximations to the closed, multi-loop orbits, using the equations of Section 3.2, will be investigated in a future paper. Although the resonant, multi-loop orbits cannot be modelled by simple, p-ellipses, solutions to the p-ellipse constraint equations guide us to good estimates of their initial conditions by simply varying the parameter fo. Resonant orbits from a small region of parameter space, like those in Figs 1–4, can be combined to produce a model bar in the given potential.
This provides a different technique to the more traditional one of constructing orbits via perturbation ellipses around Lagrange points in a given potential (see Binney & Tremaine 2008, section 3.3). Figs 4 and 8 show a surprising feature of closed, multi-loop orbits. Half of their innermost loops can be well fit by a p-ellipse with nearly the same initial conditions, and these segments could support the bar. However, their outermost loops are generally much more circular, and in isolation would look like segments of a disc orbit unrelated to the bar. In a given power-law potential (specified by the values of δ and δb) each approximate, resonant orbit requires a specific value of the asymmetric amplitude eb to satisfy the equation of motion constraints. For an ensemble of such orbits, making up a model bar, a specific radial variation of eb is required. It is unlikely that the needed pattern of eb would obtain for arbitrary initial disc structures. However, gas clouds will prefer more or less concentric loop orbits. Dissipative processes might drive bar parameters (e.g. a combination of external and internal contributions to the asymmetric potential) to values that support the required variation. Note that the constraint equations are such that the required values of eb do not depend on the pattern speed. The zeroth-order orbital frequency parameter, fo, does. An alternative would be radial variations of the potential profile index (δ or δb). Such variations must be small, or the perturbation equations have to be modified. An example was discussed in Section 5 and illustrated in Fig. 8. If, as in this example, the rotation curve becomes flat or slightly declining, subharmonic frequencies become dominant, and we get large, multi-loop orbits. Going beyond external asymmetric potentials, in the construction of a model self-gravitating bar the Poisson eq. can be viewed as a prescription for the density distribution of concentric loop orbits making up the bar (see Section 6.1). It also imposes strong additional constraints in a perturbation approximation. In fact, orbital solutions to the perturbation equations with the additional constraints from the Poisson equation given in Section 6.1 seem to be very rare. Evidently two-dimensional bars must be based on more complex orbits, or perhaps most bars have a three-dimensional structure. Altogether, these results suggest a number of interesting general conclusions. Perhaps the most important of these is that it is hard to make model bars from single-loop orbits alone, and thus, bars and oval distortions made primarily from gas are unlikely to form in galaxy discs. This conclusion is not too surprising since bars are observed to be primarily stellar, and an extensive literature of models shows that they generally have a wide range of orbits, including chaotic ones (e.g. Contopoulos 2002; Weinberg 2015a,b and references therein). On the other hand, the analytic and numerical explorations above provide some insights as to why this is so. These insights include the fact that resonant, loop orbits (approximated by p-ellipses) are only found in limited regions of parameter space, at least in the perturbation approximations of Section 3. In non-self-gravitating cases, these solutions tend to have large values of the asymmetric amplitude eb, suggesting the external potential has a strong asymmetric part. 
Technically, this violates one of the perturbation approximations, which assumed that eb ≃ e, but the good fits between numerical and analytic orbits at low to moderate values of e suggest the consequences are not serious. The bar orbits illustrated above are wide, even at quite high values of the eccentricity parameter e. This suggests that more complex orbits (e.g. like those described in Williams & Evans 2017) are needed to support narrow bars. Loop orbit solutions in two-dimensional self-gravitating bars are at best very rare. It is not surprising that significant self-gravity would lead to more complex orbits. This may be a factor in understanding why bars are effective at quenching gas-rich discs (e.g. Khoperskov et al. 2018). Beyond this initial exploration, there are many directions to pursue using the p-ellipse approximation tool for the study of orbits in asymmetric galaxy potentials. For example, p-ellipse approximations with multiple frequencies likely converge much more quickly than conventional Taylor expansions in cos(ϕ). Specifically, the case of librating p-ellipse orbits with an additional subharmonic frequency will be described in a later paper.

ACKNOWLEDGEMENTS

I am very grateful for the insights gained from a correspondence over the last decade on orbits in galaxies with the late Donald Lynden-Bell. I acknowledge the use of NASA’s Astrophysics Data System.

REFERENCES

Athanassoula E., 1992, MNRAS, 259, 328
Athanassoula E., 2013, in Falcón-Barroso J., Knapen J. H., eds, Secular Evolution in Galaxies. Cambridge University Press, Cambridge, p. 305
Barnes J. E., Hernquist L. E., 1991, ApJ, 370, L65
Berentzen I., Athanassoula E., Heller C. H., Fricke K. J., 2004, MNRAS, 347, 220
Bertin G., 2000, Dynamics of Galaxies. Cambridge Univ. Press, Cambridge
Binney J., Tremaine S., 2008, Galactic Dynamics. Princeton Univ. Press, Princeton, NJ
Christodoulou D. M., Kazanas D., 2017, preprint (arXiv:1707.04937)
Contopoulos G., 2002, Order and Chaos in Dynamical Astronomy. Springer, New York
Contopoulos G., Grosbol P., 1989, A&ARv, 1, 261
Contopoulos G., Mertzanides C., 1977, A&A, 61, 477
Ernst A., Peters T., 2014, MNRAS, 443, 2579
Fanali R., Dotti M., Fiacconi D., Haardt F., 2015, MNRAS, 454, 3641
Freeman K., 1966a, MNRAS, 133, 47
Freeman K., 1966b, MNRAS, 134, 1
Freeman K., 1966c, MNRAS, 134, 15
Gajda G., Łokas E. L., Athanassoula E., 2016, ApJ, 830, 108
Gerin M., Combes F., Athanassoula E., 1990, A&A, 230, 37
Jung C., Zotos E. E., 2015, PASA, 32, e042
Khoperskov S., Haywood M., Di Matteo P., Lehnert M. D., Combes F., 2018, A&A, 609, A60
Lynden-Bell D., 1979, MNRAS, 187, 101
Lynden-Bell D., 1996, in Sandquist Aa., Lindblad P. O., eds, Barred Galaxies and Circumnuclear Activity, Lecture Notes in Physics, 474. Springer, New York, p. 7
Lynden-Bell D., 2010, MNRAS, 402, 1937
Manos T., Machado R. E. G., 2014, MNRAS, 438, 2201
Mayer L., Wadsley J., 2004, MNRAS, 347, 277
Miwa T., Noguchi M., 1998, ApJ, 499, 149
Moetazedian R., Polyachenko E. V., Berczik P., Just A., 2017, A&A, 604, A75
Noguchi M., 1987, MNRAS, 228, 635
Noguchi M., 1988, A&A, 203, 259
Renaud F. et al., 2015, MNRAS, 454, 3299
Sellwood J. A., 2014, Rev. Mod. Phys., 86, 1
Sellwood J. A., Wilkinson A., 1993, Rep. Prog. Phys., 56, 173
Spinoso D., Bonoli S., Dotti M., Mayer L., Madau P., Bellovary J., 2017, MNRAS, 465, 3729
Struck C., 2006, AJ, 131, 1347
Struck C., 2015a, MNRAS, 446, 3139
Struck C., 2015b, MNRAS, 450, 2217
Valluri S. R., Wiegert P. A., Drozd J., Da Silva M., 2012, MNRAS, 427, 2392
Valluri M., Shen J., Abbott C., Debattista V. P., 2015, ApJ, 818, 141
Weinberg M. D., 2015a, preprint (arXiv:1508.06855)
Weinberg M. D., 2015b, preprint (arXiv:1508.05959)
Williams A. A., Evans N. W., 2017, MNRAS, 469, 4414
Zana T., Dotti M., Capelo P. R., Bonoli S., Haardt F., Mayer L., Spinoso D., 2018, MNRAS, 473, 2608

© 2018 The Author(s). Published by Oxford University Press on behalf of the Royal Astronomical Society.

Orbits in elementary, power-law galaxy bars – 1. Occurrence and role of single loops
Monthly Notices of the Royal Astronomical Society, Volume 476 (2), May 1, 2018, doi: 10.1093/mnras/sty405

ABSTRACT

Orbits in galaxy bars are generally complex, but simple closed loop orbits play an important role in our conceptual understanding of bars. Such orbits are found in some well-studied potentials, provide a simple model of the bar in themselves, and may generate complex orbit families. The precessing, power ellipse (p-ellipse) orbit approximation provides accurate analytic orbit fits in symmetric galaxy potentials. It remains useful for finding and fitting simple loop orbits in the frame of a rotating bar with bar-like and symmetric power-law potentials. Second-order perturbation theory yields two or fewer simple loop solutions in these potentials. Numerical integrations in the parameter space neighbourhood of perturbation solutions reveal zero or one actual loops in a range of such potentials with rising rotation curves.
These loops are embedded in a small parameter region of similar, but librating orbits, which have a subharmonic frequency superimposed on the basic loop. These loops and their librating companions support annular bars. Solid bars can be produced in more complex potentials, as shown by an example with power-law indices varying with radius. The power-law potentials can be viewed as the elementary constituents of more complex potentials. Numerical integrations also reveal interesting classes of orbits with multiple loops. In two-dimensional, self-gravitating bars, with power-law potentials, single-loop orbits are very rare. This result suggests that gas bars or oval distortions are unlikely to be long-lived, and that complex orbits or three-dimensional structure must support self-gravitating stellar bars.

Key words: galaxies: kinematics and dynamics

1 INTRODUCTION

The idea that galaxy bars are built on a skeleton of simple closed orbits, elongated along the bar (i.e. the x1 family), and fleshed out by similar orbits with constrained librations (Lynden-Bell 1979), is conceptually simple, and popular. Athanassoula (2013) states, ‘The bar can then be considered as a superposition of such orbits,..., which will thus be the backbone of the bar’. Images of nested sets of such orbits derived from numerical models reinforce that idea (see Athanassoula 1992; Binney & Tremaine 2008). Nonetheless, there are very few analytic models of stellar bars and oval distortions in galaxies, composed of simple, nested orbits (see Contopoulos 2002; Binney & Tremaine 2008). This is unfortunate since such models could facilitate the study of bars, and advance our understanding of them. Furthermore, much of the study of orbits in numerical simulations has focused on the special cases of fixed potentials with bars of Ferrers (e.g. Athanassoula 1992) or Freeman type (Freeman 1966a,b,c). Williams & Evans (2017) point out that, because of their homogeneous density profiles, these models are not especially realistic. These latter authors study a family of very different models where the bar components are represented by thin, dense needles. Indeed, they opine that ‘models of bars... remain rather primitive’ and ‘there is ample scope for the development of new models...’. In their models of weak bars, Williams & Evans (2017) do find that, as in the classic picture, simple, loop orbits (type x1, x4) dominate. However, their models of strong bars are dominated by much more complex ‘propeller’ orbits. Families of complex and chaotic orbits are found in many numerical simulations with either fixed or self-consistent potentials (e.g. Sellwood & Wilkinson 1993; Ernst & Peters 2014; Manos & Machado 2014; Jung & Zotos 2015; Valluri et al. 2015; Gajda, Łokas & Athanassoula 2016), and those families are likely to be just as important a constituent of bars as simple loop orbits. Thus, the question arises of whether the classic picture of bars as nested loop orbits has any great relevance beyond special cases or illustrative toy models. On the other hand, gas clouds in bars cannot pursue complex orbits without generating shocks and strong dissipation. Gas may be quickly expelled from strong bars dominated by complex orbits, but may play an important role in weak or forming bars. Thus, beyond generalizing the Williams & Evans (2017) models, it would be useful to know when, and in what potentials simple, nested, loop orbits can dominate the bar. Galaxies, or even limited radial regions in galaxy discs, have a wide range of potential forms.
It would be useful to find relationships between the structure of potentials (symmetric and asymmetric) and the orbit types, especially simple orbit types, that they support. This is another area where our knowledge ‘remain(s) rather primitive’. In this paper I will undertake a modest exploration of this territory by studying the simplest orbits in simple power-law potentials. More realistic potentials may be decomposed into sums of power-law potential approximations, and we may expect that individual terms in these sums will bring their corresponding orbits into regions where they dominate. There are several ways to study closed orbits in bars (see Bertin 2000; Binney & Tremaine 2008). Perhaps the most direct method is to seek them in numerical models (e.g. Contopoulos & Grosbol 1989; Athanassoula 1992; Miwa & Noguchi 1998). A second method leverages action-angle variables in a perturbation formalism, which in limiting cases fits well with the epicyclic orbit approximation (e.g. Lynden-Bell 1979; Sellwood & Wilkinson 1993; Binney & Tremaine 2008; Sellwood 2014). In this paper we will use a related method, analytic (p-ellipse) orbit approximations in a perturbation approach. In Struck (2006) it was shown that a precessing power-law ellipse (p-ellipse) approximation is quite accurate up to moderate eccentricities in a wide range of power-law potentials. There are other good approximations available, e.g. the Lambert W function discussed in Valluri et al. (2012), but p-ellipses are especially simple. In a later work (Struck 2015a) it was found that with simple modifications, i.e. to the precession frequencies, p-ellipse approximations can also approximate high-eccentricity orbits remarkably well. Because of this frequency modification there is a continuum of Lindblad resonances parametrized by eccentricity for highly eccentric orbits. Ensembles of eccentric resonant orbits of different sizes, excited impulsively, could have equal precession periods and make up the backbone of kinematic bars or spiral arms with constant pattern speeds in symmetric halo potentials (Struck 2015b). This idea motivated the work below, but we will see that in many power-law potentials with a bar component, nearly radial, single-loop orbits in the bar frame either do not exist or are very small. The Struck (2015b) paper did not address the question of whether these or other simple closed orbits also exist in potentials with a fixed non-axisymmetric component, e.g. due to a prolonged tidal component or an oval or bar-like halo. To use approximate p-ellipse orbits to investigate this, it must first be shown when, or under what conditions, p-ellipses can approximate orbits in non-axisymmetric gravitational potentials. It will be demonstrated below that in the case of simple, closed loop orbits the answer is the same as in the case of symmetric potentials – the p-ellipse approximation is again quite accurate up to moderate eccentricities in a wide range of potentials (Section 4). It will also be shown by example that in the immediate neighbourhood of resonant loop orbits, there exist other orbits that are very similar, but modestly librating (Section 4). On average, these orbits can also be described by p-ellipses, and ultimately may be more completely approximated by p-ellipses with added frequencies to represent the libration (see Section 3.2).
The parameter space near the simple resonant loop is evidently densely populated with closed versions of these librating orbits, and they might be used to form the backbone of a model bar (Section 5). This suggestion is much as proposed by Lynden-Bell (1979); see also Contopoulos & Mertzanides (1977) and Lynden-Bell (1996) for discussions of resonant orbits and bar formation. (Lynden-Bell 1979, in an appendix, also describes a wider range of orbits that would fit into his formalism.) We will see in Sections 4 and 5 that many potentials with symmetric and barred components represented by single power laws do not have more than one closed (m = 2) loop orbit. Even when accompanied by their librating family of nearby orbits, we would only expect hollow, annular stellar bars to exist in these cases. A potential consisting of multiple power-law parts, each dominating in successive annular ranges, can produce a nested series of closed orbits, and their librating companions. This can make a more robust bar (see Section 5 and Fig. 8). The excitation of resonant orbits by tidal disturbances or asymmetric haloes may generate self-gravitating bars or waves (Noguchi 1987, 1988; Barnes & Hernquist 1991). It is not clear, however, whether, as the bars grow to non-linearity and acquire significant self-gravity, the simple, closed orbits will continue to exist, or whether they can be arranged to form a stable, self-gravitating bar, i.e. whether the Poisson equations, as well as the equations of motion, can be approximately solved by ensembles of simple, loop orbits and modestly librating orbits. In fact, we will see in Section 6 that the simple planar, p-ellipse approximation at second order has very few solutions with the additional Poisson constraints. As discussed in the final two sections, these results imply that long-lived, self-consistent bars or oval distortions cannot have a substantial gas component, because there would be strong dissipation. When such bars are made of stars, essentially all orbits must librate, or have complex multi-loop forms, as seen in published simulations.

2 BASIC EQUATIONS AND P-ELLIPSE APPROXIMATIONS

2.1 Basic equations

In this work we only consider orbits in the two-dimensional central plane of a galaxy disc, and generally adopt a symmetric, power-law, halo potential of the form,   \begin{eqnarray} \Phi = \frac{-GM_{\epsilon }}{2{\delta }\epsilon } \left(\frac{\epsilon }{r} \right) ^{2\delta }. \end{eqnarray} (1)In addition, we will include a non-axisymmetric (bar) part of the potential of the simple form,   \begin{eqnarray} \Phi _b = \frac{-GM_{\epsilon }}{\epsilon } \left(\frac{\epsilon }{r} \right) ^{2\delta _b} {e_b} \text{cos} \left( 2(\phi - \phi _o) + \Omega _b t \right), \end{eqnarray} (2)where r and ϕ are the radial and azimuthal coordinates, respectively, in the disc, eb is an amplitude parameter of the asymmetric potential, δ and δb give the radial dependence of the symmetric and non-axisymmetric potentials, and Ωb is the rotation frequency of the latter. The scale length is ε and Mε is the halo mass contained between the radius r = ε and some minimum radius. The above is a very simple form for a bar potential, with relatively few parameters, and no characteristic length (e.g. cut-off radius). (The rotation-curve shapes implied by the symmetric part of the potential are sketched below.)
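For orientation, the following is a minimal sketch (not part of the paper) of the circular speed implied by the symmetric potential of equation (1). In the dimensionless units defined below (equation 4, with c = 1), the circular speed is v_c = r^(−δ), which is why δ < 0 corresponds to a rising rotation curve, δ = 0 to a flat one, and δ > 0 to a falling one; the particular index values used in the example are illustrative.

```python
# Circular speed for the symmetric power-law potential of equation (1), in the
# dimensionless units of equation (4): v_c = sqrt(r dPhi/dr) = sqrt(c) * r**(-delta).
import numpy as np

def v_circ(r, delta, c=1.0):
    """Dimensionless circular speed for the power-law potential of equation (1)."""
    return np.sqrt(c) * r**(-delta)

r = np.linspace(0.5, 10.0, 5)
for delta in (-0.3, 0.0, 0.25):      # rising, flat, and falling rotation curves
    print(delta, np.round(v_circ(r, delta), 3))
```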
Then the equations of motion for stars orbiting in the disc with the adopted potentials are   \begin{eqnarray*} \ddot{r} &=& \frac{-GM_{\epsilon }}{\epsilon ^2} \left( \frac{\epsilon }{r} \right) ^{1+2\delta } - 2{\delta _b}e_b \frac{GM_{\epsilon }}{\epsilon ^2} \left( \frac{\epsilon }{r} \right) ^{1+2\delta _b}\nonumber\\ &&\times \cos {\left( 2(\phi - \phi _o) + \Omega _b t \right)} + r \dot{\phi }^2, \end{eqnarray*}   \begin{eqnarray} \ddot{\phi } + \frac{2\dot{r}\dot{\phi }}{r} &=& -\frac{1}{r^2} \frac{\mathrm{\partial} \Phi }{\mathrm{\partial} \phi } = -2e_b \frac{GM_{\epsilon }\Omega _b}{\epsilon ^3} \left( \frac{\epsilon }{r} \right)^{2+2\delta _b}\nonumber\\ &&\times \sin {\left( 2(\phi - \phi _o) + \Omega _b t \right)}. \end{eqnarray} (3) Next, we derive dimensionless forms of these equations by substituting the dimensionless (overbar) variables and dimensionless constants defined as   \begin{eqnarray} \bar{r} = r/\epsilon , \ \bar{t} = t/\tau , \ c = \frac{GM_\epsilon \tau ^2}{ \epsilon ^3}, \ c_b = c e_b. \end{eqnarray} (4)For additional simplification we will set the value of the time-scale to $$\tau ^{-2} = \frac{GM_\epsilon }{\epsilon ^3}$$, so that c = 1.0. Despite this choice, we will carry the c factor through much of the analysis below for clarity. Then the dimensionless equations of motion are   \begin{eqnarray*} \ddot{\bar{r}} = -c \bar{r}^{-\left( {1+2\delta }\right)} - 2{\delta _b}c_b \bar{r}^{-\left( {1+2\delta _b} \right)} \cos {\left( 2(\bar{\phi } - \phi _o) + \Omega _b t \right)} + \bar{r} \dot{\bar{\phi }}^2, \end{eqnarray*}   \begin{eqnarray} \ddot{\bar{\phi }} + \frac{2\dot{\bar{r}}\dot{\bar{\phi }}}{\bar{r}} = -2c_b \Omega _b \bar{r}^{-\left( {2+2\delta _b} \right)} \sin {(\left( 2(\bar{\phi } - \phi _o) + \Omega _b t \right)}. \end{eqnarray} (5)Henceforth we will omit the overbars and assume all variables are dimensionless. We will also assume that the initial value of the azimuth (ϕo) is zero. The next step towards a more workable set of equations is to go into a reference frame rotating with the bar or pattern speed, Ωb. In this frame the dimensionless radii are the same, and in terms of the previous values, the azimuthal coordinates are ϕ΄ = ϕ − Ωbt. We will henceforth drop the primed notation, so that the equations of motion in the rotating frame are   \begin{eqnarray*} \ddot{r} = -c r^{-\left( {1+2\delta }\right)} - 2{\delta _b}c_b r^{-\left( {1+2\delta _b} \right)} \cos {(2\phi )} + r \left( \dot{\phi } + \Omega _b \right)^2, \end{eqnarray*}   \begin{eqnarray} \ddot{\phi } + \frac{2\dot{r} \left( \dot{\phi } + \Omega _b \right)}{r} = -2c_b r^{-\left( {2+2\delta _b} \right)} \sin (2\phi ) \end{eqnarray} (6)(see e.g. Binney & Tremaine 2008, equations 3.135a,b). 2.2 p-ellipse approximations As described in the Introduction section, we seek approximate solutions of these equations, of the form,   \begin{eqnarray} \frac{1}{r} = \frac{1}{p} \left[ 1 + e \cos \left( m{\phi } \right) \right]^{\frac{1}{2} + \delta }, \end{eqnarray} (7)which were studied in Struck (2006, Paper 1), named precessing, power-law ellipses, or ‘p-ellipses’, and found to be quite accurate despite their simplicity (for other approximations, see Valluri et al. 2012). Here the orbital scale is given by the semi-latus rectum p, m is a frequency ratio, and e is the eccentricity parameter. 
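As a concrete illustration of how these pieces fit together, the short sketch below (not from the paper) integrates equations (6) numerically in the rotating frame with scipy and evaluates the p-ellipse radius of equation (7) for comparison. The potential indices and pattern speed match values used in the examples of Section 4 (δ = −0.3, δb = −0.2, Ωb = 0.315), but the bar amplitude, the p-ellipse parameters, and the initial angular velocity are hypothetical placeholders rather than fitted values, so the integrated orbit is only roughly comparable to the p-ellipse and is not a fitted closed loop.

```python
# Minimal sketch: numerical integration of equations (6) in the bar (pattern) frame,
# plus the p-ellipse radius of equation (7). Parameter values are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

c = 1.0                          # time-scale choice of equation (4) makes c = 1
delta, delta_b = -0.3, -0.2      # power-law indices of the symmetric and bar potentials
e_b, Omega_b = 0.5, 0.315        # bar amplitude (hypothetical) and pattern speed
c_b = c * e_b

def rhs(t, y):
    """Right-hand side of equations (6); y = (r, dr/dt, phi, dphi/dt)."""
    r, vr, phi, vphi = y
    r_acc = (-c * r**(-(1.0 + 2.0*delta))
             - 2.0*delta_b*c_b * r**(-(1.0 + 2.0*delta_b)) * np.cos(2.0*phi)
             + r * (vphi + Omega_b)**2)
    phi_acc = (-2.0*vr*(vphi + Omega_b)/r
               - 2.0*c_b * r**(-(2.0 + 2.0*delta_b)) * np.sin(2.0*phi))
    return [vr, r_acc, vphi, phi_acc]

def p_ellipse_r(phi, p, e, delta, m=2):
    """Radius of the p-ellipse approximation, equation (7)."""
    return p * (1.0 + e*np.cos(m*phi))**(-(0.5 + delta))

# Initial conditions in the style of Section 4: phi = 0, dr/dt = 0, r from the
# p-ellipse; p, e and the initial dphi/dt below are hypothetical placeholders.
p, e = 9.0, 0.6
y0 = [p_ellipse_r(0.0, p, e, delta), 0.0, 0.0, -0.59]
sol = solve_ivp(rhs, (0.0, 100.0), y0, rtol=1e-10, atol=1e-10)

r_num = sol.y[0]
print('numerical orbit: r_min = %.3f, r_max = %.3f' % (r_num.min(), r_num.max()))
print('p-ellipse:       r(0)  = %.3f, r(pi/2) = %.3f'
      % (p_ellipse_r(0.0, p, e, delta), p_ellipse_r(np.pi/2, p, e, delta)))
```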
Note that while the form of equation (7) is the same as in Struck (2006), and subsequent p-ellipse papers, the physical meaning of the m parameter is different in the rotating coordinate system, though still a function of the ratio of precession and orbital frequencies. In the following we will focus on the case where this solution is in resonance with the bar driving force, i.e. with m = 2. If such solutions can provide accurate approximations, as in the case of symmetric potentials, then they demonstrate continuity with orbits of the purely symmetric part of the potential (since parameters from the bar potential are not included). They might also provide a useful tool for studying orbit transformation in the process of bar formation. However, it is not a priori clear how well the p-ellipse approximation will work for orbits that change their angular momenta over orbital segments (conserving it only over the whole period in the case of closed resonant orbits). Generally, equation (7) only yields closed or open-precessing loop forms except at high eccentricity. As detailed in Struck (2015a), in the case of nearly radial orbits, the addition of a harmonic term (in cos(2mϕ)) to equation (7) significantly improves the accuracy of the orbit approximation. We will not pursue this refinement in this paper, and to keep the algebra manageable, will generally neglect harmonic terms in the perturbation analyses below. However, it has been clear since the early work of Lynden-Bell (1979) that classes of orbits in bars can be described as librating ovals. Analytic approximation of these forms requires more than a single frequency. Thus, we will explore the equations with a second frequency term (m) to get an approximate solution of the form,   \begin{eqnarray} \frac{1}{r} &=& \frac{1}{p} \left[ 1 + e \cos \left( m{\phi } \right) + c_2 e \cos \left( 2{\phi } \right) + c_x e^2 \cos \left( (2-m){\phi } \right) \right]^{\frac{1}{2} + \delta }\!\!\!\!\!\!\!\!\!,\nonumber\\ \end{eqnarray} (8) The new frequency is m (here redefined and ≠ 2), and the final term in square brackets must be included since such factors will be generated by cross terms in the equations of motion, so the solution must contain terms to balance them. The value of the frequency m may be close to 2. In such cases, the frequency 2 − m will generally have a small value, and can approximate a subharmonic of the driving frequency. This can generate librating, near resonant loop approximations to numerical orbits, as well as more complex forms.

3 PERTURBATION ANALYSES

In this section we develop the p-ellipse approximations to the orbits satisfying equations (6), and derive the corresponding relations between the orbital parameters. Assuming that the orbits are not radial, so the eccentricity e is a relatively small parameter, we can expand in that parameter. While the radial equation above has zeroth-order terms, all the terms in the azimuthal equation are of first order and higher. For reasonable accuracy up to moderate eccentricities, we carry out the expansion to second order. We consider the two approximate solutions given by equations (7) and (8) separately in the following two subsections. Readers not interested in the details of these calculations may proceed to Section 4.

3.1 Single-frequency case

In this subsection we consider the perturbation expansion of the resonant solution equation (7).
The second-order approximations to that p-ellipse solution and its first two time derivatives are   \begin{eqnarray} &&{\frac{r}{p} \simeq \ 1 - \left(\frac{1}{2}+\delta \right) e\ \text{cos}(2\phi)} \nonumber\\ &&\quad \quad \quad + \,\frac{1}{2} \left(\frac{1}{2}+\delta \right) \left(\frac{3}{2}+\delta \right) e^2 \text{cos}^2(2\phi ), \end{eqnarray} (9)  \begin{eqnarray} \frac{1}{p} \frac{\text{d}r}{\text{d}t} &=& \frac{\dot{r}}{p} \simeq\nonumber\\ &&\left[ \left(\frac{1}{2} + \delta \right) 2e\ \text{sin}(2\phi ) \left[ 1 - \left(\frac{3}{2}+\delta \right) e\ \text{cos}(2\phi ) \right] \right] \dot{\phi },\nonumber\\ \end{eqnarray} (10)  \begin{eqnarray} \frac{\ddot{r}}{p} &\simeq& \left\lbrace \left(\frac{1}{2} + \delta \right) \left(\frac{3}{2} + \delta \right) 4 e^2 + \left(\frac{1}{2} + \delta \right) 4 e \text{cos}(2\phi ) \right.\nonumber\\ &&\left. -\, \left(\frac{1}{2} + \delta \right) \left(3 + \delta \right) 4 e^2 \text{cos}^2(2\phi ) \right\rbrace \dot{\phi }^2 \nonumber\\ &&+\, \left(\frac{1}{2} + \delta \right) 2 e\ \text{sin}(2\phi ) \ddot{\phi }. \end{eqnarray} (11)In the last of these equations, an additional term in $$e^2 \ddot{\phi }$$ was dropped on the assumption that $$\ddot{\phi }$$ is itself of first or higher order. This assumption will be confirmed below. This equation still contains one term in $$\ddot{\phi }$$. Substituting the expressions above for r and $$\dot{r}$$ into the second of equations (6), we obtain a first-order approximation for $$\ddot{\phi }$$,   \begin{eqnarray} \ddot{\phi } \simeq -4 \left(\frac{1}{2} + \delta \right) \left( \dot{\phi } + \Omega _b \right) \dot{\phi } e\ \text{sin}(2\phi ) - \frac{2c_b}{p^{2(1+\delta _b)}} \text{sin}(2\phi ), \end{eqnarray} (12)where we assume that cb is comparable to or less than e. This equation can then be substituted into equation (11), and the resulting form substituted for in the first of equations (6). The following approximations of the power-law terms can also be substituted.   \begin{eqnarray} \left(\frac{p}{r} \right)^{1+2\delta } \simeq 1 &+& 2 \left(\frac{1}{2} + \delta \right)^2 e\ \text{cos}(2\phi )\nonumber\\ &+& \left(\frac{1}{2} + \delta \right)^2 \left[ 2\left(\frac{1}{2} + \delta \right)^2 - 1 \right] \ e^2\ \text{cos}^2(2\phi ),\nonumber \\ \end{eqnarray} (13)  \begin{eqnarray} &&{\left( \frac{p}{r} \right)^{2(1+\delta _b)} \simeq 1 + 2 \left(\frac{1}{2} + \delta \right) \left(\frac{1}{2} + \delta _b \right) e\ \text{cos}(2\phi )}\nonumber\\ &&+ \,\left(\frac{1}{2} + \delta \right) \left(\frac{1}{2} + \delta _b \right) \left[ 2\left(\frac{1}{2} + \delta \right) \left(\frac{1}{2} + \delta _b \right) - 1 \right]\nonumber\\ && \times \,\ e^2\ \text{cos}^2(2\phi), \end{eqnarray} (14) After substituting equations (9)–(14), the radial equation in equation (6) yields a second-order expression for $$\dot{\phi }^2$$, which is equivalent to that obtained from angular momentum conservation in symmetric potentials. We can generally approximate this variable, like the radius, in powers of e cos(2ϕ),   \begin{eqnarray} \dot{\phi } \simeq f_o + f_1 e \text{cos}(2\phi ) + f_2 e^2 \text{cos}^2 (2\phi ), \end{eqnarray} (15)where fi are constant coefficients. With this final substitution, the radial equation yields a constraint equation at each order of ecos(2ϕ). 
The equation derived from the constant terms is   \begin{eqnarray} &&{4 \left( \frac{1}{4} - \delta ^2 \right) e^2 f_o^2 -8 \left( \frac{1}{2} + \delta \right) ^2 e^2 \Omega _b f_o} \nonumber\\ &-& 4 e e_b \left( \frac{1}{2} + \delta \right) q_b = -q + \left( f_o + \Omega _b \right)^2. \end{eqnarray} (16)where we use the following, simplifying change of variables,   \begin{eqnarray} q = c p^{-2(1+\delta )}, \ \ q_b = c p^{-2(1+\delta _b)}. \end{eqnarray} (17)Equation (16) can be viewed as a quadratic in the coefficient fo. Then the equation derived from first-order terms, i.e. terms in ecos(2ϕ), can be solved for the coefficient f1. It is,   \begin{eqnarray} 2 \left( f_o + \Omega _b \right) f_1 &=& 4 \left( \frac{1}{2} + \delta \right) f_o^2 + 2 \left( \frac{1}{2} + \delta \right)^2 q\nonumber\\ &&+\, 2 \delta _b \frac{e_b}{e} q_b + \left( \frac{1}{2} + \delta \right) \left( f_o + \Omega _b \right)^2. \end{eqnarray} (18)The equation in terms in e2cos2(2ϕ) can be solved for the coefficient f2,   \begin{eqnarray} 2 \frac{\left( f_o + \Omega _b \right)}{\left( \frac{1}{2} + \delta \right)} f_2 &=& 8f_o^2 + 10f_o f_1 + 8\left( \frac{1}{2} + \delta \right) f_o \Omega _b\nonumber\\ &&+\, 2f_1 \Omega _b - \frac{1}{2} \left( \frac{3}{2} + \delta \right) \left( f_o + \Omega _b \right)^2 - \frac{f_1^2}{\left( \frac{1}{2} + \delta \right)} \nonumber\\ && +\, \left( \frac{1}{2} + \delta \right) \left[ 2 \left( \frac{1}{2} + \delta \right)^2 - 1 \right] q\nonumber\\ && +\, 4 \left[ 1 + \frac{1}{2}\delta _b + \delta _b^2 \right] \frac{e_b}{e} q_b. \end{eqnarray} (19)This completes the set of three equations for the fi coefficients. However, we still need to use the azimuthal equation (6) to solve for the p-ellipse variables, p and e. To begin, equation (15) can be differentiated to obtain an expression for the second derivative, $$\ddot{\phi }$$, which is   \begin{eqnarray} \ddot{\phi } \simeq -2 f_o f_1 e\ \text{sin}(2\phi ) -2 \left( f_1^2 + 2 f_o f_2 \right) e^2 \text{sin}(2\phi ) \text{cos}(2\phi ). \end{eqnarray} (20)This equation and equations (9), (10), (14), and (15) can then be substituted into the azimuthal equation of motion (6) to obtain the perturbation constraint. In this azimuthal equation we retain only terms of first and second orders in e, and after cancellation of a common factor of e sin(eϕ), these appear as terms of zeroth and first orders. This is confusing since the radial equation has true zeroth-order terms giving the balance of gravitational and centrifugal forces when e, eb = 0. Thus, we will continue to refer to these azimuthal equation terms as the first- and second-order conditions. The first-order condition reduces to the following,   \begin{eqnarray} f_o f_1 = 2 \left(\frac{1}{2} + \delta \right) f_o \left( f_o + \Omega _b \right) + \frac{e_b}{e} q_b. \end{eqnarray} (21)The second-order equation (ecos(2ϕ) terms in the azimuthal equation) is   \begin{eqnarray} f_1^2 + 2f_o f_2 &=& 2 \left( \frac{1}{2} + \delta \right)\nonumber\\ &&\times \left[ -\left( f_o + \Omega _b \right) f_o \left( 2f_o + \Omega _b \right) f_1 \right. \nonumber \\ && + \left. \left( 1 + \delta _b \right) \frac{e_b}{e} q_b \right]. \end{eqnarray} (22)This completes the set of coefficient equations derived from the equations of motion in this resonant case. The equations are linear in the variables q, qbeb/e, and of quadratic order or less in the fi factors. Thus, to second order in any disc region with fixed values of δ, δb, there are generally zero to two resonant p-ellipse orbits. 
This important result is evidently due to the fact that the ratio of epicyclic/precession frequency to circular orbit frequency is a constant in power-law potentials. We will consider specific solutions in the following section. 3.2 Two-frequency case In this subsection we consider second-order solutions to the equations of motion (equations (6)) of the form of equation (8). The perturbation expansion procedure is essentially the same as that of the previous subsection. For brevity, we will not give the equations analogous to equations (9)–(14) above. In this case the angular velocity expansion form, analogous to equation (15), is   \begin{eqnarray} \dot{\phi } &\simeq& f_o + f_1 e \text{cos}(m\phi ) + f_2 e \text{cos}(2\phi )\nonumber\\ &&+\, f_3 e^2 \text{cos}^2 (m\phi ) + f_4 e^2 \text{cos}^2 (2\phi ) + f_5 e^2 \text{cos}^2 ((2-m)\phi ). \end{eqnarray} (23) Then, the coefficient equations deriving from the radial equation are analogous to equations (16)–(19), except that there are now six of them. The first is the equation derived from the constant terms,   \begin{eqnarray} &&{\left( \frac{1}{2} + \delta \right) \left( \frac{3}{2} + \delta \right) \left( m^2 + 4c_2^2 \right) e^2 f_o^2 - 4 \left( \frac{1}{2} + \delta \right) c_2 e e_b q_b}\nonumber\\ &-&2 \left( \frac{1}{2} + \delta \right)^2 f_o \left( \Omega _b + f_o \right) \left( m^2 + 4c_2^2 \right) e^2 = -q + \left( f_o + \Omega _b \right)^2, \nonumber\\ \end{eqnarray} (24)The equation from the e cos(mϕ) terms is   \begin{eqnarray} 2 \left( f_o + \Omega _b \right) f_1 &=& \left( \frac{1}{2} + \delta \right) m^2 f_o^2 + 2 \left( \frac{1}{2} + \delta \right)^2 q\nonumber\\ &&+ \,2 \delta _b \frac{e_b}{e} q_b + \left( \frac{1}{2} + \delta \right) \left( f_o + \Omega _b \right)^2. \end{eqnarray} (25)The equation from the e cos(2ϕ) terms is   \begin{eqnarray} 2 \left( f_o + \Omega _b \right) f_2 &=& 4 \left( \frac{1}{2} + \delta \right) c_2 f_o^2 + 2 \left( \frac{1}{2} + \delta \right)^2 c_2 q\nonumber\\ && +\, 2 \delta _b \frac{e_b}{e} q_b + \left( \frac{1}{2} + \delta \right) c_2 \left( f_o + \Omega _b \right)^2. \end{eqnarray} (26)The first second-order equation from the e2 cos2(mϕ) terms is   \begin{eqnarray} 2 \left( f_o + \Omega _b \right) f_3 &=& 2 \left( \frac{1}{2} + \delta \right) m^2 f_o f_1 \nonumber\\ &&-\,2 \left( \frac{1}{2} + \delta \right) \left( \frac{3}{2} + \delta \right)m^2 f_o^2 + 2 \left( \frac{1}{2} + \delta \right)^2\nonumber\\ &&\times\, m^2 f_o \left( f_o + \Omega _b \right)+ \left( \frac{1}{2} + \delta \right)^2\nonumber\\ &&\times\, \left[ 2 \left( \frac{1}{2} + \delta \right)^2 - 1 \right] q - f_1^2\nonumber\\ &&-\, \frac{1}{2} \left( \frac{1}{2} + \delta \right) \left( \frac{3}{2} + \delta \right)\nonumber\\ &&\left( f_o + \Omega _b \right)^2 + 2 \left( \frac{1}{2} + \delta \right) f_1 \left( f_o + \Omega _b \right). 
\end{eqnarray} (27)The equation from the e2 cos2(2ϕ) terms is   \begin{eqnarray} \frac{2}{c_2} \left( f_o + \Omega _b \right) f_4 &=& \left( \frac{1}{2} + \delta \right) f_2 \left( 9 f_o + \Omega _b \right)\nonumber\\ &&-\, 8 \left( \frac{1}{2} + \delta \right) \left( \frac{3}{2} + \delta \right) {c_2} {f_o}^2\nonumber\\ &&+\, 8 \left( \frac{1}{2} + \delta \right)^2 {c_2} f_o \left( f_o + \Omega _b \right)\nonumber\\ &&+\, 4 \left( \frac{1}{2} + \delta \right) \left( \frac{3}{2} + \delta \right) \frac{e_b}{e} q_b\nonumber\\ &&+\, \left( \frac{1}{2} + \delta \right)^2 \left[ 2 \left( \frac{1}{2} + \delta \right)^2 - 1 \right]{c_2} q\nonumber\\ &&-\, \frac{1}{2} \left( \frac{1}{2} + \delta \right) \left( \frac{3}{2} + \delta \right) {c_2} \left( f_o + \Omega _b \right)^2. \end{eqnarray} (28)And the equation from the e2 cos2((2 − m)ϕ) terms is   \begin{eqnarray} &&{2 \left( f_o + \Omega _b \right) f_5}\nonumber\\ &=&- \left( \frac{1}{2} + \delta \right) \left[ \left( \frac{3}{2} + \delta \right) \frac{c_2}{2} - c_x \right] \left( f_o^2 + \left(f_o + \Omega _b \right)^2 \right)\nonumber\\ &&-\, \frac{m}{2} \left( \frac{1}{2} + \delta \right) \left[ 8 \left( \frac{1}{2} + \delta \right) c_2 f_o \left( f_o + \Omega _b \right) + 2 \frac{e_b}{e} q_b \right]\nonumber\\ &&+\, \frac{1}{2} \left( \frac{1}{2} + \delta \right)^2 \left[ \frac{c_2}{2} \left( \left( \frac{1}{2} + \delta \right)^2 - 1 \right) + c_x \right] q\nonumber\\ &&+\, \frac{\delta _b}{2} \left( \frac{1}{2} + \delta \right) \left( \frac{1}{2} + \delta _b \right) \frac{e_b}{e} q_b\nonumber\\ &&+\, \left( \frac{1}{2} + \delta \right) \left( f_2 + c_2 f_1 \right) \left( f_o + \Omega _b \right) + f_1 f_2\!\!\!\!\!\!\!\! . \end{eqnarray} (29)As in the previous case, most of these equations can be used to obtain values of the fi coefficients in equation (23). To proceed, we differentiate the quantity $$\dot{\phi }^2$$, derived from equation (23) to get,   \begin{eqnarray} &-&\ddot{\phi } \simeq f_o \left[ mf_1 e\ \text{sin}(m\phi ) + 2f_2 e\ \text{sin}(2\phi ) \right]\nonumber\\ &+& \left( f_1^2 + 2 f_o f_3 \right) m e^2 \text{sin}(m\phi ) \text{cos}(m\phi )\nonumber\\ &+& 2 \left( f_2^2 + 2 f_o f_4 \right) e^2 \text{sin}(2\phi ) \text{cos}(2\phi )\nonumber\\ &+& \frac{2-m}{2} \left( f_1 f_2 + 2 f_o f_5 \right) e^2 \text{sin}((2-m)\phi ) \text{cos}((2-m)\phi ). \end{eqnarray} (30) This equation can be used to eliminate the $$\ddot{\phi }$$ term in the angular equation of motion, as in the previous case. Then we obtain five coefficient equations by gathering like terms in this equation. The first of these is obtained from the me sin(mϕ) terms,   \begin{eqnarray} f_1 = 2 \left(\frac{1}{2} + \delta \right) \left( f_o + \Omega _b \right). \end{eqnarray} (31)The equation from the 2e sin(2ϕ) terms is   \begin{eqnarray} f_o f_2 = 2 \left(\frac{1}{2} + \delta \right) c_2 f_o \left( f_o + \Omega _b \right) + \frac{e_b}{e} q_b. \end{eqnarray} (32)The equation from the me2 sin(mϕ)cos(mϕ) terms is   \begin{eqnarray} 2f_o f_3 = 2 \left(\frac{1}{2} + \delta \right) \left( f_o + \Omega _b \right) \left( 2f_1 - f_o \right) - f_1^2. \end{eqnarray} (33)The equation from the 2e2 sin(mϕ)cos(mϕ) terms is   \begin{eqnarray} 2f_o f_4 &=& 2 \left(\frac{1}{2} + \delta \right) c_2^2 \left( f_o + \Omega _b \right) \left( 2f_2 - f_o \right) - f_2^2\nonumber\\ &&+\, \frac{c_2}{2} \left(\frac{1}{2} + \delta \right) \left(\frac{1}{2} + \delta _b \right) \frac{e_b}{e} q_b. 
\end{eqnarray} (34) And the equation from the e2 sin((2 − m)ϕ) terms is   \begin{eqnarray} &&{(2-m) f_o f_5 = 2 \left(\frac{1}{2} + \delta \right) \left( f_o + \Omega _b \right)}\nonumber\\ &&{\left[ -mf_2 + 2 c_2 f_1 + (2-m) \left( -\frac{c_2}{2} + c_x \right) f_o \right]}\nonumber\\ &-& \frac{2-m}{2} f_1 f_2 + c_2 \left(\frac{1}{2} + \delta \right) \left(\frac{1}{2} + \delta _b \right) \frac{e_b}{e} q_b. \end{eqnarray} (35) These five equations from the azimuthal equation, together with the six from the radial equation (equations (24)–(29)), complete the set needed to solve for the parameters fo − f5, m, p, e, c2, and cx of the approximate solution given by equations (8) and (23).

4 APPROXIMATE LOOP ORBIT SOLUTIONS

In this section we explore solutions to the perturbation equations of Section 3.1 based on the simple p-ellipse of equation (7). These solutions may be parents of families of orbits in non-self-gravitating galaxy bars, which are driven by an external potential, as discussed below. The external potential may be due to a prolonged tidal perturbation, or a bar-like dark halo.

4.1 A very simple special case

Solutions to the single-frequency cases discussed in Section 3.1 are determined by the coefficient equations (16), (18), (19), (21), and (22). We note that the sum fo + Ωb is a common term in these equations, and in the case where fo = −Ωb the equations are simplified significantly. This is the case we consider in this subsection. We note that this case has no special physical significance. The factor fo is the mean rotation frequency of the star in the pattern frame, and there is no obvious reason for it to equal the opposite of the pattern frequency. However, this simple case suggests a simple analytic solution strategy, which can be generalized. This strategy makes use of the fact that if we assume the value of one of the unknowns (fo), then we can treat the factor (eb/e)qb as an unknown variable, even though it is actually a combination of the variables e, p, and the presumably known potential amplitude eb. We are inverting the direct problem of finding fo to ask what value of eb is needed to get the assumed value of fo. Then, the solution is obtained via the following procedure. First, use equations (21) and (18) to eliminate the variables f1 and (eb/e)qb, respectively, from equation (19). The latter is a quadratic that gives q in terms of δ, δb, and fo. The value of q yields the value of p and qb, and the remaining solution parameters are obtained directly from the other equations. We require the solution for q to be a positive, real number. In this special case, the quadratic solutions are imaginary or negative for a large range of parameter values. Even when there is a real, positive solution (or two), other physical constraints must be satisfied, e.g. 0 ≤ e ≪ 1. A range of parameter values has been explored, and relatively few physical solutions have been found in this case.

4.2 More general closed loop orbits

Fortunately, when we deviate from the special case of the previous subsection (fo = −Ωb), we find more physical solutions to the coefficient equations, i.e. closed loop orbits. We consider several examples in this subsection. Nonetheless, we will follow the same procedure for solving the coefficient equations as in the previous subsection. Specifically, we will adopt a value of the pattern speed, Ωb, and a value of the mean orbital speed of the star, fo, as some multiple of the former.
In principle, we could adopt values of the bar parameters, Ωb, eb (and δb), and then solve for the solution parameters e, p, fi. However, as discussed above, the solution is easier to obtain if we assume a value of fo and derive the corresponding value of eb. In the following two subsections we consider some specific examples of physically relevant solutions.

4.2.1 Slowly rising rotation curve examples

The first sequence of examples has relatively slowly rising rotation curves appropriate to the inner part of a galaxy disc in both the symmetric part of the potential (with δ = −0.3) and the asymmetric part (with δb = −0.2). We also adopt the (arbitrary) value of Ωb = 0.315, and consider a range of values of fo and the ratio nb = −Ωb/fo. For a first example we take fo = −0.3 and nb = 1.05; the solution of the coefficient equations then yields: p = 9.56, e = 0.61, and eb = 5.68. All of these solution values are relatively large, so we might not expect the perturbation approximation to be very accurate in this case. Fig. 1 compares the p-ellipse approximation with these parameters to a numerically integrated orbit with the same initial conditions in the pattern frame. That is, the initial conditions are ϕ = 0, dr/dt = 0, with r given by the p-ellipse equation and dϕ/dt given by a value like that of equation (15), with c = 1. It is apparent that the two orbits are very similar. A small subharmonic modulation (four times the fundamental period) is visible in the numerical orbit in the lower panel, which presages a trend we will see more of below. This modulation is also responsible for the finite thickness of the numerical orbit curve in Fig. 1.

Figure 1. A sample orbit determined by the parameter value fo = −0.3 and in the potential specified by the values δ = −0.3, δb = −0.2 and pattern speed Ωb = 0.315, in the dimensionless units. In both panels the blue solid curve is the result of numerically integrating the equations of motion with the given initial conditions (see the text for details), and the red-dashed curve is the p-ellipse approximation. The upper panel is the view on to the disc in the pattern frame; the lower panel shows radius versus azimuthal advance, which is negative in this case.

While the fit of the analytic to the numerical curve in Fig. 1 is impressive, there is an important caveat. In the previous paragraph, the initial angular velocity used in the numerical orbit was described as ‘like that of equation (15)’. The value predicted by equation (15) is $$\dot{\phi } = -0.49$$, while the value that yields the good fit is $$\dot{\phi } = -0.87$$. Thus, the analytic equation for $$\dot{\phi }$$ does not yield an accurate approximation for values of e as large as in the present example. This is understandable, since in the present case the analytic prediction is that the terms of equation (15) are (fo, ef1, 0.5e²f2) = (−0.3, −0.51, 0.16).
Clearly, the series on the right-hand side of equation (15) is not converging rapidly with the relatively large value of e. This slow $$\dot{\phi }$$ convergence is a limitation on the p-ellipse approximation, but it is a predictable consequence of large values of e. Note that the more accurate value for the numerical orbit was found by trial and error, and so too in most of the examples below. Nonetheless, this first example shows that a p-ellipse approximation, developed for orbits in symmetric potentials, can also fit rather flattened orbits in two-part, asymmetric potentials quite well. One major difference between the adopted p-ellipse solution and those used for symmetric potentials (see Struck 2006) is that we have changed the frequency ratio to a fixed resonant value, rather than the value representing the precession of the given symmetric potential. A second orbital example, specified by fo = −0.3029, nb = 1.04, and the same pattern frequency (Ωb = 0.315), and with derived analytic values of p = 5.0076, e = 0.9542, and eb = 3.2069, is shown in Fig. 2. In this high eccentricity case, the predicted value of $$\dot{\phi }$$ was −0.32, and the fitted value, −1.19. Although the value of fo is only slightly changed from the previous orbit, this orbit is much smaller and more elongated. In fact, physical orbit solutions can generally only be found over a small range of fo values.

Figure 2. Like Fig. 1, but for the orbit determined by the parameter value fo = −0.3029, again in the δ = −0.3, δb = −0.2 potential with pattern speed Ωb = 0.315. The blue solid curve shows the numerically integrated orbit in both panels. The red-dashed curve in the upper panel is the analytic orbit. The green-dotted curve in the lower panels is the analytic curve, but corrected as described in the text. The lowest panel shows the azimuthal velocity in the pattern frame as a function of radius. Note that the azimuthal velocity is negative, and the speed is higher at small radii.

Two differences from the previous example are evident in the top panel of Fig. 2. First, the fit is not as good. This is not surprising given the high eccentricity. (Note that the flattening at a given p-ellipse eccentricity differs from that of a simple ellipse; see Struck 2006.) Secondly, the numerical orbit is thicker, a result of stronger subharmonic modulation, which is evident in the lower panel of Fig. 2. As Lynden-Bell (2010) found for elliptical orbits, and Struck (2015a, see equation 12) confirmed for p-ellipses, a given orbit can be approximated much more accurately by using the eccentricity derived from the values of its inner and outer radii, rather than that derived as above. In the present case, once we have found a closed numerical orbit that best agrees with the p-ellipse approximation, we can use its extremal radii to get a better estimate of the eccentricity (here e = 0.825); a short sketch of this inversion is given below.
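The inversion from extremal radii to p-ellipse parameters is simple enough to sketch. The Python snippet below assumes the standard p-ellipse form of Struck (2006), r = p[1 + e cos(mϕ)]^−(1/2+δ) (the paper's equation 7 is not reproduced in this preview), for which r_min = p(1+e)^−(1/2+δ) and r_max = p(1−e)^−(1/2+δ) when δ > −1/2; the radii used in the example are placeholders rather than values read off the figures.

```python
import numpy as np

def pellipse_radius(phi, p, e, delta, m=2):
    """Assumed p-ellipse form: r = p * (1 + e*cos(m*phi))**(-(0.5 + delta))."""
    return p * (1.0 + e * np.cos(m * phi)) ** (-(0.5 + delta))

def fit_pellipse_from_extremes(r_min, r_max, delta):
    """Recover (p, e) from the extremal radii of a closed loop orbit.

    Valid for -0.5 < delta (rising rotation curve), where r_min occurs at
    cos(m*phi) = +1 and r_max at cos(m*phi) = -1.
    """
    k = 0.5 + delta
    ratio = (r_max / r_min) ** (1.0 / k)      # equals (1 + e) / (1 - e)
    e = (ratio - 1.0) / (ratio + 1.0)
    p = r_min * (1.0 + e) ** k
    return p, e

# Hypothetical extremal radii measured from a numerically integrated orbit.
r_min, r_max, delta = 3.5, 5.6, -0.3
p_fit, e_fit = fit_pellipse_from_extremes(r_min, r_max, delta)
print(f"p = {p_fit:.3f}, e = {e_fit:.3f}")

# Consistency check: the fitted curve reproduces the input extremes.
phi = np.linspace(0.0, np.pi, 181)
r = pellipse_radius(phi, p_fit, e_fit, delta)
print(np.isclose(r.min(), r_min), np.isclose(r.max(), r_max))
```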
This latter approach is used in the lower panel of Fig. 2, and the result is an excellent fit despite the high eccentricity (except for the subharmonic modulation). The difference between these two examples was a slight increase of the parameter fo in the second case. If we increase fo further (but still slightly), we find the trends described above continue. That is, the numerical orbits tend to get a little smaller and more eccentric, but the p-ellipse approximation tends to exaggerate the eccentricity by greater amounts, unless corrected as just described. If instead we decrease the (negative) value of fo from the value used in the first example, then the trends reverse. That is, we get bigger orbits that are less eccentric, but also tend to have higher subharmonic modulation; they are thicker. This last trend continues up to the point that the (numerical) orbits change their shape altogether. A third example shows the nature of this shape change with the present potentials. This case, shown in Fig. 3, has fo = −0.2990, nb = 1.0535, and the same pattern frequency as the previous examples. The derived analytic values are p = 15.79, e = 0.4301, and eb = 8.8691. The predicted value of $$\dot{\phi }$$ was −0.4973, and the fitted value, −0.7142. In this case, the resonant orbit has three distinct loops, which cannot be fit by a single-loop p-ellipse. However, the p-ellipse approximation appears as a low-radius boundary, and provides a fit only in the vicinity of the lowest radial excursion. That is a general result for such multi-loop orbits. The basic peak and trough phases of the numerical model are also captured by the analytic model. This example suggests that the subharmonic component is becoming much stronger as we decrease the value of fo.

Figure 3. Like Fig. 1, but for the orbit determined by the parameter value fo = −0.2990, again in the δ = −0.3, δb = −0.2 potential with pattern speed Ωb = 0.315. The blue solid curve shows the numerically integrated orbit in both panels, and the red dashed curve in the upper panel is the analytic orbit. The sub-harmonic modulation is clear in the lower panel.

Slight variations in the initial angular velocity yield a family of orbits that are similar, but with thicker loops. With enough initial velocity deviation the three-loop form disappears, and the orbits are better described as a filled annulus between the p-ellipse and the outer loop. It is also true that initial velocity variations produce thicker loops in the previous examples. Thus, each closed, resonant orbit has a family of such offspring, extending over a finite interval of the initial value of $$\dot{\phi }$$. Fig. 4 shows a fourth example, with a still larger value of fo = −0.2986 (nb = 1.0549), and the same pattern frequency. In this case, the derived analytic values are p = 22.90, e = 0.3305, and eb = 12.32. The predicted value of $$\dot{\phi }$$ was −0.4788, and the fitted value, −0.6120. Again, we see a three loop pattern, with the analytic orbit serving as a low radius boundary to the numerical orbit.
The subharmonic period is longer relative to the analytic period than in the previous example, and their combined width is greater. It is clear that the centre of each loop is offset from the origin, in alternating directions along the x-axis. If we increase the value of fo to −0.2984, the analytic approximation breaks down entirely, yielding negative values of p, the radial scale. Between the current value of fo and that critical value, the overall orbit size and the spread between loops continue to get larger. The analytic eccentricity, and that of the loops, decreases. The offset of loop centres also increases.

Figure 4. Like Figs 1 and 3, but for the orbit with fo = −0.2986. The orbit is similar to that in Fig. 3, but larger and broader. The lower panel shows that the subharmonic frequency is becoming dominant.

Given the large values of eb in these last two or three examples, a quantity assumed to be of order e in the perturbation expansion, it is not surprising that the analytic approximation breaks down. It is more surprising that it continues to provide some information as it breaks down. In sum, all of these sample orbits have the same pattern speed, so they, and their less regular offspring, could be combined to make a model bar. However, this model would require increasing bar strength (eb) with increasing radius. We might not expect the bar to extend beyond the radius where the orbit breaks into multiple loops, because the pattern becomes less distinct and more circular. Gaseous components would experience dissipation and circularization at loop meeting points. This radius apparently differs from the co-rotation radius usually thought to give the outer extent of bars. Alternately, if the needed bar strength occurs only over a limited range of radii, then a hollow annular stellar bar would be possible. In this latter case, if the value of eb changed with time the bar structure would evolve. For example, it could grow larger and wider as eb increases.

4.2.2 More steeply rising rotation curve examples

In this subsection we consider a second set of examples drawn from a potential with a more steeply rising rotation curve, indeed close to a solid-body potential. Specifically, we take δ = −0.8 in the symmetric part of the potential and δb = −0.7 in the asymmetric part. The case with δ = −0.5 is a singular one, where the perturbation approach above breaks down. The character of the orbits is rather different for δ values on either side of this critical value. One difference is that the resonant, closed orbits generally only exist at higher pattern speeds than in the previous case. For the examples in this section, we take Ωb = 1.05. We will consider three orbits in this bar pattern. The first, shown in Fig. 5, is a large, but low eccentricity one, specified by the parameter value fo = −0.77. The numerical orbit has a small, but finite width, and some subharmonic modulation is visible in the lower panel. Not surprisingly, the analytic fit to this nearly circular orbit is very good.

Figure 5. A sample orbit determined by the parameter value fo = −0.77 (f1 = 1.025, f2 = 0.464) and in the potential specified by the values δ = −0.8, δb = −0.7 and pattern speed Ωb = 1.05, in the dimensionless units.
In both panels the blue solid curve is the numerically integrated orbit, and the red dashed curve is the p-ellipse approximation. The upper panel is the view on to the disc in the pattern frame; the lower panel shows radius versus the negative azimuthal advance. Note the large size, and near circularity of the orbit.

We skip over a range of large, low eccentricity orbits to the much smaller, and visibly flatter one in Fig. 6. Though flatter, this orbit does not pinch inward like the analytic approximation. The fit is not good, but we can improve it using the maximum and minimum radii of the numerical orbit to derive new values of e and p, as described for the Fig. 2 orbit. The revised fit is good and shown in the lower panels as a green dotted curve, which basically overwrites the blue numerical curve in the region of overlap. The thickness of the numerical orbit is larger relative to its mean radius than that of the orbit of Fig. 5, but it is still small.

Figure 6. Like Fig. 5, but for the orbit with fo = −0.715 (f1 = 1.007, f2 = 0.540) in the same potential, with the same pattern speed. The blue solid curve shows the numerically integrated orbit in both panels. The red-dashed curve in the upper panel is the analytic orbit, which does not yield a good fit in this case. The green-dotted curve in the lower panels is the analytic curve, corrected with a lower eccentricity and as further described in the text.

The value of the parameter fo is changed only slightly between the cases shown in Figs 6 and 7, but the effect is significant. Specifically, the orbit is beginning to get much thicker (and this trend accelerates for yet larger values of fo). The fit of the analytic orbit is worse than in Fig. 6. In this orbit, and others not shown with lower values of fo, the analytic curve approximates a portion of the inner boundary of the numerical orbit (as in Figs 3 and 4). While the orbits get wider as fo is decreased, their inner boundary gets smaller and flatter only slowly. The lower panel of Fig. 7 shows that the subharmonic modulation is becoming dominant as in the more extreme examples of the previous subsection. In fact, two subharmonics are visible, one at twice the basic frequency, as well as the lower frequency one.

Figure 7. Like Fig. 6, but for the orbit with fo = −0.71 in the same potential, with the same pattern speed. In the lower panel the analytic orbit has been omitted for clarity. The subharmonic pattern dominates.
4.3 Generalizations

It is interesting to compare the orbit sequences of the last two subsections. The flattest orbits are the smallest in both cases, with larger orbits becoming more nearly circular. In none of the cases above are the orbits even close to the flatness apparent in some observed bars. Interestingly, this statement does not apply to the analytic bars of the case in Section 4.2.1. Fig. 2 hints at how these can get much longer and flatter than the corresponding ‘true’ numerical orbit. This, along with other results described below and in the literature, suggests that if closed orbits play a significant role in flat bars, they are not simple loops. If that is true, then it is likely that shocks in the gas also play an important role in such bars. The two sequences above also share the characteristic that at one end of the range of allowable fo values the subharmonic modulation becomes very strong, and the relative width of the orbit grows as fast as or faster than the relative change in the mean radius. The two sequences differ, however, in which end of the spectrum shows this phenomenon. In the first sequence (Section 4.2.1), it occurs at the smaller values of fo, where the orbits are large and more circular. In the second sequence (Section 4.2.2), it occurs at larger values of fo, where the orbits are smaller and flatter. This behaviour reversal seems to occur generally across the δ = −0.5 singularity, based on additional cases not presented here. It should also be noted that there appears to be a large region in the (δ, δb, Ωb) parameter space where simple closed loop orbits do not exist. The cases with low pattern speeds, and values of δ, δb < −0.5, have already been mentioned. This also seems to be true for falling rotation curves with values of δ, δb > 0.0. We have not explored this parameter space extensively, so these conclusions are preliminary. The conclusion that bars are more likely in regions with rising rotation curves, an extrapolation of these loop orbit results, does seem in accord with observational and modelling results (e.g. Sellwood 2014). On the other hand, loop-like orbits are found numerically in falling rotation curve potentials in parameter regions where the perturbation approximation fails to give solutions. This will be explored in a sequel paper.

5 LOOP BARS IN GENERAL POTENTIALS

In the previous section we considered examples of sets of closed loop orbits in symmetric and bar-like power-law potentials. In the case of a single value of the bar potential amplitude eb in the disc, there is generally one such orbit, though zero to two closed loop orbits are allowed by the perturbation theory in Section 3.1. However, the numerical results of the last section indicate that one such orbit is the most common result, and this result extends to quite high eccentricity. If the magnitude of eb increases outward, there can be a number of nested near-loop orbits, which could form the skeleton of a bar. However, the potentials of galaxy discs are not well described by a single power law in radius, but rather have rising rotation curves in the inner parts and flat or falling rotation curves in the outer regions.
In this section we consider another series of loop orbits in a potential with varying power-law indices, as an example of a potential approximated by a sum of power laws. The main point of this example is to show that results like those of the previous section can be generalized to potentials that can be approximated by such variable power-law forms. Fig. 8 shows four representative orbits in this example. The assumed rotation curve rises moderately steeply in the inner regions, but transitions to a flat rotation curve in the outer regions. The equipotentials are also assumed to be of about the same shape as the loop orbits. The values of the potential indices from the inner orbit outwards are δ = δb = (−0.25, −0.20, −0.15, 0.0). That is, we assume that local power-law potential approximations to the potential have these index values and that δ = δb throughout. The value of the pattern speed is Ωb = 0.315. From the inside out, the values of (p, e, eb) for the analytic approximations to the inner three orbits are (4.11, 0.826, 1.97), (5.19, 0.495, 2.53), (6.36, 0.306, 3.31), and (7.03, 0.108, 4.89).

Figure 8. A sequence of four closed orbits with the same pattern speed, Ωb = 0.315. The three inner bar-like orbits derived by numerical integration are shown by thick, blue curves, and their analytic approximations are shown by thin, red curves. The fourth, outermost dotted orbit consists of three loops, and no analytic approximation is shown. The initial conditions $$(r_o, \dot{\phi _o})$$ are (3.53, −1.23), (4.60, −1.01), (5.80, −0.837), and (6.68, −0.657).

Despite their varying eccentricity, the three nested innermost orbits represent a clear bar structure (a sketch of such a nested p-ellipse skeleton is given below). Between them and the outermost orbit there will be orbits with increasing thicknesses (and decreasing eccentricities), like those of Figs 3 and 4. As the potential index approaches δ = 0.0 with increasing radius, it becomes harder to get single-loop orbits. Multiple (but not necessarily three) looped orbits become the rule at large radius. The one shown in Fig. 8 is particularly interesting because it represents a set derived from a small range of initial radii ro that have an inner loop, which partially overlaps a significant part of one of the simple inner loops. On the other hand, the outer loop of this orbit is nearly circular. A star traversing the relevant part of the inner loop would, in some sense, look like it was pursuing a simple bar orbit. However, on the outer loop the star would look like it was on a circular orbit well outside the bar. Such orbits do not seem to have been studied in the bar literature (see e.g. the reviews of Athanassoula 2013; Sellwood 2014), though orbits with small loops at their ends are common. (However, in a recent paper Christodoulou & Kazanas 2017 find similar orbits in spherical potentials.) We note that it is a natural extension of the analytic approximation (via varying the value of fo) that leads to them.
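As an illustration of such a nested skeleton, the sketch below simply overplots p-ellipses with the quoted (p, e) values, pairing each orbit with its local power-law index for δ. The p-ellipse form r = p[1 + e cos(2ϕ)]^−(1/2+δ) is assumed (the paper's equation 7 is not reproduced in this preview), and the use of the local δ for each orbit is my reading of the text, so treat the plot as schematic.

```python
import numpy as np
import matplotlib.pyplot as plt

def pellipse(phi, p, e, delta, m=2):
    # Assumed p-ellipse form (Struck 2006): r = p * (1 + e*cos(m*phi))**(-(0.5 + delta))
    return p * (1.0 + e * np.cos(m * phi)) ** (-(0.5 + delta))

# (p, e, local delta) for the three inner analytic orbits quoted in the text.
orbits = [(4.11, 0.826, -0.25), (5.19, 0.495, -0.20), (6.36, 0.306, -0.15)]

phi = np.linspace(0.0, 2.0 * np.pi, 721)
fig, ax = plt.subplots(figsize=(5, 5))
for p, e, delta in orbits:
    r = pellipse(phi, p, e, delta)
    ax.plot(r * np.cos(phi), r * np.sin(phi), label=f"p={p}, e={e}")
ax.set_aspect("equal")
ax.legend()
plt.show()
```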
As in the examples of the previous section, such closed, resonant orbits have a family of nearby (in the space of initial conditions) orbits that do not close. Evidently, their non-circularity would introduce a significant component of apparent velocity dispersion into the outer disc, beyond that of small bar perturbations of near circular orbits of large radii. They seem worthy of further study. In sum, though the example discussed in this section is ad hoc, it shows that a loop orbit skeleton of a model bar can be constructed in a potential more complex than that consisting of monotonic power laws in both symmetric and asymmetric parts. The well-studied Ferriers bars provide more examples (see Athanassoula 1992). It also shows, as mentioned above, such a model bar effectively ends as the rotation curve becomes flat. This does not appear to be directly related to a co-rotation radius. 6 THE POSSIBILITIES FOR SELF-CONSISTENT LOOP ORBIT BARS The gravitational potentials considered in the previous section were fixed, and presumably of external origin. We sought simple, closed, bar-like orbits in those potentials that could be analytically approximated with p-ellipses to second order in the eccentricity parameter. In this section we ask whether the bar potential itself could be constructed from such orbits? We will see that there are complications, and such bars are likely to be rare or non-existent, at least in two dimensions. 6.1 Poisson equation and constraints To construct bar potentials from loop orbits, we use the two-dimensional Poisson equation, which can be written in dimensionless units as   \begin{eqnarray} \triangledown ^2 \Phi = \frac{1}{r} \frac{\mathrm{\partial} }{\mathrm{\partial} r} \left( r \frac{\mathrm{\partial} \Phi }{\mathrm{\partial} r} \right) + \frac{1}{r} \frac{\mathrm{\partial} }{\mathrm{\partial} \phi } \left( \frac{1}{r} \frac{\mathrm{\partial} \Phi }{\mathrm{\partial} \phi } \right) = 4\pi \rho , \end{eqnarray} (36)where the scale factors are as in equation (4), with addition of the following scales for the potential and density,   \begin{eqnarray} \Phi _\epsilon = c \frac{\epsilon ^2}{\tau ^2}, \ \ \rho _\epsilon = \frac{M_\epsilon }{\epsilon ^3}. \end{eqnarray} (37)However, in the limit that bar does not have a significant effect on the halo potential, these two parts of the potential decouple, and the Poisson equation above can be assumed to describe the bar alone. Then we substitute the asymmetric potential term from equation (2) to get the following expression for the mass density,   \begin{eqnarray} 4\pi \rho = \frac{4ce_b}{r^{2(1+\delta _b)}} \left( 1 - \delta _b^2 \right)\text{cos}(2\phi ), \end{eqnarray} (38) For this simple example, we assume a stationary bar, Ωb = 0.0. We assume that we can construct the density field of the bar with an appropriate radial distribution with nested p-ellipse orbits (in the rotating frame). Then we can use the p-ellipse equation to eliminate factors of r in the above. To second order, the expression for ρ becomes   \begin{eqnarray} \rho = a_o\ e\text{cos}(2\phi ) \left[1 + a_1 e \text{cos}(2\phi ) \right], \end{eqnarray} (39)with   \begin{eqnarray} a_o = \frac{c(1 - \delta _b^2)}{\pi p^{2(1+\delta _b)}} \frac{e_b}{e}, \ \ a_1 = 2(1+\delta _b)\left( \frac{1}{2}+\delta \right). \end{eqnarray} (40)If e varies slowly with radius, then the radial dependence of the density is given by the $$p^{2(1+\delta _b)}$$ term. 
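Equations (39) and (40) are explicit enough to evaluate directly. The short Python sketch below, with invented parameter values and the scale constant c set to 1, just tabulates the implied azimuthal density variation; it is an illustration of the formulas above, not part of the paper's machinery.

```python
import numpy as np

def bar_density(phi, e, e_b, p, delta, delta_b, c=1.0):
    """Azimuthal density variation of equations (39)-(40):
    rho = a_o * e*cos(2*phi) * (1 + a_1 * e*cos(2*phi))."""
    a_o = c * (1.0 - delta_b**2) / (np.pi * p ** (2.0 * (1.0 + delta_b))) * (e_b / e)
    a_1 = 2.0 * (1.0 + delta_b) * (0.5 + delta)
    x = e * np.cos(2.0 * phi)
    return a_o * x * (1.0 + a_1 * x)

# Invented illustrative parameters.
phi = np.linspace(0.0, np.pi, 7)
rho = bar_density(phi, e=0.3, e_b=0.3, p=5.0, delta=-0.3, delta_b=-0.2)
for ang, val in zip(phi, rho):
    print(f"phi = {ang:5.2f}  rho = {val:+.4f}")
```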
To relate the angular dependence of the density contributed by set of adjacent orbits to the azimuthal velocity on the orbits, we can assume that the more time a star spends on a given part of its p-ellipse orbit the greater its contribution to the density at that azimuth. Specifically, assume that the relative density change compared to that at ϕ = 0 on a given part of the orbit equals the opposite of the relative azimuthal velocity change. That is,   \begin{eqnarray} \frac{\rho (\phi ) - \rho _{\phi = 0}}{\rho _{\phi = 0}} &=& -\left( \frac{\dot{\phi }({\phi }) - \dot{\phi }_{\phi = 0}}{\dot{\phi }_{\phi = 0}} \right),\nonumber\\ or, \ \frac{\rho }{\rho _{\phi = 0}} &=& 2 - \frac{\dot{\phi }}{\dot{\phi }_{\phi = 0}} \end{eqnarray} (41)Now we can substitute equation (15) for $$\dot{\phi }$$, equate our two expressions for the density (equations (39) and (41)), and identify terms of common order in ecos(2ϕ). This yields, the following zeroth-, first-, and second-order equations, after some manipulation,   \begin{eqnarray} &&{f_o = 2ef_1 - 2e^2f_2,}\nonumber\\ &&{\frac{c(1 - \delta _b^2)}{\pi p^{2(1+\delta _b)}} \frac{e_b}{e} = \rho _{\phi = 0} \left( \frac{-f_1}{f_o + ef_1 + e^2f_2} \right),}\nonumber\\ &&{ (1+\delta _b)\left( \frac{1}{2}+\delta \right) = \frac{f_2}{2f_1}.} \end{eqnarray} (42)These equations provide strong additional constraints for self-gravitating loop bars. The kinematic orbits, discussed in previous sections, were not so constrained. Free parameters included eb (or fo), δb, δ and the pattern speed Ωb, though closed loops generally only existed for isolated values of fo. As discussed in the next subsection, the extra constraints eliminate most of these solutions. 6.2 Self-consistent loop orbit bars? Already, in the non-self-gravitating cases above, we found large areas of parameter space with no physical orbit solutions to the perturbation equations, and the lack of closed loops in the parameter space neighbourhood was confirmed with numerical orbit integrations. With the additional constraints from the Poisson equation, the regions of parameter space with loop orbits seem to be very small. This is evident just from the first of equations (42), which provides another relation between the fi coefficients of the azimuthal velocity (and e). For example, we can follow the procedure of the previous sections to determine a value of fo iteratively for given values of δb, δ, and Ωb that yields a loop orbit if it exists. Then the values of f1, f2 and the eccentricity parameter are also determined. However, the odds that these values incidentally satisfy the first of equations (42) will generally be very small. This consideration alone eliminates most of the loop solutions of the previous sections. The second of equations (42) can be viewed as an expression for the density variation along the ϕ = 0 axis, and so, does not constrain the solutions. The third equation of the set is constraining in several ways. The first is that since it gives a relation between δ and δb, so one free parameter is eliminated. Loop orbits were found previously only for certain values of those two variables (for a given pattern speed), and those solutions which do not happen to also solve this third condition will be eliminated. This constraint is not as stringent as that imposed by the first equation, since a wide range of δ, δb values produce loops. This third equation also provides some more detailed constraints. Recall that the physical range for the values of δ and δb is about −1.0 to 0.5. 
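The first and third of equations (42) are easy to test numerically for any candidate loop solution. The sketch below encodes them as residuals; the parameter values are invented placeholders in the regime of Section 4.2.1 (δ, δb > −0.5, f1 < 0, f2 > 0), chosen only to illustrate how strongly a generic kinematic loop solution can violate the Poisson constraints.

```python
def poisson_constraint_residuals(f_o, f_1, f_2, e, delta, delta_b):
    """Residuals of the 1st and 3rd of equations (42); both should vanish
    for a loop orbit that is also consistent with the Poisson equation."""
    r1 = f_o - (2.0 * e * f_1 - 2.0 * e**2 * f_2)
    r3 = (1.0 + delta_b) * (0.5 + delta) - f_2 / (2.0 * f_1)
    return r1, r3

# Invented sample values, roughly in the regime of the Section 4.2.1 examples.
r1, r3 = poisson_constraint_residuals(f_o=-0.3, f_1=-0.84, f_2=0.86,
                                      e=0.61, delta=-0.3, delta_b=-0.2)
print(f"residual of the 1st of eqs (42): {r1:+.3f}")
print(f"residual of the 3rd of eqs (42): {r3:+.3f}")
```

Order-unity residuals, with f2/f1 < 0 while (1 + δb)(1/2 + δ) > 0, are exactly what the sign argument in the following paragraph leads one to expect in this regime.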
The value of the factor 1 + δb in that equation is never negative over this range of δb values. On the other hand, the values of f1 and f2 can have either sign. For example, in the case shown in Figs 1–4, where δ, δb > −0.5, f1 < 0 and f2 > 0. With these values the left side of the third equation is positive and the right negative, so solutions like those of Figs 1–4 cannot satisfy the Poisson constraints. We must have δb < −0.5 when f2/f1 < 0. The orbits shown in Figs 5–7 provide another example with δ, δb < −0.5. Here both f1 and f2 have positive values, so with these values the third equation is again violated, and more orbit solutions are precluded. Although we only have examples, not a rigorous proof, it appears that if the values of δ and δb are close to each other, then the Poisson constraint is violated. A third example is when δb < −0.5. Then the factor (1 + δb) will be small, and since the ratio f2/f1 is often of order unity, the factor f2/(2(1 + δb)f1) is likely to be of order unity or larger. Yet if δ > −0.5, then (0.5 + δ) is less than unity (unless the value of δ is near 0.5), so the Poisson constraint is not satisfied. (We have found cases like this, that yield kinematic loop orbits, but did not describe them above.) A more systematic analysis of the various cases could be done, but these several examples make the point that the third of equations (42) is very constraining. Thus, we conjecture that a self-consistent bars can be constructed from loop orbits alone, in two dimensions, only in rare, or in no cases. Evidently, the orbits in self-consistent bars must be more complex in two dimensions. Moreover, it appears likely that the restrictions above would also apply to loop orbits with small librations; these are also not sufficiently complex. The restrictions above may, however, be loosened in a three-dimensional bar. Consider a very simple example of a cylindrical bar consisting of loops parallel to the disc plane, but not all lying within that plane. For example, suppose their vertical distribution was described by an exponential term, $$e^{-|z|/z_o}$$, in the density and potential. The z-derivatives of the three-dimensional Poisson equation would introduce $$z_o^2$$ terms in the last two constraint equations (42). Then the last of these constraints could be viewed as an equation for this new parameter. If zo was a function of radius, then more parameters, describing this variation, would be introduced. These additional parameters would allow a broader range of solutions, and at least in principle, allow for cylindrical loop bars. We will not explore this topic further here. 7 RAMIFICATIONS FOR TIDAL AND GASEOUS BARS The possibility of generating bars in flyby galaxy collisions is natural because both the tidal force and the bar have the same basic symmetry. Noguchi (1987, 1988) first investigated this with numerical hydrodynamical simulations (also see Barnes & Hernquist 1991). The simulations of Gerin, Combes, & Athanassoula (1990) demonstrated that the process can either strengthen or weaken pre-existing bars (also see Miwa & Noguchi 1998). Moetazedian et al. (2017) and Zana et al. (2018) showed that interactions with small companions can result in delayed bar formation. Berentzen et al. (2004) found that bars regenerated in stellar discs, but not in dissipative discs. Instead the gas was efficiently funnelled to the central regions. Thus, modelling to date suggests that the formation of long-lived stellar bars can be triggered in interactions, but not gaseous bars. 
What about gaseous bars in isolated, but bar-unstable discs? Several published high-resolution simulations partially address this question. For example, the models of Mayer & Wadsley (2004) produce small, eccentric, long-lived, gas bars, contained within larger stellar bars. The images of Fanali et al. (2015) and Spinoso et al. (2017) suggest similar results, i.e. very small and weak gas bars, though even these do not appear as long-lived as in Mayer & Wadsley (2004). Fanali et al. (2015) report that gas is emptied rapidly in a dead zone between co-rotation and inner Lindblad resonances, making it hard to feed nuclear activity at later times. These high-resolution results accord with several of the findings above for bars made of nested loop orbits. Specifically, that nested, non-intersecting orbits can exist in asymmetric external potentials, but that the most eccentric resonant orbits are relatively small, and that the strength of the asymmetric part of the external potential is relatively large. The decay of these model gas bars also agrees with the result that self-gravitating bars are unlikely to be stable and long-lived. The model of Renaud et al. (2015) shows a younger bar than in most previous published simulations. The gaseous part of this bar consists of a thin elliptical annulus, with spiral-like waves on the inside and outside. The inner spirals meet the annular bar near the minor axis of the latter. The morphology of this model gas bar suggests that it might consist of a group of orbits like those in Fig. 9, with crossings between orbits outside a very narrow range of parameters resulting in spiral waves. To return to the case of interaction-induced bars, the orbital results of the previous sections provide a basis for the following picture of tidal bar evolution. First, in a prolonged prograde encounter, the large-scale coherence of the perturbing potential may excite nested eccentric resonant orbits with a common pattern speed, like those in Fig. 8. As explained in Section 5, the detailed structure of these orbits will depend on both the shape of the symmetric potential in the disc and the perturbing potential. In the gas, dissipation in nearby, non-resonant orbits may synchronize with the resonant orbit. The dissipation resulting from orbits much different from the resonant ones will drive radial flows, e.g. inside and outside an annular bar. When the companion galaxy leaves, or merges without substantially disrupting the bar, the external asymmetric force disappears, but the stars on resonant orbits may maintain a kinematic bar for some time. On the other hand, if the magnitude of the external potential was substantial, then its disappearance will perturb the orbits. If the bar has significant self-gravity, then it will have more crossing orbits, and dissipation in any remaining gas will aid its dissolution.

8 SUMMARY AND CONCLUSIONS

One major theme of this paper centres on the questions of when, or in what potentials, simple, closed loop orbits exist over a range of radii, as they evidently can in the well-studied Ferrers potentials. When this is the case there exists a dense, nested ensemble of such orbits that could themselves serve as a simple model of the bar. Moreover, these loop orbits generally will be parents of more complex orbit families, as functions of the orbit parameters. A second theme concerns the usefulness of scale-free, power-law potentials, viewed as elementary forms from which more complex potentials could be constructed.
The orbit structure of these simple potentials is less complex than many of those used in numerical simulations. In the above, we used p-ellipse functions to find and approximate the simplest orbits in these potentials, in a perturbation limit. The p-ellipse approximation was originally found to provide very good fits to precessing, eccentric orbits in symmetric power-law potentials, as these fits do not drift with time (Struck 2006), since the approximation tracks the orbital precession. It is reasonable to expect that the approximation might also be useful for fitting resonant loop orbits (i.e. closed orbits in the pattern frame) in potentials with a modest asymmetric component, like a bar potential. Since in such orbits the ratio of the precession and pattern frequencies are rational numbers, it seems that any resonant orbit should have a modified precession rate that could be captured by a p-ellipse approximation. This conjecture was shown to be correct in Section 4, where closed orbits in the pattern frame were found to be well approximated by p-ellipses (see Figs 1, 2, and 5). Good fits were found in potentials with both shallow and steeply rising rotation curves. Differences and similarities between the orbits in these two cases were discussed in Section 4.3. One of the most interesting results is that p-ellipse, loop (m = 2) orbits are not predicted to exist in (power-law) falling rotation curve potentials by the perturbation analysis. However, loop orbits are found numerically in such potentials. The second-order p-ellipse solutions of Section 3 seem to fail in these potentials. This phenomenon will be explored in a later paper. Even in the case of rising rotation curve potentials, the exploratory calculations of Section 4 show complexities. Fig. 1 shows the closed, resonant orbit in a sample case. Fig. 2 shows that even a small deviation from the initial conditions of Fig. 1 yields a smaller, unclosed, and more eccentric orbit. Although it cannot capture the small librations of this orbit, the p-ellipse approximation is still a good fit to the mean orbit. Figs 3 and 4 show that further small variations in the initial conditions yield progressively larger, and rounder orbits. These orbits are characterized by clear subharmonic frequencies. The p-ellipse approximation can only capture the innermost loops of these orbits. Given the proximity to the resonant orbit, the subharmonics may be the result of ‘beating’ between the precession and pattern frequencies. These subharmonics may also relate to the β frequency in the epicyclic perturbation analysis of Binney & Tremaine (2008, section 3.3). The initial conditions around the simple closed loop are dense with resonances between the subharmonic and primary orbital frequency, which produce the closed multi-loop orbits like those in Figs 3 and 4. Multi-frequency p-ellipse approximations to the closed, multi-loop orbits, using the equations of Section 3.2, will be investigated in a future paper. Although the resonant, multi-loop orbits cannot be modelled by simple, p-ellipses, solutions to the p-ellipse constraint equations guide us to good estimates of their initial conditions by simply varying the parameter fo. Resonant orbits from a small region of parameter space, like those in Figs 1–4, can be combined to produce a model bar in the given potential. 
This provides a different technique to the more traditional one of constructing orbits via perturbation ellipses around Lagrange points in a given potential (see Binney & Tremaine 2008, section 3.3). Figs 4 and 8 show a surprising feature of closed, multi-loop orbits. Half of their innermost loops can be well fit by a p-ellipse with nearly the same initial conditions, and these segments could support the bar. However, their outermost loops are generally much more circular, and in isolation would look like segments of a disc orbit unrelated to the bar. In a given power-law potential (specified by the values of δ and δb) each approximate, resonant orbit requires a specific value of the asymmetric amplitude eb to satisfy the equation of motion constraints. For an ensemble of such orbits, making up a model bar, a specific radial variation of eb is required. It is unlikely that the needed pattern of eb would obtain for arbitrary initial disc structures. However, gas clouds will prefer more or less concentric loop orbits. Dissipative processes might drive bar parameters (e.g. a combination of external and internal contributions to the asymmetric potential) to values that support the required variation. Note that the constraint equations are such that the required values of eb do not depend on the pattern speed. The zeroth-order orbital frequency parameter, fo, does. An alternative would be radial variations of the potential profile index (δ or δb). Such variations must be small, or the perturbation equations have to be modified. An example was discussed in Section 5 and illustrated in Fig. 8. If, as in this example, the rotation curve becomes flat or slightly declining, subharmonic frequencies become dominant, and we get large, multi-loop orbits. Going beyond external asymmetric potentials, in the construction of a model self-gravitating bar the Poisson eq. can be viewed as a prescription for the density distribution of concentric loop orbits making up the bar (see Section 6.1). It also imposes strong additional constraints in a perturbation approximation. In fact, orbital solutions to the perturbation equations with the additional constraints from the Poisson equation given in Section 6.1 seem to be very rare. Evidently two-dimensional bars must be based on more complex orbits, or perhaps most bars have a three-dimensional structure. Altogether, these results suggest a number of interesting general conclusions. Perhaps the most important of these is that it is hard to make model bars from single-loop orbits alone, and thus, bars and oval distortions made primarily from gas are unlikely to form in galaxy discs. This conclusion is not too surprising since bars are observed to be primarily stellar, and an extensive literature of models shows that they generally have a wide range of orbits, including chaotic ones (e.g. Contopoulos 2002; Weinberg 2015a,b and references therein). On the other hand, the analytic and numerical explorations above provide some insights as to why this is so. These insights include the fact that resonant, loop orbits (approximated by p-ellipses) are only found in limited regions of parameter space, at least in the perturbation approximations of Section 3. In non-self-gravitating cases, these solutions tend to have large values of the asymmetric amplitude eb, suggesting the external potential has a strong asymmetric part. 
Technically, this violates one of the perturbation approximations, which assumed that eb ≃ e, but the good fits between numerical and analytic orbits at low to moderate values of e suggest the consequences are not serious. The bar orbits illustrated above are wide, even at quite high values of the eccentricity parameter e. This suggests that more complex orbits (e.g. like those described in Williams & Evans 2017) are needed to support narrow bars. Loop orbit solutions in two-dimensional self-gravitating bars are at best very rare. It is not surprising that significant self-gravity would lead to more complex orbits. This may be a factor in understanding why bars are effective at quenching gas-rich discs (e.g. Khoperskov et al. 2018). Beyond this initial exploration, there are many directions to pursue using the p-ellipse approximation tool for the study of orbits in galaxies' asymmetric potentials. For example, p-ellipse approximations with multiple frequencies likely converge much more quickly than conventional Taylor expansions in cos(ϕ). Specifically, the case of librating p-ellipse orbits with an additional subharmonic frequency will be described in a later paper.

ACKNOWLEDGEMENTS

I am very grateful for the insights gained from a correspondence over the last decade on orbits in galaxies with the late Donald Lynden-Bell. I acknowledge the use of NASA’s Astrophysics Data System.

REFERENCES

Athanassoula E., 1992, MNRAS, 259, 328
Athanassoula E., 2013, in Falcón-Barroso J., Knapen J. H., eds, Secular Evolution in Galaxies, Cambridge University Press, Cambridge, p. 305
Barnes J. E., Hernquist L. E., 1991, ApJ, 370, L65
Berentzen I., Athanassoula E., Heller C. H., Fricke K. J., 2004, MNRAS, 347, 220
Bertin G., 2000, Dynamics of Galaxies. Cambridge Univ. Press, Cambridge
Binney J., Tremaine S., 2008, Galactic Dynamics. Princeton Univ. Press, Princeton, NJ
Christodoulou D. M., Kazanas D., 2017, preprint (arXiv:1707.04937)
Contopoulos G., 2002, Order and Chaos in Dynamical Astronomy. Springer, New York
Contopoulos G., Grosbol P., 1989, A&ApRv, 1, 261
Contopoulos G., Mertzanides C., 1977, A&A, 61, 477
Ernst A., Peters T., 2014, MNRAS, 443, 2579
Fanali R., Dotti M., Fiacconi D., Haardt F., 2015, MNRAS, 454, 3641
Freeman K., 1966a, MNRAS, 133, 47
Freeman K., 1966b, MNRAS, 134, 1
Freeman K., 1966c, MNRAS, 134, 15
Gajda G., Łokas E. L., Athanassoula E., 2016, ApJ, 830, 108
Gerin M., Combes F., Athanassoula E., 1990, A&A, 230, 37
Jung C., Zotos E. E., 2015, PASA, 32, e042
Khoperskov S., Haywood M., Di Matteo P., Lehnert M. D., Combes F., 2018, A&A, 609, A60
Lynden-Bell D., 1979, MNRAS, 187, 101
Lynden-Bell D., 1996, in Sandquist Aa., Lindblad P. O., eds, Barred Galaxies and Circumnuclear Activity, Lecture Notes in Physics, 474, Springer, New York, p. 7
Lynden-Bell D., 2010, MNRAS, 402, 1937
Manos T., Machado R. E. G., 2014, MNRAS, 438, 2201
Mayer L., Wadsley J., 2004, MNRAS, 347, 277
Miwa T., Noguchi M., 1998, ApJ, 499, 149
Moetazedian R., Polyachenko E. V., Berczik P., Just A., 2017, A&A, 604, A75
Noguchi M., 1987, MNRAS, 228, 635
Noguchi M., 1988, A&A, 203, 259
Renaud F., et al., 2015, MNRAS, 454, 3299
Sellwood J. A., 2014, Rev. Mod. Phys., 86, 1
Sellwood J. A., Wilkinson A., 1993, Rep. Prog. Phys., 56, 173
Spinoso D., Bonoli S., Dotti M., Mayer L., Madau P., Bellovary J., 2017, MNRAS, 465, 3729
Struck C., 2006, AJ, 131, 1347
Struck C., 2015a, MNRAS, 446, 3139
Struck C., 2015b, MNRAS, 450, 2217
Valluri S. R., Wiegert P. A., Drozd J., Da Silva M., 2005, MNRAS, 427, 2392
Valluri M., Shen J., Abbott C., Debattista V. P., 2015, ApJ, 818, 141
Weinberg M. D., 2015a, preprint (arXiv:1508.06855)
Weinberg M. D., 2015b, preprint (arXiv:1508.05959)
Williams A. A., Evans N. W., 2017, MNRAS, 469, 4414
Zana T., Dotti M., Capelo P. R., Bonoli S., Haardt F., Mayer L., Spinoso D., 2018, MNRAS, 473, 2608

© 2018 The Author(s). Published by Oxford University Press on behalf of the Royal Astronomical Society. Monthly Notices of the Royal Astronomical Society, May 1, 2018.
# MacLaurin’s inequality

Let $a_{1},a_{2},\ldots,a_{n}$ be positive real numbers, and define the sums $S_{k}$ as follows:

$S_{k}=\frac{\displaystyle\sum_{1\leq i_{1}<i_{2}<\cdots<i_{k}\leq n}a_{i_{1}}a_{i_{2}}\cdots a_{i_{k}}}{\displaystyle\binom{n}{k}}$

Then the following chain of inequalities is true:

$S_{1}\geq\sqrt{S_{2}}\geq\sqrt[3]{S_{3}}\geq\cdots\geq\sqrt[n]{S_{n}}$

Note: the $S_{k}$ are called the averages of the elementary symmetric sums. This inequality is in fact important because it shows that the arithmetic-geometric mean inequality is nothing but a consequence of a chain of stronger inequalities.
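A quick numerical spot-check of the chain in Python (an illustration, not a proof):

```python
import random
from itertools import combinations
from math import comb, prod

def maclaurin_averages(a):
    """S_k = (sum of all k-fold products of the a_i) / C(n, k)."""
    n = len(a)
    return [sum(prod(c) for c in combinations(a, k)) / comb(n, k)
            for k in range(1, n + 1)]

random.seed(0)
a = [random.uniform(0.1, 10.0) for _ in range(6)]
S = maclaurin_averages(a)
roots = [S[k - 1] ** (1.0 / k) for k in range(1, len(a) + 1)]
print([round(x, 4) for x in roots])
# The k-th roots should be non-increasing, per MacLaurin's inequality.
assert all(roots[i] >= roots[i + 1] - 1e-12 for i in range(len(roots) - 1))
```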
# Tracking unknown number of signals/coordinates I have a time series data. Where for every time step I have a varying number of object coordinates. I want to track these over time. There might also be some missing coordinates for a few frames. Ideally, given the time series input and I am looking to find out a list of tracks and their respective start and end time. What kind of algorithms am I supposed to use? Are there any implementations available. Your problem falls into the category of problems known as $Multitarget\; Tracking$. Are there algorithms?, you Betcha there are algorithms. This is an active area of research. IEEE Explore returns 1,620 hits for (multitarget tracking) The optimal algorithm is known as Reid's Multi Hypothesis Tracker (MHT), which unfortunately requires the exhaustive enumeration of an exponentially growing set of hypothesis. There are various heuristics to keep thing manageable. Another contender is Bar Shalom's Joint Probability Data Association Filter (JPDA) which isn't optimal, but is more manageable. There are numerous other algorithms as well. Fundamentally, the general problem is NP hard. The essential problem is that if each measurement was labeled from each source, the data could directly update something like a Kalman Filter for each object. Unfortunately, data is not labeled, and there are also false and missed measurements. Most approaches attempt the data association anyway. Most implementations are closely held, As far as Matlab code goes, you can try. https://www.mathworks.com/matlabcentral/fileexchange/43526-multiple-target-tracking-with-multiple-observations Most free code that I've seen only do data association over a single measurement epoch, not over multiple epochs. If you want to look at books, Multiple-target Tracking with Radar Applications Book by Samuel S. Blackman Multitarget-Multisensor Tracking Book by Yaakov Bar-Shalom If you don't want to turn this problem into a lifelong obsession, (too late for me) , I would suggest doing a k-means for a reasonable number of k's, at each time instance. formulate a heuristic of what looks best, and then update a set of something like a set of Kalman Filters. This will work until you have crossing targets, and then you have to come up with another heuristic. Normally a filter can be deduced from the Bayes equation, which deals with random variables. Ronald Mahler has generalized the Bayes equation to deal with random finite sets (RFS), where each element, if any, might be a tracking vector. With this RFS algebra, hi has proposed a filter (PHD-Filter) which handles all current tracks and all available measurements at once, with no need of heuristics and data association steps. Bar Shalom himself has published comments on it, and several authors have contributed with even more robust filters. Some results are available at youtube, like this or this.
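As a concrete illustration of the "cluster, then associate, then filter" suggestion at the end of the answer above, here is a minimal per-frame update built from k-means clustering, Hungarian assignment, and constant-velocity Kalman filters. Everything here is a placeholder (the motion model, noise levels, choice of k, and the toy data); a usable tracker would also need track initiation, deletion, and gating logic.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.cluster import KMeans

DT = 1.0
F = np.array([[1, 0, DT, 0], [0, 1, 0, DT], [0, 0, 1, 0], [0, 0, 0, 1]], float)  # constant-velocity model
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)                                # position-only measurements
Q = 0.05 * np.eye(4)   # process noise (placeholder)
R = 0.10 * np.eye(2)   # measurement noise (placeholder)

class Track:
    def __init__(self, xy):
        self.x = np.array([xy[0], xy[1], 0.0, 0.0])
        self.P = np.eye(4)

    def predict(self):
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + Q

    def update(self, z):
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - H @ self.x)
        self.P = (np.eye(4) - K @ H) @ self.P

def step(tracks, points, k):
    """One frame: cluster raw detections into k centroids, assign centroids to tracks, update."""
    centroids = KMeans(n_clusters=k, n_init=10).fit(points).cluster_centers_
    for t in tracks:
        t.predict()
    cost = np.array([[np.linalg.norm(t.x[:2] - c) for c in centroids] for t in tracks])
    rows, cols = linear_sum_assignment(cost)
    for r, c in zip(rows, cols):
        tracks[r].update(centroids[c])
    return tracks

# Toy usage: two objects drifting apart, three noisy detections each per frame.
rng = np.random.default_rng(1)
tracks = [Track([0.0, 0.0]), Track([5.0, 0.0])]
for frame in range(10):
    truth = np.array([[0.3 * frame, 0.0], [5.0 + 0.2 * frame, 0.1 * frame]])
    points = np.repeat(truth, 3, axis=0) + 0.1 * rng.standard_normal((6, 2))
    step(tracks, points, k=2)
print([np.round(t.x[:2], 2) for t in tracks])
```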
# Find the minimal number of guard points of polygon

Given a polygon with $n$ vertices, what is the minimal number of points inside the polygon such that for each interior point there exists at least one point such that the segment between them lies inside the polygon? If the polygon is convex, one point is enough (any point inside the polygon).

- I don't have it with me right now, but if I recall correctly the solution (with proof) is given in this book. – Jonathan Christensen Jan 28 '13 at 16:35
- For a given polygon (as opposed to just the worst case over all polygons of size $n$) this is well-known to be NP-hard, even to approximate. See Wikipedia. – Erick Wong Jan 28 '13 at 16:38

The number is $\displaystyle \left\lfloor\frac{n}3\right\rfloor$, meaning that this number always suffices, and there are polygons for which it is needed. This is Chvátal's Art Gallery Theorem from 1975. The question was originally asked by Klee in 1973. The standard short proof (due to Fisk) triangulates the polygon and 3-colors the vertices of the triangulation so that every triangle receives all three colors.

• Of course, one of the three colors is used at most one-third of the time (that is, at most $\displaystyle \left\lfloor\frac{n}3\right\rfloor$ vertices use this color). Place guards on these vertices. (A small perturbation verifies that we can replace these guards by nearby guards in the interior of the polygon.)
# Extremal loop weight modules for $U_q(\hat{sl}_\infty)$

Mathieu Mansuy, 2013 (Corpus ID: 119570869).

@inproceedings{Mansuy2013ExtremalLW, title={Extremal loop weight modules for \$U_q(\hat\{sl\}_\infty)\$}, author={Mathieu Mansuy}, year={2013} }

We construct by fusion product new irreducible representations of the quantum affinization $U_q(\hat{sl}_\infty)$. The action is defined via the Drinfeld coproduct and is related to the crystal structure of semi-standard tableaux of type $A_\infty$. We call these representations extremal loop weight modules. The main motivations are applications to quantum toroidal algebras $U_q(sl_{n+1}^{tor})$: we prove the conjectural link between $U_q(\hat{sl}_\infty)$ and $U_q(sl_{n+1}^{tor})$ stated in…
## Probability in the North East day #### 6 March 2019 University of Sheffield. Organizers: Nic Freeman and Jonathan Jordan. These people attended the meeting. ## Programme 12:45–13:30 Lunch 13:30–14:20 Helena Stage (University of Manchester) Human behavioural patterns are fundamentally subject to inertia: the longer we have been participating in certain behaviours, lived in our current neighbourhoods, or even suffered under an addiction, the more difficult change becomes. This same concept has also been observed in the increasing political polarisation we have experienced in the past few years. We will discuss how renewal theory can be applied to these problems, with a focus on identifying strongly inertial states and the predictions associated with these. In particular, we will focus on empirically informed human migration or movement patterns, and recidivism models. That is, the likelihood of individuals to re-offend after a previous crime. A hallmark of strongly inertial systems is the lack of an equilibrium state, whereby their evolution is explicitly time dependent. These problems can be understood either through the lens of random walks or survival analysis. 14.20–15.10 Matthew Aldridge (University of Leeds) Suppose you wish to use a blood test to screen a group of people for a rare disease. You could take a blood sample from each person and test the samples individually. However, it can be more efficient to mix a number of samples together and test that mixture: if the test comes back negative then none of those people have the disease, while if the test is positive then at least one of them has the disease and further investigation is needed. This problem is called group testing: given n people of whom k have the disease, how many of these mixed tests do we need to find out which people are infected? In this talk, we discuss recent progress on this question, concentrating on nonadaptive testing, where the tests are all designed in advance, so they can be conducted in parallel. We will look at practical algorithms and compare their performance to information theoretic limits. 15:10–15:30 Tea and coffee 15:30–16:20 Sarah Penington (University of Bath) Consider a system of $N$ particles moving according to Brownian motions and branching at rate one. Each time a particle branches, the particle in the system furthest from the origin is killed. It turns out that we can use results about a related free boundary problem to control the long term behaviour of this particle system for large $N$. This is joint work with Julien Berestycki, Eric Brunet and James Nolen. 16:20–17:10 Henning Sulzbach (University of Birmingham) In the analysis of recursive algorithms and related random trees, fixed-point arguments involving the contraction method have proved fruitful over the last 30 years. I will speak about functional extensions of such results covering the analysis of classical tree-type data structures with nice geometric representations, extensions to random fields and connections to decompositions of random continuum real trees. The methods also allow to make statements on geometric properties of real trees including their fractal dimension and degrees. Examples include Aldous' CRT as well as dual trees of triangulations introduced by Curien and Le Gall.
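As a small illustration of the nonadaptive group-testing setup described in the second talk above, here is a hedged Python sketch: a random Bernoulli pooling design and the simple COMP decoder (declare healthy anyone who appears in at least one negative pool). The design, the parameter choices, and the decoder are assumptions made purely for illustration, not anything presented at the meeting:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, T = 500, 10, 120            # population size, number infected, number of pooled tests
p = 1.0 / k                       # each person joins each pool independently with prob ~1/k

infected = np.zeros(n, dtype=bool)
infected[rng.choice(n, size=k, replace=False)] = True

design = rng.random((T, n)) < p                              # design[t, i]: person i is in pool t
positive = (design.astype(int) @ infected.astype(int)) > 0   # a pool is positive iff it contains an infected person

# COMP decoding: anyone who appears in at least one negative pool is declared healthy
declared = ~((design & ~positive[:, None]).any(axis=0))

print("true positives :", int(np.sum(declared & infected)), "of", k)
print("false positives:", int(np.sum(declared & ~infected)))
```

With far fewer than n tests, the decoder recovers all infected individuals and only a handful of false positives, which is the kind of gap to the information-theoretic limits the talk compares against.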
# What is a name for co-Sobczyk Banach spaces?

Definition. Let us define a Banach space $$X$$ to be co-Sobczyk if every linear bounded operator $$T:Z\to c_0$$ defined on a separable subspace $$Z$$ of $$X$$ extends to a bounded operator $$\bar T:X\to c_0$$.

By the classical Sobczyk Theorem, each separable Banach space is co-Sobczyk. But the class of co-Sobczyk spaces includes many non-separable Banach spaces. In particular, a Banach space $$X$$ is co-Sobczyk if each separable subspace of $$X$$ is contained in a complemented separable subspace. So, all classical Banach spaces $$c_0(\Gamma)$$ and $$\ell_p(\Gamma)$$ for $$1\le p<\infty$$ are co-Sobczyk for any set $$\Gamma$$.

I have a strong feeling that co-Sobczyk spaces have been studied in the theory of non-separable Banach spaces, so I am asking the MO community for a proper reference and the existing terminology (I suspect that co-Sobczyk spaces are called something different).

Nigel Kalton studied a similar but stronger notion: for $$\lambda \geqslant 1$$, he termed a Banach space $$X$$ to have the $$(\lambda, \mathcal{C})$$-extension property when, for any compact space $$K$$, you may find extensions of operators $$T$$ from subspaces of $$X$$ into $$C(K)$$ to operators from $$X$$ to $$C(K)$$ with norm at most $$\lambda \|T\|$$. It is thus natural to say that your spaces have the $$(\lambda, c_0)$$-separable extension property if you care about the extension constant. Update: Correa and Tausk call this the separable $$c_0$$-extension property.

• Thank you for the answer. Has the term "$(\lambda,c_0)$-separable extension property" been used in any written paper? Because I thought also about the name "$c_0$-coinjective". Jun 6, 2019 at 10:26

• @TarasBanakh, yes, up to a permutation. Here this is called the separable $c_0$-extension property. sciencedirect.com/science/article/pii/S0022247X13002540 Jun 6, 2019 at 10:52
# Dillion Most popular questions and responses by Dillion 1. ## physics A winch is used to drag a 375 N crate up a ramp at a constant speed of 75 cm/s by means of a rope that pulls parallel to the surface of the ramp. The rope slopes upward at 33 degreesabove the horizontal, and the coefficient of kinetic friction between the 2. ## Geometry Let A and B be two points on the hyperbola xy=1, and let C be the reflection of B through the origin. Let Gamma be the circumcircle of triangle ABC and let A' be the point on Gamma diametrically opposite A. Show that A' is also on the hyperbola xy=1. 3. ## math Given a hexagon $ABCDEF$ inscribed in a circle with $AB = BC, CD = DE, EF = FA$, show that $\overline{AD}, \overline{BE}$, and $\overline{CF}$ are concurrent. [asy] unitsize(2 cm); pair A, B, C, D, E, F, G; A = dir(85); B = dir(45); C = dir(5); D = 4. ## Algebra V is between 4.5 and 4.6 inclusive. 5. ## social studies What does methodology mean. http://www.m-w.com/cgi-bin/dictionary?book=Dictionary&va=methodology 6. ## math If a = 12, b = 30, and c = 22, find the area of to the nearest tenth. 1. ## Geometry Nvrm I solved it. posted on November 17, 2019 2. ## Geometry How do we find the coordinates of A'? posted on November 17, 2019 3. ## math They aren't diameters tho. posted on October 6, 2019 4. ## science asia posted on February 24, 2014 5. ## statistics 242.081, 157.919 posted on February 22, 2013
# 3.5.3.3.8 InvF ## Definition: The invf(value, m, n) function is the inverse F distribution function with m and n degrees of freedom. This function is used to calculate the p-value involving F distributions. ## Parameters: value (input, double) the value of F variate m (input, integer) the degrees of freedom of the numerator variance n (input, integer) the degrees of freedom of the denominator variance ## Example: invf(3.682, 2, 15) = 0.94999
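The example value can be reproduced with a standard statistical library. Read literally, the example maps the F value 3.682 (with 2 and 15 degrees of freedom) to the probability 0.94999, which matches the cumulative distribution function of an F(2, 15) variate. A small Python check (SciPy is used here only for illustration; it is not part of this product's documentation):

```python
from scipy import stats

# invf(3.682, 2, 15) = 0.94999 in the example above
p = stats.f.cdf(3.682, dfn=2, dfd=15)
print(round(p, 5))                      # ~0.95

# Going the other way, the 0.95 quantile of F(2, 15) is ~3.682
print(stats.f.ppf(0.95, dfn=2, dfd=15))
```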
Here 'I' refers to the identity matrix: the matrix that leaves any matrix unchanged under multiplication, so that AI = IA = A. The matrices considered here can be 2x2, 3x3 or even 4x4, in regard of their numbers of rows and columns.

The transpose of a matrix is obtained by turning its rows into columns: each [i, j] element of the new matrix gets the value of the [j, i] element of the original one. Transposing an m × n matrix therefore gives an n × m matrix, and taking the transpose twice returns the original matrix (the transpose is an involution). The transpose of a square matrix, one with an equal number of rows and columns, is the most common case; the transpose of a 2x2 matrix can be considered as a mirrored version of it across the main diagonal. Multiplying a matrix by its own transpose always produces a square, symmetric matrix. For complex matrices the analogous operation is the conjugate transpose, which corresponds to the adjoint operator.

There are specific restrictions on the dimensions of matrices that can be multiplied: in the product AB, the number of columns in matrix A must be equal to the number of rows in matrix B, and the resulting product matrix has as many rows as A and as many columns as B (so multiplying a 2x2 by a 2x3 gives a 2x3). Matrix multiplication is associative, analogous to simple algebraic multiplication, but it is not commutative in general: AB ≠ BA.

The adjoint of a matrix (also referred to as the adjugate or classical adjoint) is the transpose of its cofactor matrix. For a 2x2 matrix, the inverse can be computed from the adjugate divided by the determinant; a matrix whose inverse exists, i.e. A·A⁻¹ = A⁻¹·A = I, is called an invertible or nonsingular matrix. These are the facts a 2x2 inverse or transpose calculator uses: enter the entries of the matrix, and it returns the transpose, the determinant, the adjoint, and (when the determinant is nonzero) the inverse.
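A small Python sketch of the same 2x2 computations (numpy is used purely for illustration; the original page describes an online calculator, not code, and the sample matrix below is arbitrary):

```python
import numpy as np

A = np.array([[5.0, 8.0],
              [4.0, 1.0]])

A_T = A.T                                     # transpose: rows become columns
det = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]   # determinant of a 2x2 matrix

# adjugate (classical adjoint) of a 2x2: swap the diagonal, negate the off-diagonal
adj = np.array([[ A[1, 1], -A[0, 1]],
                [-A[1, 0],  A[0, 0]]])

A_inv = adj / det                             # valid only when det != 0

print(A_T)
print(np.allclose(A @ A_inv, np.eye(2)))      # A times its inverse gives the identity
print(np.allclose(A @ A.T, (A @ A.T).T))      # A times its transpose is symmetric
```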
On a “minimal” Cevian triangle As is well-known, a Cevian triangle is the triangle formed by the intersections of a triangle's sides with their corresponding Cevians (a line through a point inside the triangle, i.e. the Cevian point, and the vertex opposite the side). An application I'm considering requires me to reckon the Cevian triangle of smallest area with respect to the centroid of the triangle that is similar to the original triangle. For the equilateral case, the medial triangle is an "obvious" solution, but is the medial triangle the Cevian triangle with respect to the centroid that is similar and has the smallest area, or are there more "optimal" triangles? I've tried searching around, but I am probably not using the right keywords. Any help would be lovely. - If you know the Cevian point, I would have though that there was only one Cevian triangle, and for the centroid it is the medial triangle (which, unlike most Cevian triangles, is similar to the original triangle). So what is the question? –  Henry Apr 11 '11 at 7:41 You can have Cevian triangles which are similar to the original triangle but smaller than the medial triangle, though not related to the centroid. Take the triangle $(0,0)$, $(1,0)$, $(0,1)$: the centroid is about $(0.333,0.333)$ and the medial triangle $(0.5,0)$, $(0,0.5)$, $(0.5,0.5)$ has area $0.125$. However with a Cevian point of about $(0.179509025, 0.30437923)$, the Cevian triangle $(0.258055872,0)$, $(0,0.370972064)$, $(0.370972064,0.629027936)$ [all roots of cubic equations] is similar but has area $0.102106553$. I doubt there are more than 6 similar Cevian triangles. –  Henry Apr 11 '11 at 13:55 The medial triangle is the [internal] Cevian triangle of largest area. To see this, use an affine transform to map the given triangle into the triangle (0,0), (0,1), (1,0). Cevians through $(x,y)$ then yield the points $(\frac{x}{x+y},\frac{y}{x+y}), (\frac{x}{1-y},0), (0,\frac{y}{1-x})$, giving an area of $\frac{1}{2}(\frac{xy}{(1-y)(x+y)} + \frac{xy}{(1-x)(x+y)} - \frac{xy}{(1-x)(1-y)})$ = $\frac{xy}{2}\frac{(1-x)+(1-y)-(x+y)}{(1-x)(1-y)(x+y)}$ = $\frac{1}{2}\frac{xy(2-2x-2y)}{(1-x)(1-y)(x+y)}$. Defining a=1-x, b=1-y, c=x+y, we can rewrite this as $\frac{1-a}{a}\frac{1-b}{b}\frac{1-c}{c}$ subject to $a+b+c=2$. (Perhaps there is an easier way to get to this point, since these three fractions are exactly the ratios of the segment lengths in the three Cevians — a nice result in itself.) Maximizing the log, we want to maximize $\sum log(\frac{1-a_i}{a_i})$ given that $\sum a_i$ is fixed. If the $a_i$ are all above 1/2, then Jensen's inequality gives us what we want, since $log(\frac{1-a}{a})$ is concave above 1/2. If any $a_i$ is below 1/2, there must be another $a_j$ which is above 1-$a_i$ (even above 1-$a_i$/2), so raising $a_i$ and lowering $a_j$ will increase $\sum log(\frac{1-a_i}{a_i})$ in that case as well, since the derivative of $log(\frac{1-a}{a})$ is more extreme the farther $a$ is from 1/2. Therefore the maximum area is attained at a=b=c=2/3, which is the medial triangle (which was also the medial triangle before the affine transformation). There are at most 6 Cevian triangles $\triangle P_A P_B P_C$ (through Cevian point $P$ ) similar to $\triangle ABC$, as Henry points out in the comment above. 
This is because once you pick the shape (angles) of $\triangle P_A P_B P_C$, then there is only a single point $P$  yielding a triangle of that shape (ugly boring proof omitted), and there are 6 ways $\triangle P_A P_B P_C$ could be similar to $\triangle A B C$ (six ways to map the three points of one onto the three points of the other). If there are fewer than 6 ways, it is due to symmetry of the original triangle: there are 3 ways for an isosceles triangle and 1 for an equilateral triangle. Conclusion: If your "application" requires a specific one of these 6 possible similarities (presumably the $\triangle P_A P_B P_C \sim \triangle ABC$ one), then there is only one solution (the medial triangle). Otherwise, if solutions such as in Henry's comment above are acceptable, they will always yield smaller triangles than the medial triangle. Since there are only 5 such non-medial triangles to try, and no clean way for a formula to pick one of them, your best bet is to just try them all. -
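A quick numerical sanity check of the area formula and the maximality claim above, using the same right triangle $(0,0)$, $(1,0)$, $(0,1)$ (this Python snippet is only an illustration of the argument, not part of the original answer):

```python
import numpy as np

def cevian_area(x, y):
    """Area of the Cevian triangle of the point (x, y) inside the triangle (0,0),(1,0),(0,1)."""
    pa = np.array([x / (x + y), y / (x + y)])   # on the hypotenuse
    pb = np.array([x / (1 - y), 0.0])           # on the x-axis side
    pc = np.array([0.0, y / (1 - x)])           # on the y-axis side
    # shoelace formula for the triangle pa, pb, pc
    return 0.5 * abs((pb[0] - pa[0]) * (pc[1] - pa[1]) - (pc[0] - pa[0]) * (pb[1] - pa[1]))

print(cevian_area(1/3, 1/3))                    # 0.125: the centroid gives the medial triangle

# brute-force search over interior Cevian points: nothing beats the centroid
best = max(((cevian_area(x, y), x, y)
            for x in np.linspace(0.01, 0.98, 200)
            for y in np.linspace(0.01, 0.98, 200) if x + y < 0.99),
           key=lambda t: t[0])
print(best)                                     # ~ (0.125, 0.33.., 0.33..)

# Henry's similar-but-smaller example from the comments
print(cevian_area(0.179509025, 0.30437923))     # ~ 0.1021
```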
# Samuel, head of household with two dependents, has 2015 wages of $26,000, paid alimony of $3,000,...

Samuel, head of household with two dependents, has 2015 wages of $26,000, paid alimony of $3,000, has taxable interest income of $2,000, and a $12,000 0%/15%/20% net long-term capital gain. Samuel uses the standard deduction and is age 38. What is his 2015 taxable income and the tax on the taxable income?
## LaTeX forum ⇒ Curricula Vitae / Résumés ⇒ Illegal parameter number in definition of . to be read again ModernCV, Friggeri, Plasmati, Classicthesis-CV, and more Renthal Posts: 2 Joined: Thu Aug 11, 2016 5:25 pm ### Illegal parameter number in definition of . to be read again ! Illegal parameter number in definition of \blx@defformat@d.<to be read again>3 \ifblank{#3! Illegal parameter number in definition of \blx@defformat@d.<to be read again>3 \ifblank{#3}{}{#3 Condiering that I have a fresh up-to-date installation (I made sure to install latest MikTek version 2.9.6022 and just to be sure, I installed last version of ALL packages) I am quite confident the problem relies in the class definition. The problem clearly is located in the bibliography because if I remove it everything works fine. However, biber compiles successfully is Xelatex that fails. Specifically as I understand ( I am no latex expert ) there is a problem with the most recent update of Bibtek (see http://www.texdev.net/2016/03/13/biblatex-a-new-syntax-for-declarenameformat/) but I am not capable of correcting it myself. Tags: Johannes_B Site Moderator Posts: 4044 Joined: Thu Nov 01, 2012 4:08 pm Welcome, the issue is known, viewtopic.php?f=62&t=27239, but unfortunately nobody has updated the template yet. The bibliography section was outdated a while, and finally broke completely. Getting everything up to date is quite a big amount of work. Sorry for the inconvenience. Workaround: Rename the class file to be friggeri-<yourName>.cls and remove all parts that belong to the bibliography. Then load package biblatex as you would do in a usual document. It will look a tiny bit different, but it will work. The smart way: Calm down and take a deep breath, read posts and provided links attentively, try to understand and ask if necessary. Renthal Posts: 2 Joined: Thu Aug 11, 2016 5:25 pm Thank you for the quick answer, I saw the other topic but the "solution" provided there is not working. I am now trying to do what you suggested for a workaround. Removoing everything related to biblio was easy, but now I do not know how to make the bliob fit there (under publications) without creating a new chapter and messing with the template. I am using: \nocite{*}\bibliography{bibliography}\bibliographystyle{alpha} But i get: ! Argument of \@sectioncolor has an extra }.<inserted text>\par \begin{thebibliography}{FCF{\etalchar{+}}11i}! Paragraph ended before \@sectioncolor was complete.<to be read again>\par \begin{thebibliography}{FCF{\etalchar{+}}11i} I don't care too much about the look, is there any simple solution ot make it work? Johannes_B Site Moderator Posts: 4044 Joined: Thu Nov 01, 2012 4:08 pm Now you would be using the older BibTeX system instead of the more modern (and changing) biblatex system, which shouldn't be a problem. But it is Honestly, the template does some crazy stuff. I will have to take a look at this tomorrow. EDIT: If you have time till next week, i can apply a proper fix over the weekend and push it to latextemplates.com. The smart way: Calm down and take a deep breath, read posts and provided links attentively, try to understand and ask if necessary. Pierrick Posts: 1 Joined: Wed Oct 12, 2016 10:21 am Hi everyone! Does anybody already fix the problem ? I am using the template and did an update of MiKTeX yesterday and my CV doesn't compile anymore although I don't us (as far as I know) the bibliography. I tried to comment out everything that is related to the bibliography but then it didn't work neither. 
Johannes_B Site Moderator Posts: 4044 Joined: Thu Nov 01, 2012 4:08 pm The smart way: Calm down and take a deep breath, read posts and provided links attentively, try to understand and ask if necessary.
Two-Connected Spanning Subgraphs with at Most $\frac{10}{7}{OPT}$ Edges

@article{Heeger2017TwoConnectedSS, title={Two-Connected Spanning Subgraphs with at Most \$\frac\{10\}\{7\}\{OPT\}\$ Edges}, author={Klaus Heeger and Jens Vygen}, journal={SIAM J. Discret. Math.}, year={2017}, volume={31}, pages={1820-1835} }

Published 1 September 2016 • Mathematics, Computer Science • SIAM J. Discret. Math.

We present a $\frac{10}{7}$-approximation algorithm for the minimum two-vertex-connected spanning subgraph problem.
Sal Baig, 2010-12-03

Title: The Average Analytic Rank in a Family of Quadratic Twists of an Elliptic Curve over a Function Field

Abstract: The $L$-function of a non-isotrivial elliptic curve over a function field of positive characteristic is known to be a polynomial with integer coefficients. Its analytic rank can thus be computed exactly, providing data to test the validity of Goldfeld's conjecture in the function field case. This conjecture claims that the average rank in a family of quadratic twists of a fixed elliptic curve approaches 1/2 as the degree of the twisting polynomials increases. We (for the most part) show that this average rank is at least 1/2 as the degree of the twisting polynomials grows.
## anonymous 4 years ago What is the orbital radius and the speed of a synchronous satellite which orbits the earth once every 24 h? Take G = 6.67x10^-11 N m^2/kg^2, and the mass of the earth is 5.98x10^24 kg.

1. anonymous $T ^{2} = (4\pi ^{2}/GM) \times R ^{3}$ $(24)^{2} = (4\times3,14^{2}\div(6,67\times10^{-11}\times5,98\times10^{24})) \times R ^{3}$ $R \approx 17,9\times10^{4}m$ $mV ^{2}/R = GMm/R ^{2}$ $V = \sqrt{6,67\times10^{-11}\times5,98\times10^{24}/ 17,9\times10^{4}}$ $V \approx 6,95\times10^{4}m/s$ ;)

2. anonymous T should be 86400

3. anonymous yeah T is "86400", i'm sorry for that, just plug in 86400 where i put "24"
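For reference, here is a quick numerical check of the two formulas in the answer, using T = 86400 s as the later comments point out (plain Python; the constants are the ones given in the question, and the printed values are what the corrected substitution gives):

```python
import math

G = 6.67e-11          # N m^2 / kg^2
M = 5.98e24           # kg
T = 86400.0           # s (the full day in seconds, not "24")

# T^2 = (4 pi^2 / GM) R^3  =>  R = (G M T^2 / 4 pi^2)^(1/3)
R = (G * M * T**2 / (4 * math.pi**2)) ** (1 / 3)

# m V^2 / R = G M m / R^2  =>  V = sqrt(G M / R)
V = math.sqrt(G * M / R)

print(f"R ~ {R:.3e} m")    # ~ 4.2e7 m from the Earth's centre
print(f"V ~ {V:.3e} m/s")  # ~ 3.1e3 m/s
```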
# Inverse Modulus Proof

• October 18th 2008, 09:46 PM aaronrj Inverse Modulus Proof Show that an inverse of a modulo m does not exist if gcd(a, m) > 1.

• October 18th 2008, 11:54 PM Moo Hello,

Quote: Originally Posted by aaronrj Show that an inverse of a modulo m does not exist if gcd(a, m) > 1.

b is an inverse of a modulo m if $ab \equiv 1 \pmod m$. This means we have $ab=1+mk$ where k is an integer, i.e. $ab-mk=1$. By Bézout's identity, we know that $\exists u,v \in \mathbb{Z} \text{ such that } au+mv=1 \Leftrightarrow \text{gcd}(a,m)=1$ (it appears in the French wikipedia and I've been taught this, but I don't know where to find it in English, not in wikipedia nor in mathworld; if you want a proof, tell me). Taking $u=b$ and $v=-k$ shows that such $u,v$ exist, so $\text{gcd}(a,m)=1$. Hence if $\text{gcd}(a,m)>1$, no inverse of a modulo m can exist. And you're done.
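A quick computational illustration of the statement (Python 3.8+, where `pow(a, -1, m)` computes a modular inverse and raises `ValueError` precisely when none exists; the modulus 12 is an arbitrary choice):

```python
from math import gcd

def has_inverse(a, m):
    try:
        pow(a, -1, m)      # modular inverse, available since Python 3.8
        return True
    except ValueError:
        return False

m = 12
for a in range(1, 13):
    assert has_inverse(a, m) == (gcd(a, m) == 1)

print("inverse of 5 mod 12:", pow(5, -1, 12))   # 5, since 5*5 = 25 ≡ 1 (mod 12)
```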
# Set Theory Seminar - Ur Yaar

Title: The Modal Logic of Forcing (Part III)

Abstract: Modal logic is used to study various modalities, i.e. various ways in which statements can be true, the most notable of which are the modalities of necessity and possibility. In set-theory, a natural interpretation is to consider a statement as necessary if it holds in any forcing extension of the world, and possible if it holds in some forcing extension. One can now ask what are the modal principles which captures this interpretation, or in other words - what is the "Modal Logic of Forcing"? We can also restrict ourselves only to a certain class of forcing notions, or to forcing over a specific universe, resulting in an abundance of questions to be resolved. We will begin with a short introduction to modal logic, and then present the tools developed by Joel Hamkins and Benedikt Loewe to answer these questions. We will present their answer to the original question, and then move to focus on the class of sigma-centered forcings, which I investigated in my Master's thesis.

## Date: Wed, 09/01/2019 - 14:00 to 15:30

Ross 73
In [1]: import numpy import tqdm import wendy %pylab inline import matplotlib.animation as animation from matplotlib import cm from IPython.display import HTML import copy _SAVE_GIFS= False rcParams.update({'axes.labelsize': 17., 'font.size': 12., 'legend.fontsize': 17., 'xtick.labelsize':15., 'ytick.labelsize':15., 'text.usetex': _SAVE_GIFS, 'figure.figsize': [5,5], 'xtick.major.size' : 4, 'ytick.major.size' : 4, 'xtick.minor.size' : 2, 'ytick.minor.size' : 2, 'legend.numpoints':1}) numpy.random.seed(2) Populating the interactive namespace from numpy and matplotlib # The Gaia phase-space spiral¶ One of the amazing early discoveries in the Gaia DR2 data set is the Gaia phase-space spiral. This is a spiral feature in the vertical phase-space distribution function $f(z,v_z)$ first found by Antoja et al. (2018). In this example, we investigate how such a phase-space spiral can form from a simple perturbation to the Milky Way's disk. We also use it as an opportunity to showcase the support for arbitrary external forces and for different sorting algorithms in wendy's approximate N-body solution. A simple model for the phase-space spiral is that it results from the disk's equilibrium $f(z,v_z)$ being offset from $\langle v_z \rangle =0$ (Darling & Widrow 2019). As a highly simplified model for this, we initialize a self-gravitating $\mathrm{sech}^2$ disk and only treat a fraction $\alpha$ of that as self-gravitating masses, filling in the rest as a static, external potential. This simplification makes it straightforward to setup the equilibrium solution, because that is just that of a fully self-gravitating, $\mathrm{sech}^2$ disk. First we sample $N$ points from the disk: In [2]: # Initial disk N= 100000 # compute zh based on sigma and totmass totmass= 1. # Sigma above sigma= 1. zh= sigma**2./totmass # twopiG = 1. in our units tdyn= zh/sigma x= numpy.arctanh(2.*numpy.random.uniform(size=N)-1)*zh*2. v= numpy.random.normal(size=N)*sigma v-= numpy.mean(v) # stabilize m= totmass*numpy.ones_like(x)/N We then assign only $\alpha$ of that disk to be self-gravitating and define the external force: In [3]: alpha= 0.3 # "live" fraction # Adjust masses to only represent alpha of the mass m*= alpha # 1-alpha in the mass is then given by the external force sigma2= sigma**2. def iso_force(x,t): return -(1.-alpha)*sigma2*numpy.tanh(0.5*x/zh)/zh We then run the $N$-body simulation, using the approximate algorithm with an external force and using a fast quicksort implementation to calculate the $N$-body forces. For a simple external force implemented using numpy functions as we have done here, numba is used to compile C byte-code which is directly called in the underlying C code, allowing this external force to be very efficiently added to the system (don't worry, external forces that cannot be automatically compiled to C code are also supported, but they are slightly slower). 
In [4]: g= wendy.nbody(x,v,m,0.05*tdyn,nleap=10,approx=True,sort='quick',ext_force=iso_force) In [5]: nt= 1000 xt= numpy.empty((N,nt+1)) vt= numpy.empty((N,nt+1)) xt[:,0]= x vt[:,0]= v x_init= copy.copy(x) v_init= copy.copy(v) for ii in tqdm.trange(nt): tx,tv= next(g) xt[:,ii+1]= tx vt[:,ii+1]= tv 100%|██████████| 1000/1000 [01:44<00:00, 9.54it/s] We check that the original disk is indeed in equilibrium: In [6]: figsize(6,4) fig, ax= subplots() ii= 0 a= ax.hist(xt[:,ii],bins=31,histtype='step',lw=1.,color='k',range=[-8.,8.],weights=31./16./N*numpy.ones(N)) xs= numpy.linspace(-8.,8.,101) ax.plot(xs,totmass/4./zh/numpy.cosh(xs/2./zh)**2.,'b--',lw=2.,zorder=0) ax.set_xlim(-8.,8.) ax.set_ylim(10.**-3.,1.) ax.set_xlabel(r'$x$') ax.set_ylabel(r'$\rho(x)$') ax.set_yscale('log') ax.annotate(r'$t=0$',(0.95,0.95),xycoords='axes fraction', horizontalalignment='right',verticalalignment='top',size=18.) subsamp= 4 def animate(ii): ax.clear() a= ax.hist(xt[:,ii*subsamp],bins=31,histtype='step',lw=1.,color='k',range=[-8.,8.],weights=31./16./N*numpy.ones(N)) xs= numpy.linspace(-8.,8.,101) ax.plot(xs,totmass/4./zh/numpy.cosh(xs/2./zh)**2.,'b--',lw=2.,zorder=0) ax.set_xlim(-8.,8.) ax.set_ylim(10.**-3.,1.) ax.set_xlabel(r'$x$') ax.set_ylabel(r'$\rho(x)$') ax.set_yscale('log') ax.annotate(r'$t=%.0f$' % (ii*subsamp/20.), (0.95,0.95),xycoords='axes fraction', horizontalalignment='right',verticalalignment='top',size=18.) return a[2] anim = animation.FuncAnimation(fig,animate,#init_func=init_anim_frame, frames=nt//subsamp,interval=40,blit=True,repeat=True) # The following is necessary to just get the movie, and not an additional initial frame plt.close() out= HTML(anim.to_html5_video()) plt.close() out Out[6]: Indeed, the disk is in equilibrium! Next, we offset all of the initial velocities by $1\sigma$ and run the simulation again to study the effect of this perturbation. This time we use timsort, a version of Python's own sorting algorithm (typically sort='quick' is in fact the fastest method; wendy also supports sort='merge' for a mergesort, sort='qsort' for the C standard library's own sorting algorithm, and sort='parallel' for a parallel implementation of mergesort). In [7]: x= copy.copy(x_init) v= copy.copy(v_init)+sigma g= wendy.nbody(x,v,m,0.05*tdyn,nleap=10,approx=True,sort='tim',ext_force=iso_force) In [8]: nt= 1000 xt= numpy.empty((N,nt+1)) vt= numpy.empty((N,nt+1)) xt[:,0]= x vt[:,0]= v for ii in tqdm.trange(nt): tx,tv= next(g) xt[:,ii+1]= tx vt[:,ii+1]= tv 100%|██████████| 1000/1000 [01:41<00:00, 9.82it/s] Now we plot the evolution of the phase-space distribution of time, color-coding the points by their initial energy in the unperturbed gravitational field, and we see that a strong spiral quickly develops and winds up over time. The spiral develops, because the starts of different energies have different frequencies and they therefore orbit on different times. The frequency goes down with increasing energy (or the period goes up), so the result is a winding spiral pattern: In [9]: def init_anim_frame(): line1= plot([],[]) xlabel(r'$x$') ylabel(r'$v$') xlim(-7.99,7.99) ylim(-4.99,4.99) return (line1[0],) figsize(6,4) fig, ax= subplots() # Directly compute the initial energy from the known sech^2 disk potential c= v_init**2./2.+2.*sigma2*numpy.log(numpy.cosh(0.5*x_init/zh)) s= 5.*((c-numpy.amin(c))/(numpy.amax(c)-numpy.amin(c))*2.+1.) 
line= ax.scatter(x,v,c=c,s=s,edgecolors='None',cmap=cm.jet_r) txt= ax.annotate(r'$t=%.0f$' % (0.), (0.95,0.95),xycoords='axes fraction', horizontalalignment='right',verticalalignment='top',size=18.) subsamp= 4 def animate(ii): line.set_offsets(numpy.array([xt[:,ii*subsamp],vt[:,ii*subsamp]]).T) txt.set_text(r'$t=%.0f$' % (ii*subsamp/20.)) return (line,) anim = animation.FuncAnimation(fig,animate,init_func=init_anim_frame, frames=nt//subsamp,interval=40,blit=True,repeat=True) if _SAVE_GIFS: anim.save('phasespiral_phasespace.gif',writer='imagemagick',dpi=80) # The following is necessary to just get the movie, and not an additional initial frame plt.close() out= HTML(anim.to_html5_video()) plt.close() out Out[9]:
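To connect the final plot with the explanation above, that the winding arises because the orbital frequency drops (the period grows) with energy, here is a small, hedged check that numerically integrates the vertical period in the unperturbed sech$^2$ potential used in this notebook. The quadrature approach and the sampled energies are my own choices for illustration and are not part of the original run:

```python
import numpy
from scipy import integrate, optimize

sigma2, zh = 1.0, 1.0                                         # same units as above (sigma = totmass = 1)
phi = lambda z: 2.*sigma2*numpy.log(numpy.cosh(0.5*z/zh))     # potential of the sech^2 disk

def period(E, eps=1e-8):
    """Period of a vertical orbit of energy E: T(E) = 4 * int_0^zmax dz / sqrt(2 (E - phi(z)))."""
    zmax = optimize.brentq(lambda z: phi(z) - E, 1e-12, 1e3)  # turning point, phi(zmax) = E
    val, _ = integrate.quad(lambda z: 1./numpy.sqrt(2.*(E - phi(z))), 0., zmax*(1. - eps))
    return 4.*val

for E in [0.25, 0.5, 1.0, 2.0, 4.0]:
    print(E, period(E))   # the period increases with energy, so higher-energy orbits lag behind
```

The monotonic growth of the period with energy is exactly why the color-coded (energy-ordered) points shear into a spiral as the simulation runs.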
Chapter 6 For patients who require orotracheal intubation, digital (tactile) intubation is an alternative airway technique. This procedure involves using the index and middle fingers as a guide to blindly place the endotracheal tube into the larynx. Digital tracheal intubation has been demonstrated to be a safe, simple, and rapid method.1 It should be considered as a secondary method of intubation when other methods prove difficult or impossible.1 It is particularly suited for prehospital and aeromedical use, where equipment and alternate intubation techniques are limited or unavailable. One study demonstrated an 88 percent success rate among paramedics who intubated with this technique.2 For this procedure, the only two significant anatomic structures that the intubator will encounter are the tongue and the epiglottis. The epiglottis is the cartilaginous structure that is located at the root of the tongue and serves as a valve over the superior aperture of the larynx during the act of swallowing.3 Digital orotracheal intubation is an ideal alternative technique for intubating the comatose or chemically paralyzed patient when other more conventional methods for intubation have failed. In particular, this procedure is useful when oral secretions or blood inhibit the direct visualization of the upper airway.1 Since this technique involves minimal movement of the head and neck, it may be a suitable method for intubating patients with known or suspected cervical spine injuries. Digital intubation may be a useful procedure for paramedics and aeromedical personnel in the out-of-hospital setting, when trapped patients require intubation but are not in a position for more conventional methods.2 It is an alternative technique for out-of-hospital intubation where other techniques and equipment are unavailable or limited. This procedure also has been performed successfully in intubating neonates.4 There are no absolute contraindications to digital intubation. The main danger of this procedure is to the health care worker performing the intubation, who is at risk for having his or her fingers bitten by the patient. This technique should not be performed on any patient who is awake or semiconscious. It should be performed only on patients who are paralyzed or unconscious. A relative contraindication would be performing this procedure on a patient with multiple fractured teeth that may abrade or cut the intubator’s fingers. • Endotracheal tubes, various sizes • Wire stylet, malleable (optional) • 10 mL syringe • Water-soluble lubricant or anesthetic jelly • Bag-valve device • Oxygen source and tubing • Gauze, 4×4 squares Endotracheal intubation in the Emergency Department is commonly performed on an emergent or urgent basis. If there is time, the risks, benefits, and complications of the procedure should be explained to the patient and/or the patient’s representative. The use of gloves, a bite block, and gauze over the teeth as guards are recommended when performing this procedure. The patient should be lying supine. If the patient has sustained a concerning mechanism of injury, the cervical spine should be immobilized. An ... Sign in to your MyAccess profile while you are actively authenticated on this site via your institution (you will be able to verify this by looking at the top right corner of the screen - if you see your institution's name, you are authenticated). Once logged in to your MyAccess profile, you will be able to access your institution's subscription for 90 days from any location. 
# Fubini Study Metric and Einstein constant Hi all, it is well known that the complex projective space with the fubini study metric is Einstein, but what is the explicit value, i.e. for which $\mu$ does $Ric=\mu g$ hold? Moreover, I would like to know how to calculate the sectional cuvature explicitly, because I would like to calculate the number $\sqrt{\sum K_{ij}}$ explicitly for a given orthonormal basis. ($K_{ij}$ is the sectional curvature of the plane spanned by $e_i$ and $e_j$) - Isn't this available in many different places, including Griffiths-Harris and wikipedia? –  Deane Yang Feb 15 '12 at 11:58 $$\mu=2\cdot n+3$$ ($\mathbb C\mathrm P^n$ is isometric to the factor $\mathbb S^{2n+1}/\mathbb S^1$. You can use O'Nail's formula to calculate sectional curvature, it is $=4$ in complex directions and $=1$ in real directions.) –  Anton Petrunin Feb 15 '12 at 14:29 As suggested by Anton, you can use the O'Neill formulas in the Riemannian submersion $\mathbb C^{n+1}\to \mathbb{C} P^n$ that defines the Fubini-Study metric on $\mathbb C P^n$. This gives the following: suppose $X,Y$ are orthonormal tangent vectors at some point in $\mathbb C P^n$, and denote by $\overline X,\overline Y$ their horizontal lifts to $\mathbb C^{n+1}$ (which are also orthonormal). Then $$sec(X,Y)=1+\tfrac34\|[\overline X,\overline Y]^v\|^2=1+3|\overline g(\overline Y,J\overline X)|^2,$$ where $\overline g$ is the canonical Euclidean metric on $\mathbb C^{n+1}$, $()^v$ denotes the vertical component wrt the submersion and $J$ is the complex structure, i.e., multiplication by $\sqrt{-1}$. Note that this immediately implies that $\mathbb CP^n$ is $\tfrac14$-pinched. With the above formula, you can easily compute the Einstein constant of $\mathbb C P^n$ to be equal to $\mu=2n+2$, see e.g. Petersen's book "Riemannian Geometry", chapter 3. Another possible way of doing it is using that this is a Kahler manifold. The Fubini-Study metric can be thought of as $\omega_{FS}=\sqrt{-1}\partial\overline\partial\log\|z\|^2$, where $\|z\|^2$ is the square norm of a local non vanishing holomorphic section (it is independent of the choice of section by the $\partial\overline\partial$-lemma). You can then compute in local normal (holomorphic) coordinates the coefficients $g_{i\bar j}$ and use that the Ricci form is given by $Ric(\omega)=-\sqrt{-1}\partial\overline\partial\log\det(g_{i\bar{j}})$. This will obviously give you the same result, but in the form $Ric(\omega_{FS})=(n+1)\omega_{FS}$. As pointed out in the comments below, the reason for the missing factor $2$ in this computation is that we have to change from real orthonormal frames to complex unitary frames. Your last sentence is not correct, the missing factor of $2$ come up when changing from real orthonormal frames to complex unitary frames. –  YangMills Feb 15 '12 at 15:30 typo: the metric should be $g_{i\bar{j}}$ –  John B Feb 16 '12 at 17:43
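A quick way to recover the Einstein constant from the sectional curvatures quoted above (this little computation is mine, not part of the original answers): take a unit tangent vector $X$ on $\mathbb{CP}^n$ (real dimension $2n$) and sum sectional curvatures over an orthonormal basis that contains $JX$. Since $\sec(X,JX)=4$ and $\sec(X,e_i)=1$ for the remaining $2n-2$ directions orthogonal to both $X$ and $JX$,

$$\operatorname{Ric}(X,X)=\sec(X,JX)+\sum_{i=3}^{2n}\sec(X,e_i)=4+(2n-2)\cdot 1=2n+2,$$

which matches the value $\mu=2n+2$ computed in the answer above.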
# Typesetting a large matrix in LaTeX I have a 3x12 matrix I'd like to input into my LaTeX (with amsmath) document but LaTeX seems to choke when the matrix gets larger than 3x10: $$\textbf{e} = \begin{bmatrix} 1&1&1&1&0&0&0&0&-1&-1&-1&-1\\ 1&-1&0&0&1&1&-1&-1&0&0&1&-1\\ 0&0&1&-1&1&-1&1&-1&1&-1&0&0 \end{bmatrix}$$ The error: Extra alignment tab has been changed to \cr. tells me that I have more & than the bmatrix environment can handle. Is there a proper way to handle this? It also seems that the alignment for 1's and the -1's are different, is that also expected of the bmatrix? From the amsmath documentation (texdoc amsmath): The amsmath package provides some environments for matrices beyond the basic array environment of LATEX. The pmatrix, bmatrix, Bmatrix, vmatrix and Vmatrix have (respectively) ( ), [ ], { }, | |, and ∥ ∥ delimiters built in. For naming consistency there is a matrix environment sans delimiters. This is not entirely redundant with the array environment; the matrix environments all use more economical horizontal spacing than the rather prodigal spacing of the array environment. Also, unlike the array environment, you don’t have to give column specifications for any of the matrix environments; by default you can have up to 10 centered columns. (If you need left or right alignment in a column or other special formats you must resort to array.) i.e. bmatrix defaults to a 10 column maximum. More precisely: The maximum number of columns in a matrix is determined by the counter MaxMatrixCols (normal value = 10), which you can change if necessary using LATEX’s \setcounter or \addtocounter commands. • Wonderful! This was exactly what I was looking for, I didn't realize one could change the column maximum. As for the right-alignment, I've since found a nice workaround that still allows the bmatrix command - I'll post it in my own solution. – Hooked May 7 '10 at 15:51 • I had exactly the same problem, good question! I was computing character tables in representation theory and even with quite small groups you end up easily with large matrices. Thanks for posting/answering this question! – Patrick Da Silva Mar 25 '12 at 18:13 The answer by Scott is correct, but I've since learned you can override the alignment. Taken from http://texblog.net/latex-archive/maths/matrix-align-left-right/ \makeatletter \renewcommand*\env@matrix[1][c]{\hskip -\arraycolsep \let\@ifnextchar\new@ifnextchar \array{*\c@MaxMatrixCols #1}} \makeatother Now allows the command: \begin{bmatrix}[r] .... to have right-alignment!
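Putting the two pieces of the answer together, here is a minimal sketch of how the 3x12 matrix from the question could be set, assuming both the `\setcounter` line and the right-alignment patch quoted above are placed in the preamble (this combined example is mine, not taken from either answer):

```latex
\documentclass{article}
\usepackage{amsmath}
\setcounter{MaxMatrixCols}{12}   % allow 12 columns instead of the default 10

% optional: allow \begin{bmatrix}[r] for right-aligned columns (patch from the answer above)
\makeatletter
\renewcommand*\env@matrix[1][c]{\hskip -\arraycolsep
  \let\@ifnextchar\new@ifnextchar
  \array{*\c@MaxMatrixCols #1}}
\makeatother

\begin{document}
\[
\mathbf{e} =
\begin{bmatrix}[r]
1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & -1 & -1 & -1 & -1\\
1 & -1 & 0 & 0 & 1 & 1 & -1 & -1 & 0 & 0 & 1 & -1\\
0 & 0 & 1 & -1 & 1 & -1 & 1 & -1 & 1 & -1 & 0 & 0
\end{bmatrix}
\]
\end{document}
```

With the `[r]` option the 1's and -1's line up on their right edges, which also addresses the alignment part of the question.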
proofpile-shard-0030-145
{ "provenance": "003.jsonl.gz:146" }
## Intro I read an article recently by Jay Kreps about a feature for delivering messages ‘exactly-once’ within the Kafka framework. Everyone’s excited, and for good reason. But there’s been a bit of a side story about what exactly ‘exactly-once’ means, and what Kafka can actually do. In the article, Jay identifies the safety and liveness properties of atomic broadcast as a pretty good definition for the set of properties that Kafka is going after with their new exactly-once feature, and then starts to address claims by naysayers that atomic broadcast is impossible. For this note, I’m not going to address whether or not exactly-once is an implementation of atomic broadcast. I also believe that exactly-once is a powerful feature that’s been impressively realised by Confluent and the Kafka community; nothing here is a criticism of that effort or the feature itself. But the article makes some claims about impossibility that are, at best, a bit shaky - and, well, impossibility’s kind of my jam. Jay posted his article with a tweet saying he couldn’t ‘resist a good argument’. I’m responding in that spirit. In particular, the article makes the claim that atomic broadcast is ‘solvable’ (and later that consensus is as well…), which is wrong. What follows is why, and why that matters. I have since left the pub. So let’s begin. ## Make any algorithm lock-free with this one crazy trick Lock-free algorithms often operate by having several versions of a data structure in use at one time. The general pattern is that you can prepare an update to a data structure, and then use a machine primitive to atomically install the update by changing a pointer. This means that all subsequent readers will follow the pointer to its new location - for example, to a new node in a linked-list - but this pattern can’t do anything about readers that have already followed the old pointer value, and are traversing the previous version of the data structure. ## Distributed systems theory for the distributed systems engineer Updated June 2018 with content on atomic broadcast, gossip, chain replication and more Gwen Shapira, who at the time was an engineer at Cloudera and now is spreading the Kafka gospel, asked a question on Twitter that got me thinking. My response of old might have been “well, here’s the FLP paper, and here’s the Paxos paper, and here’s the Byzantine generals paper…”, and I’d have prescribed a laundry list of primary source material which would have taken at least six months to get through if you rushed. But I’ve come to thinking that recommending a ton of theoretical papers is often precisely the wrong way to go about learning distributed systems theory (unless you are in a PhD program). Papers are usually deep, usually complex, and require both serious study, and usually significant experience to glean their important contributions and to place them in context. What good is requiring that level of expertise of engineers? And yet, unfortunately, there’s a paucity of good ‘bridge’ material that summarises, distills and contextualises the important results and ideas in distributed systems theory; particularly material that does so without condescending. Considering that gap lead me to another interesting question: What distributed systems theory should a distributed systems engineer know? A little theory is, in this case, not such a dangerous thing. So I tried to come up with a list of what I consider the basic concepts that are applicable to my every-day job as a distributed systems engineer. 
Let me know what you think I missed! ## The Elephant was a Trojan Horse: On the Death of Map-Reduce at Google Note: this is a personal blog post, and doesn’t reflect the views of my employers at Cloudera Map-Reduce is on its way out. But we shouldn’t measure its importance in the number of bytes it crunches, but the fundamental shift in data processing architectures it helped popularise. This morning, at their I/O Conference, Google revealed that they’re not using Map-Reduce to process data internally at all any more. We shouldn’t be surprised. The writing has been on the wall for Map-Reduce for some time. The truth is that Map-Reduce as a processing paradigm continues to be severely restrictive, and is no more than a subset of richer processing systems. ## Paper notes: MemC3, a better Memcached MemC3: Compact and Concurrent MemCache with Dumber Caching and Smarter Hashing Fan and Andersen, NSDI 2013 The big idea: This is a paper about choosing your data structures and algorithms carefully. By paying careful attention to the workload and functional requirements, the authors reimplement memcached to achieve a) better concurrency and b) better space efficiency. Specifically, they introduce a variant of cuckoo hashing that is highly amenable to concurrent workloads, and integrate the venerable CLOCK cache eviction algorithm with the hash table for space-efficient approximate LRU. [Read More] ## Paper notes: Anti-Caching Anti-Caching: A New Approach to Database Management System Architecture DeBrabant et. al., VLDB 2013 The big idea: Traditional databases typically rely on the OS page cache to bring hot tuples into memory and keep them there. This suffers from a number of problems: No control over granularity of caching or eviction (so keeping a tuple in memory might keep all the tuples in its page as well, even though there’s not necessarily a usage correlation between them) No control over when fetches are performed (fetches are typically slow, and transactions may hold onto locks or latches while the access is being made) Duplication of resources - tuples can occupy both disk blocks and memory pages. [Read More] ## Paper notes: Stream Processing at Google with Millwheel ### MillWheel: Fault-Tolerant Stream Processing at Internet Scale Akidau et. al., VLDB 2013 The big idea: Streaming computations at scale are nothing new. Millwheel is a standard DAG stream processor, but one that runs at ‘Google’ scale. This paper really answers the following questions: what guarantees should be made about delivery and fault-tolerance to support most common use cases cheaply? What optimisations become available if you choose these guarantees carefully? TLDR: Yesterday I mentioned on Twitter that I’d found a bad performance problem when writing to a large ByteArrayOutputStream in Java. After some digging, it appears to be the case that there’s a bad bug in JDK6 that doesn’t affect correctness, but does cause performance to nosedive when a ByteArrayOutputStream gets large. This post explains why.
proofpile-shard-0030-146
{ "provenance": "003.jsonl.gz:147" }
# Writing a null hypothesis for ANOVA

Let's say we have two factors (A and B), each with two levels (A1, A2 and B1, B2) and a response variable (y). When performing a two-way ANOVA with these factors, we are testing three null hypotheses:

1. There is no difference in the means of factor A
2. There is no difference in the means of factor B
3. There is no interaction between factors A and B

When written down, the first two hypotheses are easy to formulate (for 1 it is $H_0:\; \mu_{A_1}=\mu_{A_2}$). But how should hypothesis 3 be formulated?

Edit: and how would it be formulated for the case of more than two levels?

I think it's important to clearly separate the hypothesis and its corresponding test. For the following, I assume a balanced, between-subjects CRF-$pq$ design (equal cell sizes, Kirk's notation: Completely Randomized Factorial design).

$Y_{ijk}$ is observation $i$ in treatment $j$ of factor $A$ and treatment $k$ of factor $B$ with $1 \leq i \leq n$, $1 \leq j \leq p$ and $1 \leq k \leq q$. The model is $Y_{ijk} = \mu_{jk} + \epsilon_{i(jk)}, \quad \epsilon_{i(jk)} \sim N(0, \sigma_{\epsilon}^2)$

$$\begin{array}{c|ccccc|c}
 & B_{1} & \ldots & B_{k} & \ldots & B_{q} & \\ \hline
A_{1} & \mu_{11} & \ldots & \mu_{1k} & \ldots & \mu_{1q} & \mu_{1.}\\
\ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots\\
A_{j} & \mu_{j1} & \ldots & \mu_{jk} & \ldots & \mu_{jq} & \mu_{j.}\\
\ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots\\
A_{p} & \mu_{p1} & \ldots & \mu_{pk} & \ldots & \mu_{pq} & \mu_{p.}\\ \hline
 & \mu_{.1} & \ldots & \mu_{.k} & \ldots & \mu_{.q} & \mu
\end{array}$$

$\mu_{jk}$ is the expected value in cell $jk$, $\epsilon_{i(jk)}$ is the error associated with the measurement of person $i$ in that cell. The $()$ notation indicates that the indices $jk$ are fixed for any given person $i$ because that person is observed in only one condition.

A few definitions for the effects:

$\mu_{j.} = \frac{1}{q} \sum_{k=1}^{q} \mu_{jk}$ (average expected value for treatment $j$ of factor $A$)

$\mu_{.k} = \frac{1}{p} \sum_{j=1}^{p} \mu_{jk}$ (average expected value for treatment $k$ of factor $B$)

$\alpha_{j} = \mu_{j.} - \mu$ (effect of treatment $j$ of factor $A$, $\sum_{j=1}^{p} \alpha_{j} = 0$)

$\beta_{k} = \mu_{.k} - \mu$ (effect of treatment $k$ of factor $B$, $\sum_{k=1}^{q} \beta_{k} = 0$)

$(\alpha \beta)_{jk} = \mu_{jk} - (\mu + \alpha_{j} + \beta_{k}) = \mu_{jk} - \mu_{j.} - \mu_{.k} + \mu$ (interaction effect for the combination of treatment $j$ of factor $A$ with treatment $k$ of factor $B$, $\sum_{j=1}^{p} (\alpha \beta)_{jk} = 0 \, \wedge \, \sum_{k=1}^{q} (\alpha \beta)_{jk} = 0$)

$\alpha_{j}^{(k)} = \mu_{jk} - \mu_{.k}$ (conditional main effect for treatment $j$ of factor $A$ within fixed treatment $k$ of factor $B$, $\sum_{j=1}^{p} \alpha_{j}^{(k)} = 0 \, \wedge \, \frac{1}{q} \sum_{k=1}^{q} \alpha_{j}^{(k)} = \alpha_{j} \quad \forall \, j, k$)

$\beta_{k}^{(j)} = \mu_{jk} - \mu_{j.}$ (conditional main effect for treatment $k$ of factor $B$ within fixed treatment $j$ of factor $A$, $\sum_{k=1}^{q} \beta_{k}^{(j)} = 0 \, \wedge \, \frac{1}{p} \sum_{j=1}^{p} \beta_{k}^{(j)} = \beta_{k} \quad \forall \, j, k$)

With these definitions, the model can also be written as:

$Y_{ijk} = \mu + \alpha_{j} + \beta_{k} + (\alpha \beta)_{jk} + \epsilon_{i(jk)}$

This allows us to express the null hypothesis of no interaction in several equivalent ways:

$H_{0_{I}}: \sum_{j=1}^{p}\sum_{k=1}^{q} (\alpha \beta)^{2}_{jk} = 0$ (all individual interaction terms are $0$, such that $\mu_{jk} = \mu + \alpha_{j} + \beta_{k} \, \forall j, k$. This means that treatment effects of both factors, as defined above, are additive everywhere.)

$H_{0_{I}}: \alpha_{j}^{(k)} - \alpha_{j}^{(k')} = 0 \quad \forall \, j \, \wedge \, \forall \, k, k' \quad (k \neq k')$ (all conditional main effects for any treatment $j$ of factor $A$ are the same, and therefore equal $\alpha_{j}$. This is essentially Dason's answer.)

$H_{0_{I}}: \beta_{k}^{(j)} - \beta_{k}^{(j')} = 0 \quad \forall \, j, j' \, \wedge \, \forall \, k \quad (j \neq j')$ (all conditional main effects for any treatment $k$ of factor $B$ are the same, and therefore equal $\beta_{k}$.)

$H_{0_{I}}$: In a diagram which shows the expected values $\mu_{jk}$ with the levels of factor $A$ on the $x$-axis and the levels of factor $B$ drawn as separate lines, the $q$ different lines are parallel.

An interaction tells us that the levels of factor A have different effects based on what level of factor B you're applying. So we can test this through a linear contrast. Let C = (A1B1 - A1B2) - (A2B1 - A2B2), where A1B1 stands for the mean of the group that received A1 and B1, and so on. So here we're looking at A1B1 - A1B2, which is the effect that factor B is having when we're applying A1. If there is no interaction this should be the same as the effect B is having when we apply A2: A2B1 - A2B2. If those are the same then their difference should be 0, so we could use the tests: $H_0: C = 0\quad\text{vs.}\quad H_A: C \neq 0.$

answered Dec 18 '10 at 14:14 – chl

- Thanks Dason, that helped. Also, after reading your reply, it suddenly became clear to me that I am not fully sure how this generalizes in case we are having more factors. Could you advise? Thanks again. – Tal Galili Dec 18 '10 at 14:21
- You can test multiple contrasts simultaneously. So for example if A had three levels and B had 2 we could use the two contrasts: C1 = (A1B1 - A1B2) - (A2B1 - A2B2) and C2 = (A2B1 - A2B2) - (A3B1 - A3B2) and use a 2 degree of freedom test to simultaneously test if C1 = C2 = 0. It's also interesting to note that C2 could equally have been (A1B1 - A1B2) - (A3B1 - A3B2) and we would come up with the same thing. – Dason Dec 18 '10 at 14:23
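To see how these hypotheses map onto standard software output, here is a small illustrative R sketch; the data are simulated and the object names are mine, not taken from the answers above. The `A:B` row of the ANOVA table is the test of the interaction null hypothesis discussed here, and in the balanced 2x2 case it is equivalent to testing Dason's contrast $C=0$.

```r
# Illustrative only: balanced 2x2 between-subjects design with no true interaction.
set.seed(1)
d <- expand.grid(A = factor(c("A1", "A2")),
                 B = factor(c("B1", "B2")),
                 rep = 1:20)
d$y <- rnorm(nrow(d), mean = 10)

fit <- aov(y ~ A * B, data = d)
summary(fit)  # rows A, B and A:B test the three null hypotheses listed above
```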
proofpile-shard-0030-147
{ "provenance": "003.jsonl.gz:148" }
### upsolving's blog

By upsolving, 8 years ago,

I am stuck with this backtracking problem. I think there is a mistake in my recursive code (in the loops or the recursive calls) but I could not make out what it is. I am pasting my code below. Help from anyone will be appreciated.

VERSION 1:

    public class Obstacle {
        boolean[][] visited;
        int[] dx = {0, 1, 0, -1};
        int[] dy = {1, 0, -1, 0};

        //**************PROGRAM STARTS HERE**********************
        public int getLongestPath(int[] a){
            visited = new boolean[5][5];
            for(int i = 0; i < a.length; i++)
                visited[a[i] / 5][a[i] % 5] = true;
            doit(0, 0, 0, 0); //move down
            doit(0, 0, 1, 0); //move right
        }

        boolean isSafe(int x, int y){
            if(x < 0 || x >= 5 || y < 0 || y >= 5 || visited[x][y])
                return false;
            return true;
        }

        private void doit(int x, int y, int direction, int counts ){
            if(!isSafe(x, y))
                return;
            visited[x][y] = true;
            counts = counts + 1;
            //if safe to move in same direction move on
            if(isSafe(x + dx[direction], y + dy[direction]))
                doit(x + dx[direction], y + dy[direction], direction, counts);
            else{
                //if moving direction is vertical before getting blocked then try moving left & right
                if(direction % 2 == 0){
                    doit(x + dx[1], y + dy[1], 1, counts);
                    doit(x + dx[3], y + dy[3], 3, counts);
                }else{
                    //if moving direction is horizontal before getting blocked then try moving up & down
                    doit(x + dx[1], y + dy[1], 0, counts);
                    doit(x + dx[2], y + dy[2], 2, counts);
                }
            }
            //backtrack
            visited[x][y] = false;
        }
    }

Please do explain why my backtracking approach is not working.

• -22

» 8 years ago: mistake identified. Thanks for -22
proofpile-shard-0030-148
{ "provenance": "003.jsonl.gz:149" }
To overcome the tradeoff between $NOx$ and particulate emissions for future diesel vehicles and engines it is necessary to seek methods to lower pollutant emissions. The desired simultaneous improvement in fuel efficiency for future DI diesels is also a difficult challenge due to the combustion modifications that will be required to meet the exhaust emission mandates. This study demonstrates the emission reduction capability of EGR and other parameters on a high-speed direct-injection (HSDI) diesel engine equipped with a common rail injection system using an RSM optimization method. Engine testing was done at 1757 rev/min, 45% load. The variables used in the optimization process included injection pressure, boost pressure, injection timing, and EGR rate. RSM optimization led engine operating parameters to reach a low-temperature and premixed combustion regime called the MK combustion region, and resulted in simultaneous reductions in $NOx$ and particulate emissions without sacrificing fuel efficiency. It was shown that RSM optimization is an effective and powerful tool for realizing the full advantages of the combined effects of combustion control techniques by optimizing their parameters. It was also shown that through a close observation of optimization processes, a more thorough understanding of HSDI diesel combustion can be provided. 1. Walsh, M. P., 1998, “Global Trends in Diesel Emissions Control—A 1998 Update,” SAE Paper No. 980186. 2. Tanin, K. V., Wickman, D. D., Montgomery, D. T., Das, S., and Reitz, R. D., 1999, “The Influence of Boost Pressure on Emissions and Fuel Consumption of a Heavy-Duty Single-Cylinder D.I. Diesel Engine,” SAE Paper No. 1999-01-0840. 3. Montgomery, David T., 2000, “An Investigation into Optimization of Heavy-Duty Diesel Engine Operating Parameters When Using Multiple Injections and EGR,” Ph.D. thesis, University of Wisconsin-Madison. 4. Tow, T. C., Pierpont, A. and Reitz, R. D., 1994, “Reducing Particulates and NOx Emissions by Using Multiple Injections in a Heavy Duty D.I. Diesel Engine,” SAE Paper No. 940897. 5. Pierpont, D. A., Montgomery, D. T., and Reitz, R. D., 1995, “Reducing Particulate and NOx Using Multiple Injections and EGR in a D.I. Diesel,” SAE Paper 950217. 6. Montgomery, Douglas C., 1991, Design and Analysis of Experiments, John Wiley and Sons, New York. 7. Gardiner, W. P., and Gettinby, G., 1998, Experimental Design Techniques in Statistical Practice, Horwood Publishing. 8. Lorenzen, Thomas J., and Anderson, Virgil L., 1993, Design of Experiments: A No-Name Approach, Marcel Dekker, New York. 9. Box, G. E. P., Hunter, W. G., and Hunter, J. S., 1978, Statistics for Experimenters, John Wiley and Sons, New York. 10. Wickman, D. D., Senecal, O. K., and Reitz, R. D., 2001, “Diesel Engine Combustion Chamber Geometry Optimization Using Genetic Algorithms and Multi-Dimensional Spray and Combustion Modeling,” SAE Paper No. 2001-01-0547. 11. Kimura, Shuji, et al., 1999, “New Combustion Concept for Ultra-Clean and High-Efficiency Small DI Diesel Engines,” SAE Paper No. 1999-01-3681. 12. Heywood, J. B., 1988, Internal Combustion Engine Fundamentals, McGraw-Hill, New York. You do not currently have access to this content.
proofpile-shard-0030-149
{ "provenance": "003.jsonl.gz:150" }
Find the integral roots of the polynomial

Question: Find the integral roots of the polynomial $f(x)=x^{3}+6 x^{2}+11 x+6$

Solution: We are given $f(x)=x^{3}+6 x^{2}+11 x+6$.

The polynomial $f(x)$ has integer coefficients, and the coefficient of its highest-degree term (the leading coefficient) is 1. So any integral root of $f(x)$ must be an integer factor of the constant term 6; the candidates are $\pm 1, \pm 2, \pm 3, \pm 6$.

Let x = -1
$f(-1)=(-1)^{3}+6(-1)^{2}+11(-1)+6$
= -1 + 6 - 11 + 6 = 0

Let x = -2
$f(-2)=(-2)^{3}+6(-2)^{2}+11(-2)+6$
= -8 + (6 * 4) - 22 + 6
= -8 + 24 - 22 + 6 = 0

Let x = -3
$f(-3)=(-3)^{3}+6(-3)^{2}+11(-3)+6$
= -27 + (6 * 9) - 33 + 6
= -27 + 54 - 33 + 6 = 0

Of all the candidate factors, only -1, -2, -3 give the value zero.

So, the integral roots of $x^{3}+6 x^{2}+11 x+6$ are $-1,-2,-3$.
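As a quick cross-check (an addition, not part of the original solution), the three roots correspond to the complete factorization of the cubic:

$$x^{3}+6x^{2}+11x+6=(x+1)(x+2)(x+3),$$

since $(x+1)(x+2)=x^{2}+3x+2$ and $(x^{2}+3x+2)(x+3)=x^{3}+6x^{2}+11x+6$.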
proofpile-shard-0030-150
{ "provenance": "003.jsonl.gz:151" }
# Science:Math Exam Resources/Courses/MATH101/April 2005/Question 08 (a) MATH101 April 2005 Other MATH101 Exams ### Question 08 (a) An unknown continuous function ${\displaystyle f(x)}$ satisfies ${\displaystyle f(0)=0}$, ${\displaystyle f(4)=8}$, and ${\displaystyle 2x\leq f(x)\leq 6x-x^{2}}$ for ${\displaystyle 0\leq x\leq 4.}$ Also, ${\displaystyle f(x)}$ is nondecreasing on this interval, i.e. it satisfies ${\displaystyle f(c)\leq f(d)}$ for all real numbers ${\displaystyle c}$ and ${\displaystyle d}$ with ${\displaystyle 0\leq c\leq d\leq 4.}$ Let ${\displaystyle I}$ be the value of definite integral ${\displaystyle \int _{0}^{4}f(x)\,dx.}$ (a) Let ${\displaystyle L_{100}}$ be the underestimate for ${\displaystyle I}$ obtained by using a Riemann sum with equal-length subintervals and ${\displaystyle x_{i}^{*}=x_{i-1}}$ (i.e. using the left endpoints of the subintervals), and ${\displaystyle R_{100}}$ be the overestimate obtained by using ${\displaystyle x_{i}^{*}=x_{i}}$ (i.e. the right endpoints). Compute a numerical value for ${\displaystyle R_{100}-L_{100}}$. Make sure you understand the problem fully: What is the question asking you to do? Are there specific conditions or constraints that you should take note of? How will you know if your answer is correct from your work only? Can you rephrase the question in your own words in a way that makes sense to you? If you are stuck, check the hint below. Consider it for a while. Does it give you a new idea on how to approach the problem? If so, try it! Checking a solution serves two purposes: helping you if, after having used the hint, you still are stuck on the problem; or if you have solved the problem and would like to check your work. If you are stuck on a problem: Read the solution slowly and as soon as you feel you could finish the problem on your own, hide it and work on the problem. Come back later to the solution if you are stuck or if you want to check your work. If you want to check your work: Don't only focus on the answer, problems are mostly marked for the work you do, make sure you understand all the steps that were required to complete the problem and see if you made mistakes or forgot some aspects. Your goal is to check that your mental process was correct, not only the result.
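For reference, the computation behind part (a) can be done with a telescoping sum; this worked line is an added sketch, not text from the exam page. Because all subintervals have the same length $\Delta x = \frac{4-0}{100}$ and the two sums differ only in which endpoint is used,

$$R_{100}-L_{100}=\sum_{i=1}^{100}\bigl(f(x_{i})-f(x_{i-1})\bigr)\,\Delta x=\bigl(f(4)-f(0)\bigr)\cdot\frac{4}{100}=(8-0)\cdot 0.04=0.32.$$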
proofpile-shard-0030-151
{ "provenance": "003.jsonl.gz:152" }
# Orthonormal Set • Apr 30th 2011, 11:50 AM evant8950 Orthonormal Set I have this problem that says show that $\left \{ \frac12, sin(x), cos(x) \right \}$ is an orthonormal set relative to the inner product defined by $\frac1\pi \int_{-\pi}^{\pi}f(x)g(x)dx$. When I take the norms of each item in the set I get 1/2, 1, 1 respectively. Could someone point out to what I am doing wrong? Thanks • Apr 30th 2011, 12:33 PM Ackbeet Are you sure it's supposed to be an orthonormal set and not just an orthogonal set? I get what you get for the norms, so the set is definitely not orthonormal w.r.t. that inner product. Maybe they meant to have the set $\left\{\frac{\sqrt{2}}{2},\sin(x),\cos(x)\right\},$ which I believe is orthonormal w.r.t. that inner product. • Apr 30th 2011, 08:08 PM evant8950 I went ahead and made it orthonormal and I got the same set that you derived. I am not sure if my teacher made a mistake or he wanted us to make them orthonormal. Thanks for your help! • May 2nd 2011, 02:07 AM Ackbeet You're welcome!
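To make the resolution concrete, here is the verification (added for completeness) that the modified set is orthonormal with respect to $\frac1\pi \int_{-\pi}^{\pi}f(x)g(x)\,dx$, and why the original constant fails:

$$\Bigl\|\tfrac{\sqrt2}{2}\Bigr\|^{2}=\frac1\pi\int_{-\pi}^{\pi}\tfrac12\,dx=1,\qquad \|\sin x\|^{2}=\frac1\pi\int_{-\pi}^{\pi}\sin^{2}x\,dx=1,\qquad \|\cos x\|^{2}=\frac1\pi\int_{-\pi}^{\pi}\cos^{2}x\,dx=1,$$

while $\bigl\|\tfrac12\bigr\|^{2}=\frac1\pi\int_{-\pi}^{\pi}\tfrac14\,dx=\tfrac12$, so with the constant $\tfrac12$ the set is only orthogonal, not orthonormal. The pairwise inner products vanish in both cases, since $\int_{-\pi}^{\pi}\sin x\cos x\,dx=\int_{-\pi}^{\pi}\sin x\,dx=\int_{-\pi}^{\pi}\cos x\,dx=0$.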
proofpile-shard-0030-152
{ "provenance": "003.jsonl.gz:153" }
# Properties of singular value decomposition

Every (real) $$m\times n$$ matrix $$A$$ of rank $$r$$ has an SVD $$A = U\Sigma V^T$$

• $$\text{Image}(A) = \text{span}\{u_1,\dots,u_r\}$$
• $$\text{Null space}(A) = \text{span}\{v_{r+1},\dots,v_n\}$$

Maybe I am lacking some knowledge from my linear algebra courses, but how can these properties be proven? Image(A) means column space of A.

I will assume $m\geq n$ (without loss of generality) and $\operatorname{rank}(A) = r$. To see these identities, distinguish the compact and the full SVD:
\begin{align} A = U_1\Sigma_1 V_1^T = \underbrace{[U_1,\ U_2]}_U\underbrace{\begin{bmatrix} \Sigma_1 & 0\\0 & 0\end{bmatrix}}_\Sigma\underbrace{\begin{bmatrix} V_1^T \\ V_2^T\end{bmatrix} }_{V^T} \end{align}
where $U \in R^{m\times m}$, $V \in R^{n\times n}$, and $\Sigma_1 = \operatorname{diag}(\sigma_1,\ldots,\sigma_r)\in R^{r\times r}$. The columns of $U$ form an orthonormal basis of $R^{m}$, with $U^TU=UU^T = I_m$. In the same way, the columns of $V$ form an orthonormal basis of $R^{n}$, with $V^TV=VV^T=I_n$. From the compact SVD, on the other hand, you only get orthonormal bases for the range of $A$ and for its row space, so you do not have the property $U_iU^T_i = I_m$ or $V_iV^T_i = I_n$ for $i = 1,2$.

Knowing this distinction, come back to your question. Consider the following transformation of an arbitrary $x\in R^n$:
\begin{align} Ax &= \sum_{i=1}^{r} \sigma_i(v_i^Tx)u_i \\&= U_1\Sigma_1V_1^Tx = U_1z\\ &=\sum_{i=1}^{r} z_iu_i \end{align}
where $z_i = \sigma_i(v_i^Tx)$. So $Ax$ is just a weighted linear combination of the vectors $\{u_1,\ldots,u_r\}$, which is equivalent to saying $\mathcal{R}(A) = \operatorname{span}\{u_1,\ldots,u_r\}$.

On the other hand, the null space consists of the vectors that are mapped to zero, i.e. if $x\in \mathcal{N}(A)$, then $Ax = 0$. Since the set $\{u_1,\ldots,u_r\}$ is orthonormal, $$Ax = U_1z = 0$$ is possible only if $z$ is the zero vector:
\begin{align} z = \Sigma_1V_1^Tx = 0 \;\Leftrightarrow\; V_1^Tx = 0 \end{align}
($\Sigma_1$ is an invertible diagonal matrix, so it can be dropped here.) So the vector $z$ is $0$ if and only if $x$ is perpendicular to $\operatorname{span}\{v_1,\ldots,v_r\}$, which means exactly that $x\in \operatorname{span}\{v_{r+1},\ldots,v_n\} = \mathcal{R}(V_2) \Rightarrow Ax = 0$.

One simple possibility is to use this form of the SV decomposition of $A$: $$A = \sum_{i=1}^{r}{\lambda_i u_i v_i^T}$$ Then, for an input $x = \sum_{i=1}^{n} x_iv_i$, it follows that $$Ax = \sum_{i=1}^{r}{\lambda_i x_i u_i}$$
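A quick numerical illustration of both properties (an added sketch in base R; the random matrix and the tolerance are arbitrary choices):

```r
# Build a 4x5 matrix of rank 3 and take a full SVD.
set.seed(1)
A <- matrix(rnorm(4 * 3), 4, 3) %*% matrix(rnorm(3 * 5), 3, 5)
s <- svd(A, nu = nrow(A), nv = ncol(A))
r <- sum(s$d > 1e-10)                      # numerical rank (= 3 here)

# v_{r+1}, ..., v_n lie in the null space of A:
max(abs(A %*% s$v[, (r + 1):ncol(A)]))     # ~ 0

# Every column of A lies in span{u_1, ..., u_r}:
U1 <- s$u[, 1:r]
max(abs(A - U1 %*% (t(U1) %*% A)))         # ~ 0, so projecting onto span(U1) leaves A unchanged
```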
proofpile-shard-0030-153
{ "provenance": "003.jsonl.gz:154" }
## Package remotes Devtools version 1.9 supports package dependency installation for packages not yet in a standard package repository such as CRAN or Bioconductor. You can mark any regular dependency defined in the Depends, Imports, Suggests or Enhances fields as being installed from a remote location by adding the remote location to Remotes in your DESCRIPTION file. This will cause devtools to download and install them prior to installing your package (so they won’t be installed from CRAN). The remote dependencies specified in Remotes should be described in the following form. Remotes: [type::]<Repository>, [type2::]<Repository2> The type is an optional parameter. If the type is missing the default is to install from GitHub. Additional remote dependencies should be separated by commas, just like normal dependencies elsewhere in the DESCRIPTION file. It is important to remember that you must always declare the dependency in the usual way, i.e. include it in Depends, Imports, Suggests or Enhances. The Remotes field only provides instructions on where to install the dependency from. In this example DESCRIPTION file, note how rlang appears in Imports and in Remotes: Package: xyz Title: What the Package Does (One Line, Title Case) Version: 0.0.0.9000 Authors@R: person(given = "First", family = "Last", role = c("aut", "cre"), email = "[email protected]") Description: What the package does (one paragraph). Imports: rlang Remotes: r-lib/rlang #### GitHub Because GitHub is the most commonly used unofficial package distribution in R, it’s the default: Remotes: hadley/testthat You can also specify a specific hash, tag, or pull request (using the same syntax as install_github() if you want a particular commit. Otherwise the latest commit on the HEAD of the branch is used. Remotes: hadley/[email protected], klutometis/roxygen#142, hadley/testthat@c67018fa4970 A type of github can be specified, but is not required Remotes: github::hadley/ggplot2 #### Other sources All of the currently supported install sources are available, see the ‘See Also’ section in ?install for a complete list. # GitLab Remotes: gitlab::jimhester/covr # Git Remotes: git::[email protected]:djnavarro/lsr.git # Bitbucket Remotes: bitbucket::sulab/mygene.r@default, djnavarro/lsr # Bioconductor Remotes: bioc::3.3/SummarizedExperiment#117513, bioc::release/Biobase # SVN Remotes: svn::https://github.com/tidyverse/stringr # URL Remotes: url::https://github.com/tidyverse/stringr/archive/main.zip # Local Remotes: local::/pkgs/testthat # Gitorious Remotes: gitorious::r-mpc-package/r-mpc-package #### CRAN submission When you submit your package to CRAN, all of its dependencies must also be available on CRAN. For this reason, release() will warn you if you try to release a package with a Remotes field.
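As a usage sketch (the calls below are illustrative and assume the working directory is the package source tree containing the DESCRIPTION shown above): when the package's dependencies are installed with devtools, any dependency with a matching Remotes entry, such as rlang here, is fetched from GitHub rather than CRAN.

```r
# From the package's source directory: install declared dependencies,
# resolving any that have a matching entry in the Remotes field.
devtools::install_deps(dependencies = TRUE)

# Installing the package itself from the local source tree works the same way.
devtools::install(".")
```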
proofpile-shard-0030-154
{ "provenance": "003.jsonl.gz:155" }
# O(2) still being written

An elementary example of a Lie group is afforded by O(2), the orthogonal group in two dimensions. This is the set of transformations of the plane which fix the origin and preserve the distance between points. It may be shown that a transformation has this property if and only if it is of the form

$\begin{pmatrix}x\\ y\end{pmatrix}\mapsto M\begin{pmatrix}x\\ y\end{pmatrix},$

where $M$ is a $2\times 2$ matrix such that $M^{T}M=I$. (Such a matrix is called orthogonal.) It is easy enough to check that this is a group. To see that it is a Lie group, we first need to make sure that it is a manifold. To that end, we will parameterize it. Calling the entries of the matrix $a,b,c,d$, the condition becomes

$\begin{pmatrix}1&0\\ 0&1\end{pmatrix}=\begin{pmatrix}a&b\\ c&d\end{pmatrix}^{T}\begin{pmatrix}a&b\\ c&d\end{pmatrix}=\begin{pmatrix}a^{2}+c^{2}&ab+cd\\ ab+cd&b^{2}+d^{2}\end{pmatrix}$

which is equivalent to the following system of equations:

$\displaystyle a^{2}+c^{2}=1$
$\displaystyle ab+cd=0$
$\displaystyle b^{2}+d^{2}=1$

The first of these equations can be solved by introducing a parameter $\theta$ and writing $a=\cos\theta$ and $c=\sin\theta$. Then the second equation becomes $b\cos\theta+d\sin\theta=0$, which can be solved by introducing a parameter $r$:

$\displaystyle b=-r\sin\theta$
$\displaystyle d=r\cos\theta$

Substituting this into the third equation results in $r^{2}=1$, so $r=-1$ or $r=+1$. This means we have two matrices for each value of $\theta$:

$\begin{pmatrix}\cos\theta&-\sin\theta\\ \sin\theta&\cos\theta\end{pmatrix}\qquad\begin{pmatrix}\cos\theta&\sin\theta\\ \sin\theta&-\cos\theta\end{pmatrix}$

Since more than one value of $\theta$ will produce the same matrix, we must restrict the range in order to obtain a bona fide coordinate. Thus, we may cover $O(2)$ with an atlas consisting of four neighborhoods:

$\displaystyle\left\{\begin{pmatrix}\cos\theta&-\sin\theta\\ \sin\theta&\cos\theta\end{pmatrix}\mid-{3\over 4}\pi<\theta<{3\over 4}\pi\right\}$
$\displaystyle\left\{\begin{pmatrix}\cos\theta&-\sin\theta\\ \sin\theta&\cos\theta\end{pmatrix}\mid{1\over 4}\pi<\theta<{7\over 4}\pi\right\}$
$\displaystyle\left\{\begin{pmatrix}\cos\theta&\sin\theta\\ \sin\theta&-\cos\theta\end{pmatrix}\mid-{3\over 4}\pi<\theta<{3\over 4}\pi\right\}$
$\displaystyle\left\{\begin{pmatrix}\cos\theta&\sin\theta\\ \sin\theta&-\cos\theta\end{pmatrix}\mid{1\over 4}\pi<\theta<{7\over 4}\pi\right\}$

Every element of $O(2)$ must belong to at least one of these neighborhoods. It is trivial to check that the transition functions between overlapping coordinate patches are smooth.

Title: O(2) · Canonical name: O2 · Created: 2013-03-22 17:57:38 · Last modified: 2013-03-22 17:57:38 · Owner: rspuzio (6075) · Last modified by: rspuzio (6075) · Numerical id: 8 · Author: rspuzio (6075) · Type: Example · Classification: msc 22E10, msc 22E15
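As a sanity check on the parametrization (added; not part of the entry), both families really do satisfy $M^{T}M=I$, and they are distinguished by their determinant:

$\begin{pmatrix}\cos\theta&\sin\theta\\ -\sin\theta&\cos\theta\end{pmatrix}\begin{pmatrix}\cos\theta&-\sin\theta\\ \sin\theta&\cos\theta\end{pmatrix}=\begin{pmatrix}\cos^{2}\theta+\sin^{2}\theta&0\\ 0&\sin^{2}\theta+\cos^{2}\theta\end{pmatrix}=I,$

and similarly for the second family. The rotations have determinant $+1$ while the reflections have determinant $-1$, so the two families are the two connected components of $O(2)$.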
proofpile-shard-0030-155
{ "provenance": "003.jsonl.gz:156" }
0 Research Papers # Optimal Product Design Under Price Competition [+] Author and Article Information Ching-Shin Norman Shiau Department of Mechanical Engineering, Carnegie Mellon University, Pittsburgh, PA [email protected] Jeremy J. Michalek Department of Mechanical Engineering and Department of Engineering and Public Policy, Carnegie Mellon University, Pittsburgh, PA [email protected] For a noncooperative game with complete information, a Nash equilibrium exists if: (1) the strategy set is nonempty, compact, and convex for each player; (2) the payoff function is defined, continuous, and bounded; and (3) each individual payoff function is concave with respect to individual strategy (17). More specifically, Anderson et al. (21) proved that there exists a unique price equilibrium under logit demand when the profit function is strictly quasiconcave. Note that the objective function of the NLP form is not needed to identify points that satisfy Nash necessary conditions; however, in practice including the objective of producer $k$ can help to also enforce (local) sufficiency conditions for producer $k$. Sufficiency for competitors must be determined post-hoc. The formulation should be distinguished from equilibrium problems with equilibrium constraints (EPECs) (23) since no separate upper and lower level equilibria exist and the focal firm is in Nash price competition with competitors. CDH (2) used a duopoly game to prove that a Stackelberg leader model can always receive at least as high a payoff as a Nash model if a Stackelberg equilibrium exists. For the cases of multiple local optima and price equilibria, multistart can be implemented to identify solutions. Discrete decision variables cannot be implemented in the Nash formulation (Eq. 3) since KKT conditions assume continuity. For Stackelberg, price-equilibrium profit is calculated from Stackelberg pricing. The values of aspirin substitute are the weighted combination of acetaminophen and ibuprofen. The numbers are not provided in the original paper (2), and we obtained the attribute data from the mixed complementarity programming library (MCPLIB) (37) and verified with the original author. The data of consumer preference weightings (30 individuals) are also included in that library. The derivations of all FOC equations in this paper are included in a separate supporting information document that is available by contacting the authors. We use $t=10−9$ for all the cases. CDH (2) compared their Stackelberg solution to the optimal new product solution with competitors fixed at Nash prices (suboptimal solution) and concluded Stackelberg resulted in higher profit. However, the comparison for the two models should base on fully converged equilibrium solutions. We use multi-start to search for all stationary points in the feasible domain and perform post hoc Nash best response verification (Eq. 1). We found only one unique Stackelberg solution. The elements in the $Z∗$ and $Z$ vectors are dimensionless and normalized to upper and lower bounds of each variable. $Z∗$ is obtained by using the proposed method with a convergence tolerance $10−15$. The computer system setup comprises of OS: Windows XP; CPU: Intel Core2 2.83Hz; RAM: 2.0 Gbyte; and solver: active-set SQP algorithm in MATLAB R2008a. The BONMIN MINLP solver implements multiple algorithms to solve optimization problems with continuous and discrete variables (31). It is a local solver, and the solutions shown in the article are local optima found by multistart. 
We do not compare computational cost or test the CDH method in this case because active price bounds make price solutions trivial. J. Mech. Des 131(7), 071003 (Jun 04, 2009) (10 pages) doi:10.1115/1.3125886 History: Received October 09, 2008; Revised March 23, 2009; Published June 04, 2009 ## Abstract Engineering optimization methods for new product development model consumer demand as a function of product attributes and price in order to identify designs that maximize expected profit. However, prior approaches have ignored the ability of competitors to react to a new product entrant. We pose an approach to new product design accounting for competitor pricing reactions by imposing Nash and Stackelberg conditions as constraints, and we test the method on three product design case studies from the marketing and engineering design literature. We find that new product design under Stackelberg and Nash equilibrium cases are superior to ignoring competitor reactions. In our case studies, ignoring price competition results in suboptimal design and overestimation of profits by 12–79%, and we find that a product that would perform well in today’s market may perform poorly in the market that the new product will create. The efficiency, convergence stability, and ease of implementation of the proposed approach enable practical implementation for new product design problems in competitive market systems. <> ## Figures Figure 1 Computational time versus solution error for the painkiller problem: (a) Nash case and (b) Stackelberg case Figure 2 Computational time versus solution error for the weight scale problem: (a) Nash case (b) Stackelberg case Figure 3 Price part-worth fitting functions for the angle grinder demand model ## Discussions Some tools below are only available to our subscribers or users with an online account. ### Related Content Customize your page view by dragging and repositioning the boxes below. Related Journal Articles Related Proceedings Articles Related eBook Content Topic Collections
proofpile-shard-0030-156
{ "provenance": "003.jsonl.gz:157" }
fit() and fit_xy() take a model specification, translate the required code by substituting arguments, and execute the model fit routine. # S3 method for model_spec fit(object, formula, data, control = control_parsnip(), ...) # S3 method for model_spec fit_xy(object, x, y, control = control_parsnip(), ...) ## Arguments object An object of class model_spec that has a chosen engine (via set_engine()). An object of class formula (or one that can be coerced to that class): a symbolic description of the model to be fitted. Optional, depending on the interface (see Details below). A data frame containing all relevant variables (e.g. outcome(s), predictors, case weights, etc). Note: when needed, a named argument should be used. A named list with elements verbosity and catch. See control_parsnip(). Not currently used; values passed here will be ignored. Other options required to fit the model should be passed using set_engine(). A matrix, sparse matrix, or data frame of predictors. Only some models have support for sparse matrix input. See parsnip::get_encoding() for details. x should have column names. A vector, matrix or data frame of outcome data. ## Value A model_fit object that contains several elements: • lvl: If the outcome is a factor, this contains the factor levels at the time of model fitting. • spec: The model specification object (object in the call to fit) • fit: when the model is executed without error, this is the model object. Otherwise, it is a try-error object with the error message. • preproc: any objects needed to convert between a formula and non-formula interface (such as the terms object) The return value will also have a class related to the fitted model (e.g. "_glm") before the base class of "model_fit". ## Details fit() and fit_xy() substitute the current arguments in the model specification into the computational engine's code, check them for validity, then fit the model using the data and the engine-specific code. Different model functions have different interfaces (e.g. formula or x/y) and these functions translate between the interface used when fit() or fit_xy() was invoked and the one required by the underlying model. When possible, these functions attempt to avoid making copies of the data. For example, if the underlying model uses a formula and fit() is invoked, the original data are references when the model is fit. However, if the underlying model uses something else, such as x/y, the formula is evaluated and the data are converted to the required format. In this case, any calls in the resulting model objects reference the temporary objects used to fit the model. If the model engine has not been set, the model's default engine will be used (as discussed on each model page). If the verbosity option of control_parsnip() is greater than zero, a warning will be produced. 
set_engine(), control_parsnip(), model_spec, model_fit ## Examples # Although glm() only has a formula interface, different # methods for specifying the model can be used library(dplyr) #> #> Attaching package: ‘dplyr’#> The following objects are masked from ‘package:stats’: #> #> filter, lag#> The following objects are masked from ‘package:base’: #> #> intersect, setdiff, setequal, unionlibrary(modeldata) data("lending_club") lr_mod <- logistic_reg() using_formula <- lr_mod %>% set_engine("glm") %>% fit(Class ~ funded_amnt + int_rate, data = lending_club) using_xy <- lr_mod %>% set_engine("glm") %>% fit_xy(x = lending_club[, c("funded_amnt", "int_rate")], y = lending_club\$Class) using_formula #> parsnip model object #> #> Fit time: 33ms #> #> Call: stats::glm(formula = Class ~ funded_amnt + int_rate, family = stats::binomial, #> data = data) #> #> Coefficients: #> (Intercept) funded_amnt int_rate #> 5.131e+00 2.767e-06 -1.586e-01 #> #> Degrees of Freedom: 9856 Total (i.e. Null); 9854 Residual #> Null Deviance: 4055 #> Residual Deviance: 3698 AIC: 3704using_xy #> parsnip model object #> #> Fit time: 31ms #> #> Call: stats::glm(formula = ..y ~ ., family = stats::binomial, data = data) #> #> Coefficients: #> (Intercept) funded_amnt int_rate #> 5.131e+00 2.767e-06 -1.586e-01 #> #> Degrees of Freedom: 9856 Total (i.e. Null); 9854 Residual #> Null Deviance: 4055 #> Residual Deviance: 3698 AIC: 3704
proofpile-shard-0030-157
{ "provenance": "003.jsonl.gz:158" }
New Titles  |  FAQ  |  Keep Informed  |  Review Cart  |  Contact Us Quick Search (Advanced Search ) Browse by Subject General Interest Logic & Foundations Number Theory Algebra & Algebraic Geometry Discrete Math & Combinatorics Analysis Differential Equations Geometry & Topology Probability & Statistics Applications Mathematical Physics Math Education Representations of Algebraic Groups: Second Edition Jens Carsten Jantzen, Aarhus University, Denmark SEARCH THIS BOOK: Mathematical Surveys and Monographs 2003; 576 pp; softcover Volume: 107 ISBN-10: 0-8218-4377-X ISBN-13: 978-0-8218-4377-2 List Price: US$104 Member Price: US$83.20 Order Code: SURV/107.S Back in print from the AMS, the first part of this book is an introduction to the general theory of representations of algebraic group schemes. Here, Janzten describes important basic notions: induction functors, cohomology, quotients, Frobenius kernels, and reduction mod $$p$$, among others. The second part of the book is devoted to the representation theory of reductive algebraic groups and includes topics such as the description of simple modules, vanishing theorems, the Borel-Bott-Weil theorem and Weyl's character formula, and Schubert schemes and line bundles on them. This is a significantly revised edition of a modern classic. The author has added nearly 150 pages of new material describing later developments and has made major revisions to parts of the old text. It continues to be the ultimate source of information on representations of algebraic groups in finite characteristics. The book is suitable for graduate students and research mathematicians interested in algebraic groups and their representations. Graduate students and research mathematicians interested in algebraic groups and their representations. Reviews From reviews of the first edition: "Very readable ... meant to give its reader an introduction to the representation theory of reductive algebraic groups ..." -- Zentralblatt MATH "Those familiar with [Jantzen's previous] works will approach this new book ... with eager anticipation. They will not be disappointed, as the high standard of the earlier works is not only maintained but exceeded ... very well written and the author has taken great care over accuracy both of mathematical details and in references to the work of others. The discussion is well motivated throughout ... This impressive and wide ranging volume will be extremely useful to workers in the theory of algebraic groups ... a readable and scholarly book." -- Mathematical Reviews Part I. General theory • Schemes • Group schemes and representations • Induction and injective modules • Cohomology • Quotients and associated sheaves • Factor groups • Algebras of distributions • Representations of finite algebraic groups • Representations of Frobenius kernels • Reduction mod $$p$$ Part II. 
Representations of reductive groups • Reductive groups • Simple $$G$$-modules • Irreducible representations of the Frobenius kernels • Kempf's vanishing theorem • The Borel-Bott-Weil theorem and Weyl's character formula • The translation functors • Filtrations of Weyl modules • Representations of $$G_rT$$ and $$G_rB$$ • Geometric reductivity and other applications of the Steinberg modules • Injective $$G_r$$-modules • Cohomology of the Frobenius kernels • Schubert schemes • Line bundles on Schubert schemes • Truncated categories and Schur algebras • Results over the integers • Lusztig's conjecture and some consequences • Radical filtrations and Kazhdan-Lusztig polynomials • Tilting modules • Frobenius splitting • Frobenius splitting and good filtrations • Representations of quantum groups • References • List of notations • Index
proofpile-shard-0030-158
{ "provenance": "003.jsonl.gz:159" }
# Proving an Algorithm that generates minimal $\|x\|_0$ for the underdetermined system $Ax=b$ Let $A \in \mathbb {F}^{m \times n}$ with $m< n,$ $b \in \mathbb{F}^m$ and let $x$ be unknown in $\mathbb{F}^n.$ Assume $0<p<1.$ Then $$\arg \min\limits_{x: Ax=b} \|x\|_0 = \lim\limits_{p \to 0} \{\arg \min\limits_{x:Ax=b} \|x\|_p^p\}$$ The reason I'm asking this question is that I am anticipating it as a possible objection that could be raised during my final defense. If this question I ask above is, in fact, true, I can take care of it easily by showing that attempts to "approximate" $\|x\|_0$ for an underdetermined system of linear equations via taking examples from a sequence of minimizers decreasing in $p$ to $0$ is itself intractable, because simply finding $\arg \min\limits_{x: Ax=b} \|x\|_p^p$ is itself NP Hard (via reducing from the partition problem as given in Foucart). My problem, however, is whether this should even be a concern; is the limit above valid, or, if the way I stated this is in error, could it slightly be modified to be valid? I know that $$\lim\limits_{p \to 0} \|x\|_p^p = \|x\|_0$$ (crudely, nonzero numbers proceed to 1 while zeros stay zero) but i do not know how to show this rigorously as my analysis is rusty, and I do not know whether even showing this part would provide aid to my main problem at all. Any help would be deeply appreciated! • It is not true that $\|x \|_p \to \|x\|_0$, as can be seen by the different scaling behaviour on both sides. What is true is $\|x\|_p^p \to \|x\|_0$. – PhoemueX Mar 12 '17 at 9:06 • I would expect this to be true: under some strong conditions on the matrix, such as if $\delta_{2s+2} < 1$ then for $p \leq p^*$, with $p^*$ small enough (depending on $\delta$ I suppose), the recovered solution to the $\ell_p$ constrained problem is indeed the sparsest, see Thm 2.1 and discussions math.tamu.edu/~foucart/publi/FL08final.pdf . This would need to be written nicely though, since I'm not sure how the application $p \mapsto x^*_p$ behaves at $p = 0$. In general (i.e. non compressed sensing problems) I do not expect this to be true. – Jean-Luc Bouchot Mar 12 '17 at 9:34 • PhoemueX, thanks for pointing that out! It has been fixed. – Thomas Rasberry Mar 13 '17 at 18:46 • I am missing something here... you are asking whether the result in the gray box is true? You have written it in your thesis but you haven't proved it nor provided a reference, and now you are afraid that the committee will object during the defense? Is this correct? – Federico Poloni Mar 13 '17 at 19:22 • Yes, I am asking whether this is true. I have not written it in my thesis since I can neither prove nor source it, but merely wonder if the question ought to be anticipated during my final oral exam, and if so, how to prepare. From what I have seen here, the answer is "no," except perhaps under very specific conditions. – Thomas Rasberry Mar 13 '17 at 19:33
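For the last displayed limit, a short coordinatewise argument (an added sketch) is enough to make it rigorous; it is exactly the "nonzero entries go to 1, zeros stay zero" idea:

$$\|x\|_p^p=\sum_{i=1}^{n}|x_i|^p\;\longrightarrow\;\#\{i: x_i\neq 0\}=\|x\|_0\quad\text{as }p\to 0^{+},$$

since $|x_i|^p=e^{\,p\log|x_i|}\to 1$ for every $x_i\neq 0$, while $0^{p}=0$ for every $p>0$. Note that this only addresses the pointwise limit of $\|x\|_p^p$; the boxed claim about the limit of the minimizers is a separate statement.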
proofpile-shard-0030-159
{ "provenance": "003.jsonl.gz:160" }
# Edu 372 Week 2 Assignment

Week 2 - Assignment: Applied Questions

Respond to at least three of the questions listed below. Your response must be in proper APA format and include evidence from the text and at least one other scholarly resource to support your answers. Your response should be no more than five pages in length (not including title and reference pages).

1. List some of the educational implications of the views of intelligence advanced by Cattell, Sternberg, and Gardner?
2. Explain why correlation does not prove causation.
3. Debate the merits and educational implications of the belief that intelligence is modifiable.
4. Using library resources, research the proposition that measured intelligence is related to family size, and birth order.
5. How might lessons be modified to encourage...

Filename: edu-372-week-2-assignment-76.docx · Filesize: < 2 MB · Print Length: 4 Pages/Slides · Words: 105
{ "provenance": "003.jsonl.gz:161" }
This can only help the process. They are often the ones that we want. is not completely factored because the second factor can be further factored. which, on the surface, appears to be different from the first form given above. Track your scores, create tests, and take your learning to the next level! And we’re done. With the help of the community we can continue to Note that this converting to $$u$$ first can be useful on occasion, however once you get used to these this is usually done in our heads. Again, we can always check that we got the correct answer by doing a quick multiplication. Algebra 1: Factoring Practice. Send your complaint to our designated agent at: Charles Cohn Practice for the Algebra 1 SOL: Topic: Notes: Quick Check [5 questions] More Practice [10-30 questions] 1: Properties So, this must be the third special form above. the 3u4 – 24uv3 = 3u(u3 – 8v3) = 3u[u3 – (2v)3]. Here is the factored form of the polynomial. Thus, we can rewrite  as  and it follows that. Thus  and must be and , making the answer  . Factoring polynomials is done in pretty much the same manner. Here is the factoring for this polynomial. So, without the “+1” we don’t get the original polynomial! There is a 3$$x$$ in each term and there is also a $$2x + 7$$ in each term and so that can also be factored out. Now, notice that we can factor an $$x$$ out of the first grouping and a 4 out of the second grouping. We set each factored term equal to zero and solve. However, there are some that we can do so let’s take a look at a couple of examples. For our example above with 12 the complete factorization is. Multiply: 6 :3 2−7 −4 ; Factor by GCF: 18 3−42 2−24 Example B. Learn. Since this equation is factorable, I will present the factoring method here. Here they are. Varsity Tutors. an In this case we will do the same initial step, but this time notice that both of the final two terms are negative so we’ll factor out a “-” as well when we group them. on or linked-to by the Website infringes your copyright, you should consider first contacting an attorney. To do this we need the “+1” and notice that it is “+1” instead of “-1” because the term was originally a positive term. When we factor the “-” out notice that we needed to change the “+” on the fourth term to a “-”. So we know that the largest exponent in a quadratic polynomial will be a 2. Monomials and polynomials. your copyright is not authorized by law, or by the copyright owner or such owner’s agent; (b) that all of the misrepresent that a product or activity is infringing your copyrights. Doing this gives us. Take the two numbers –3 and 4, and put them, complete with … Factor: rewrite a number or expression as a product of primes; e.g. A1 7.9 Notes: Factoring special products Difference of Two squares Pattern: 2 − 2 = ( + )( − ) Ex: 2 − 9 = 2 − 32 We notice that each term has an $$a$$ in it and so we “factor” it out using the distributive law in reverse as follows. and so we know that it is the fourth special form from above. In this final step we’ve got a harder problem here. This is a method that isn’t used all that often, but when it can be used it can … For all polynomials, first factor out the greatest common factor (GCF). In our problem, a = u and b = 2v: This is a difference of squares. The first method for factoring polynomials will be factoring out the greatest common factor. If it is anything else this won’t work and we really will be back to trial and error to get the correct factoring form. 
Upon multiplying the two factors out these two numbers will need to multiply out to get -15. Comparing this generic expression to the one given in the probem, we can see that the  term should equal , and the  term should equal 2. The correct pair of numbers must add to get the coefficient of the $$x$$ term. Of all the topics covered in this chapter factoring polynomials is probably the most important topic. An identification of the copyright claimed to have been infringed; On the other hand, Algebra … Now that we’ve done a couple of these we won’t put the remaining details in and we’ll go straight to the final factoring. 10 … Rewriting the equation as , we can see there are four terms we are working with, so factor by grouping is an appropriate method. We can actually go one more step here and factor a 2 out of the second term if we’d like to. If Varsity Tutors takes action in response to Also note that we can factor an $$x^{2}$$ out of every term. Help with WORD PROBLEMS: Algebra I Word Problem Template Word Problem Study Tip for solving System WPs Chapter 1 Acad Alg 1 Chapter 1 Notes Alg1 – 1F Notes (function notation) 1.5 HW (WP) answers Acad. 1 … Spell. The numbers 1 and 2 satisfy these conditions: Now, look to see if there are any common factors that will cancel: The  in the numerator and denominator cancel, leaving . Remember that we can always check by multiplying the two back out to make sure we get the original. The process of factoring a real number involves expressing the number as a product of prime factors. This is a double-sided notes page that helps the students factor a trinomial where a > 1 intuitively. 58 Algebra Connections Parent Guide FACTORING QUADRATICS 8.1.1 and 8.1.2 Chapter 8 introduces students to quadratic equations. In this case let’s notice that we can factor out a common factor of $$3{x^2}$$ from all the terms so let’s do that first. In this case all that we need to notice is that we’ve got a difference of perfect squares. At this point we can see that we can factor an $$x$$ out of the first term and a 2 out of the second term. In this case we group the first two terms and the final two terms as shown here. The difference of cubes formula is a3 – b3 = (a – b)(a2 + ab + b2). Factor polynomials on the form of x^2 + bx + c. Factor … Algebra 1 : Factoring Polynomials Study concepts, example questions & explanations for Algebra 1. Here is the work for this one. Multiply: :3 2−1 ; :7 +6 ; Factor … Next, we need all the factors of 6. Factoring is also the opposite of Expanding: means of the most recent email address, if any, provided by such party to Varsity Tutors. The greatest common factor is the largest factor shared by both of the numbers: 45. In this case we’ve got three terms and it’s a quadratic polynomial. Doing this gives. This is exactly what we got the first time and so we really do have the same factored form of this polynomial. We will need to start off with all the factors of -8. Match. In this case we can factor a 3$$x$$ out of every term. Okay, this time we need two numbers that multiply to get 1 and add to get 5. This will be the smallest number that can be divided by both 5 and 15: 15. There are some nice special forms of some polynomials that can make factoring easier for us on occasion. First, find the factors of 90 and 315. Please follow these steps to file a notice: A physical or electronic signature of the copyright owner or a person authorized to act on their behalf; © 2007-2020 All Rights Reserved. 
Now, we can just plug these in one after another and multiply out until we get the correct pair. Solving equations & inequalities. Menu Algebra 1 / Factoring and polynomials. Algebra 1 is the second math course in high school and will guide you through among other things expressions, systems of equations, functions, real numbers, inequalities, exponents, polynomials, radical and rational expressions.. The coefficient of the $${x^2}$$ term now has more than one pair of positive factors. Polynomial equations in factored form. When factoring in general this will also be the first thing that we should try as it will often simplify the problem. Here is the factored form for this polynomial. So, why did we work this? To finish this we just need to determine the two numbers that need to go in the blank spots. For example, 2, 3, 5, and 7 are all examples of prime numbers. sufficient detail to permit Varsity Tutors to find and positively identify that content; for example we require The zero product property states … a In factoring out the greatest common factor we do this in reverse. View A1 7.9 Notes.pdf from ALGEBRA 1 SEMESTER 2 APEX 1B at Lamar High School. However, it works the same way. A difference of squares binomial has the given factorization: . This set includes the following types of factoring (just one type of factoring … Ms. Ulrich's Algebra 1 Class: Home Algebra 1 Algebra 1 Projects End of Course Review More EOC Practice Activities UPSC Student Blog Polynomials Unit Notes ... polynomials_-_day_3_notes.pdf: File Size: 66 kb: File Type: pdf: Download File. as They can be a pain to remember, but pat yourself on the back for getting to such hard questions! So, if you can’t factor the polynomial then you won’t be able to even start the problem let alone finish it. Let’s flip the order and see what we get. This is a method that isn’t used all that often, but when it can be used it can be somewhat useful. To be honest, it might have been easier to just use the general process for factoring quadratic polynomials in this case rather than checking that it was one of the special forms, but we did need to see one of them worked. Note that we can always check our factoring by multiplying the terms back out to make sure we get the original polynomial. Here is the complete factorization of this polynomial. Then, find the least common multiple of 5 and 15. either the copyright owner or a person authorized to act on their behalf. Remember that the distributive law states that. You appear to be on a device with a "narrow" screen width (, Derivatives of Exponential and Logarithm Functions, L'Hospital's Rule and Indeterminate Forms, Substitution Rule for Indefinite Integrals, Volumes of Solids of Revolution / Method of Rings, Volumes of Solids of Revolution/Method of Cylinders, Parametric Equations and Polar Coordinates, Gradient Vector, Tangent Planes and Normal Lines, Triple Integrals in Cylindrical Coordinates, Triple Integrals in Spherical Coordinates, Linear Homogeneous Differential Equations, Periodic Functions & Orthogonal Functions, Heat Equation with Non-Zero Temperature Boundaries, Absolute Value Equations and Inequalities, $$9{x^2}\left( {2x + 7} \right) - 12x\left( {2x + 7} \right)$$. Also note that in this case we are really only using the distributive law in reverse. Each term contains and $$x^{3}$$ and a $$y$$ so we can factor both of those out. When you have to have help on mixed … The notes … Again, let’s start with the initial form. 
Note that the method we used here will only work if the coefficient of the $$x^{2}$$ term is one. This method is best illustrated with an example or two. 6 = 2 ∙ 3 In algebra, factor by rewriting a polynomial as a product of lower-degree polynomials In the example above, (x + 1)(x – 2) is the … To yield the final term in our original equation (), we can set  and . Note as well that we further simplified the factoring to acknowledge that it is a perfect square. We determine all the terms that were multiplied together to get the given polynomial. Between the first two terms, the Greatest Common Factor (GCF) is  and between the third and fourth terms, the GCF is 4. Pick a pair plug them in and see what we get the correct pair some nice special forms of polynomials! Of 2 Hunter College, Master of Arts, Chemistry to check that the two will! By multiplying the two numbers that multiply to get 5 is not completely since... ’ s all that often, but these are representative of many of.! Found in the blank spots written in the trinomial common method of numbers... S a quadratic polynomial present the factoring of factoring notes algebra 1 polynomial is that is reason! We would have had to use “ -1 ” than observing the values of and! T mean that we further simplified the factoring must take the form of techniques... Get that the equation has been factored, we will need to do some factoring... Way of doing it be written in the first factor and from second! Quadratic trinomials into two first degree ( hence forth linear ) polynomials out factoring notes algebra 1 we simply ’! Another and multiply out to make a certain polynomial georgia Institute of Technology-Main... CUNY City,... Are here equal to zero and solve get 24 and add to get 1 and itself since is! ; factor by GCF: 18 3−42 2−24 example b: we can check. In later chapters where the first time and so we know that the factoring to acknowledge that is... Means that the factoring ab + b2 ) best illustrated with an or... As it will often simplify the problem factoring_-_day_1_notes.pdf: File Type::! Words, these two numbers that multiply to get -10 the more common mistakes with factoring notes algebra 1 of. & explanations for Algebra 1 quadratic functions & equations Solving quadratics by factoring or using the distributive law reverse... First method for factoring things in this case we are really only the! Therefore, the greatest common factor is the fourth special form from above was! Required, let ’ s all that often, the greatest common factor do. Factoring problems is to completely factor a 3\ ( x\ ) for a factored expression order... Making the answer but these are representative of many of them Type of factoring numbers is to familiarize ourselves many. Terms back out to complete the problem both sides: Solving equations & inequalities Industrial.... A few s for the two factors on the other hand, Algebra … 58 Algebra Connections Parent factoring. More step here and we didn ’ t two integers that will this! Is, we need all the terms that were multiplied together to get 5 process by which we go determining. Of every term exponent in a quadratic polynomial so this quadratic doesn ’ t factor anymore and the. Students to quadratic equations 2 ( 10 ) =20 and this is a number into positive prime there! The party that made the content available or to third parties such as ChillingEffects.org, complete with … Solving &... To pick a pair plug them in and see what happens when we can still a... 
For a factored expression of order 2 is harder depends on the for. Important topic is a2 – b2 = ( a – b ) ( a2 + ab b2! A difference of squares multiply to get -10 not completely factored step we ’ d to! Off with all the topics covered in this case all that often, but these are representative of many them. S note that we ’ ve got the second term if we factoring notes algebra 1 got... Multiply out to see what we get the correct factoring of this section is to factoring notes. On occasion so don ’ t work all that there is no one method for factoring things in this we., in this case we are really only using the distributive law in.. Important because we could also have factored this as t prime are 4, 6 and.: Download File so, this must be factors of -15 off by working factoring. This one looks a little odd in comparison to the others coefficient of following! Of problems here and we didn ’ t forget to check both places for each pair see! General this will happen on occasion so don ’ t factor … these notes assist in. Check our factoring by multiplying the terms out terms and the final term in factor! Is \ ( x\ ) term ( a2 + ab + b2.... Factored this as for, we will need to go in the trinomial 2 10... What we get the original polynomial remember: factoring is also the of. ( u\ ) ’ s start this off by working a factoring a variable. The previous examples variable here since we ’ ve got three terms it... Be used it can be somewhat useful one looks a little odd in to... Get the given factorization: constant is a perfect square and its square root is.. 'Ve found an issue with this question, please let us know that doesn ’ factor., 3, 5, and put them, complete with … Solving equations & inequalities first term also! ) =20 and this is a number or expression as a product of 2 this Chapter factoring polynomials out greatest. Introduces students to quadratic equations factored however GCF ) illustrated with an example or two our factoring multiplying. T prime are 4, and put them into the product of.... Factoring in general get -10 terms as shown here the help of the is. Please let us know nice special forms of some polynomials that can be further factored Study,. That multiply to get -15 of Arts, Chemistry two terms and it ’ s the... Us know ) 3 ] common multiple of 5 and 15: 15 solve for either by factoring using! Will factor it out of every term the numbers in and see what we to! =20 and this is a difference of squares '': leading coefficient ≠ 1 any factoring... Algebra 2 is harder depends on the other hand, Algebra … 58 Algebra Connections Parent Guide factoring quadratics and! Factoring to acknowledge that it is the process of finding the factors would. Factoring: leading coefficient ≠ 1 is a3 – b3 = ( a – b ) a2!, making the answer one way of doing it group the first term in each factor be! Cuny Hunter College, Master of Arts, Chemistry for second degree.... T work all that often using only integers u\ ) ’ s plug the numbers and. One more step here and we didn ’ t cover all the factors of -6 the constant a... Factor can be the smallest number that can be the third term we! The topics covered in this Chapter factoring polynomials is probably the most important topic binomial! What happens when we multiply the terms out into the wrong spot complete is... Is not completely factored factoring polynomials Study concepts, example questions & explanations for 1. The correct pair of positive factors are 1 and itself only option is to completely factor a out. 
Can get that the first time we need two numbers with a sum of and! Term for second degree polynomial ) term factor -15 using only integers equation. Hard questions do have the same factored form of y=ax2+bx+c and, when Menu... S drop it and then multiply out to see what we got the first two terms and final... Second degree polynomial of this polynomial is completely factored because the second term we. Your learning to the others = ( a – b ) ( a b... Harder problem here numbers for the original polynomial in terms of \ ( { x^2 } )... Been a negative term originally we would have had to use “ -1 ” 90 and 315 as. No one method for doing these in one after another and multiply out until we simply can ’ t that. Wrong however is another trick that we can always check our factoring by can... Finish this we just put them into the product of 2, create tests, and take learning! Time and so the factored form of this polynomial is need to determine the two that! There are no tricks here or methods other than observing the values and. Appears to be different from the second factor we ’ ve got the correct pair of must. Zero and solve with … Solving equations & inequalities make factoring easier for us on occasion as they are.! Special cases will be the same manner of South Florida-Main Campus, Bachelor of,... And the final term in each factor must be one of the factoring take! Get the given polynomial little odd in comparison to the challenging questions the! One method for doing these in general this will be the first step to quadratics. Get 24 and add to get the original polynomial in terms of \ ( x\ ) ’ start... Are done for, we can factor a 2 two first degree hence! The opposite of Expanding: we can always distribute the “ +1 ” we don t! = 3u ( u3 – ( 2v ) 3 ] more than one pair of numbers for these... & … these notes assist students in factoring out the greatest common factor the! St Math Kickbox Level 4, Romeo Helicopter Price, Fire Code Violations In Homes, National Arts Club Jobs, Watford Champions League, Naturalisation Fee Refund, What Is Magneto, National Arts Club Jobs,
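As a worked illustration of the two main ideas above (the numbers in the first example are chosen for illustration and are not taken from the original notes): to factor $$x^2 + 7x + 10$$ we need two numbers that multiply to 10 and add to 7; the pair 2 and 5 works, so $$x^2 + 7x + 10 = (x + 2)(x + 5)$$, which can be verified by multiplying the factors back out. For the greatest-common-factor example quoted above,

$$9{x^2}\left( {2x + 7} \right) - 12x\left( {2x + 7} \right) = 3x\left( {2x + 7} \right)\left( {3x - 4} \right).$$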
proofpile-shard-0030-161
{ "provenance": "003.jsonl.gz:162" }
# MRB constant proofs wanted

$$C_{\mathrm{MRB}}$$, the MRB constant, is defined at http://mathworld.wolfram.com/MRBConstant.html . There is an excellent 56-page paper whose author has passed away. You can find it in Google Scholar under "MRB constant." Better yet, use the following link: http://web.archive.org/web/20130430193005/http://www.perfscipress.com/papers/UniversalTOC25.pdf. You will find a cached copy there.

Just before the author, Richard Crandall, died, I wrote him about a possible small error. What I'm worried about is formula 44 on page 29 and below. When I naively worked formula 44, it needed a negative sign in front of it. Crandall did write me back admitting to a typo, but he died before he had a chance to correct it. Is there anyone out there competent enough to check, correct and prove the corrected formulas for me? Thank you. I will use the proofs often and try to get the formulas published more.

Here is how I worked formula 44 and got -B:

(*define the Dirichlet eta function*)
eta[s_] := (1 - 2^(1 - s)) Zeta[s];

(*define the higher derivatives of eta at 0*)
a[i_] := Derivative[i][eta][0];

(*Define c:*)
c[j_] := Sum[Binomial[j, d] (-1)^d d^(j - d), {d, 1, j}]

(*formula (44)*)
N[Sum[c[m]/m!*a[m], {m, 1, 40}], 100]

• Would it be possible for you to type in the formula from the paper, and indicate the suspected typo? I don't care to download a 56(-page?) paper just to check one formula. – Gerry Myerson Nov 2 '13 at 23:36
• @Gerry Myerson, I added the formula – Marvin Ray Burns Nov 3 '13 at 14:23
• Thanks. Not enough info there to tell --- in particular, you'd have to know what $\eta$ stands for. Sorry --- I hope someone else will download the paper, and give it a try. – Gerry Myerson Nov 3 '13 at 22:10
• That doesn't tell me what $\eta$ stands for. – Gerry Myerson Nov 4 '13 at 2:39
• The context indicates it is the Dirichlet eta function. – anon Nov 5 '13 at 16:03

The first equality can be reproduced by formally stating the double sum: write down the expanded exponential series for each term in one row, sum them column-wise to get the derivatives of $\eta()$, and then sum the $\eta()$-expressions. Here $\eta(s)$ is the alternating $\zeta(s)$ and $\eta^{(m)}(s)$ the $m$'th derivative.

$$\begin{array} {rclll} -\exp( {\log(1) \over 1})+1 & = & -{\log(1) \over 1} &-{\log(1)^2 \over 1^2 2!} &-{\log(1)^3 \over 1^3 3!} & - \cdots \\ +\exp( {\log(2) \over 2})-1 & = &+{\log(2) \over 2} &+{\log(2)^2 \over 2^2 2!} &+{\log(2)^3 \over 2^3 3!} & + \cdots \\ -\exp( {\log(3) \over 3})+1 & = &-{\log(3) \over 3} &-{\log(3)^2 \over 3^2 2!} &-{\log(3)^3 \over 3^3 3!} & - \cdots \\ \vdots \qquad & \vdots & \quad\vdots & \quad\vdots& \quad\vdots & \ddots \\ \hline \\ B \qquad & = & {\eta^{(1)}(1) \over 1!} &- {\eta^{(2)}(2) \over 2!} &+ {\eta^{(3)}(3) \over 3!} & - \cdots \end{array}$$

I have not yet worked out the expansion into the formula with the derivatives of $\eta()$ at zero; perhaps I can supply that tomorrow.

Numerical check

In Pari/GP I get for the original sum $$B = \sum_{k=1}^\infty (-1)^k( \exp( \log(k)/k)-1)$$

B=sumalt(k=1,(-1)^k * (exp( log(k)/k) -1))

converging well to the value

B=0.187859642462067120248517934054273230055903094900138786172005...

as well as for

B = -sum(k=1,6,(-1)^k*aetad(k,k)/k!)

(but to fewer digits).
My user-defined function aetad(s,d) gives the d'th derivative of $\eta()$ at $s$, and I implemented aeta() using the $\eta()$ / $\zeta()$ conversion formula:

aeta(s) = { if(s==1, return(log(2))); return((1 - 2/2^s)*zeta(s)); }

aetad(s,d) = {   \\ d'th derivative of aeta at s via a central difference of step h1 (a small step size defined elsewhere)
  z = sum(k=0, d, (-1)^k*binomial(d,k)*aeta(s + (d/2 - k)*h1));
  return(z/h1^d); }

I also have another implementation of $\eta()$ and its derivatives which allows access to higher derivatives (in the above version the 10th or even 20th derivatives were impractical). This version is also independent of the software-included zeta function but uses Pari/GP's sumalt() procedure, which allows one to evaluate alternating (moderately divergent) series. My approach computes the Taylor expansion of the $\eta()$ function, by default centered around zero, but it can easily be generalized to any center:

fmt(200,12)                   \\ user defined; set internal precision to 200 digits, display to 12 digits
default(seriesexpansion,24)   \\ set expansion-limit for power series
m_coeffs = vectorv(24);       \\ vector for 24 aeta-taylor-series
{for(m=0,23,
    tay_aeta = sumalt(k=0,(-1)^k/(1+k)^(x+m));   \\ taylor series for aeta around m
    m_coeffs[1+m] = polcoeffs(tay_aeta,24);      \\ store leading 24 taylor coefficients
   );
 m_coeffs = Mat(m_coeffs);}   \\ make a matrix from the list of taylor series

After that, the desired coefficients (the derivatives of $\eta$ at $m$, conveniently divided by the factorials) occur on the diagonal of the coefficient matrix, and we only need to sum them with alternating sign:

B = sum(k=1,23, -(-1)^k*m_coeffs[1+k,1+k])   \\ 0.1878596424620671202485179340542732... the leading correct digits

• Thank you! The double sum proof at top is surprisingly easy when you make an array like that! – Marvin Ray Burns Dec 24 '15 at 13:13
• What can you say about the expansion into the MRB constant formula with the derivatives of η() at zero, mentioned in formula 44 in the Crandall paper, lnk . – Marvin Ray Burns Nov 14 '18 at 15:02
• @MarvinRayBurns : the link does not work... – Gottfried Helms Nov 14 '18 at 17:23
• – Marvin Ray Burns Nov 15 '18 at 13:47
• @MarvinRayBurns - ahh, thank you. It'll take a time to chew on this, I've currently something else in my understanding-complicated-thing module... :-) – Gottfried Helms Nov 15 '18 at 13:58
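For reference, the identity that the column-wise summation above establishes can be written compactly (with the series understood in the sense in which they are evaluated here, e.g. via sumalt) as

$$B \;=\; \sum_{k=1}^{\infty}(-1)^{k}\left(k^{1/k}-1\right) \;=\; \sum_{m=1}^{\infty}\frac{(-1)^{m+1}\,\eta^{(m)}(m)}{m!},$$

which is exactly what the two numerical checks — sumalt over the defining series, and the alternating sum over aetad(m,m)/m! — compute.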
proofpile-shard-0030-162
{ "provenance": "003.jsonl.gz:163" }
1 JEE Main 2018 (Offline) +4 -1
Seven identical circular planar disks, each of mass M and radius R, are welded symmetrically as shown. The moment of inertia of the arrangement about the axis normal to the plane and passing through the point P is :
A $${{181} \over 2}M{R^2}$$
B $${{55} \over 2}M{R^2}$$
C $${{19} \over 2}M{R^2}$$
D $${{73} \over 2}M{R^2}$$

2 JEE Main 2018 (Offline) +4 -1
From a uniform circular disc of radius R and mass 9M, a small disc of radius R/3 is removed as shown in the figure. The moment of inertia of the remaining disc about an axis perpendicular to the plane of the disc and passing through the centre of the disc is :
A $${{37} \over 9}M{R^2}$$
B $$4M{R^2}$$
C $${{40} \over 9}M{R^2}$$
D $$10M{R^2}$$

3 JEE Main 2018 (Online) 15th April Evening Slot +4 -1
A thin uniform bar of length $$L$$ and mass $$8m$$ lies on a smooth horizontal table. Two point masses $$m$$ and $$2m$$ are moving in the same horizontal plane from opposite sides of the bar with speeds $$2v$$ and $$v$$ respectively. The masses stick to the bar after collision at distances $${L \over 3}$$ and $${L \over 6}$$ respectively from the center of the bar. If the bar starts rotating about its center of mass as a result of the collision, the angular speed of the bar will be :
A $${v \over {5L}}$$
B $${6v \over {5L}}$$
C $${3v \over {5L}}$$
D $${v \over {6L}}$$

4 JEE Main 2018 (Online) 15th April Evening Slot +4 -1
A thin rod MN, free to rotate in the vertical plane about the fixed end N, is held horizontal. When the end M is released, the speed of this end, when the rod makes an angle $$\alpha$$ with the horizontal, will be proportional to : (see figure)
A $$\sqrt {\sin \alpha }$$
B $${\sin \alpha }$$
C $$\sqrt {\cos \alpha }$$
D $${\cos \alpha }$$
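A quick way to check the first question (a sketch, not from the original source, using $$I_{\rm disc} = {1 \over 2}M{R^2}$$ and the parallel-axis theorem, with the usual geometry of six disks welded around a central one and P on the outer rim): about the common centre O, the central disk contributes $${1 \over 2}M{R^2}$$ and each outer disk, whose centre lies a distance 2R from O, contributes $${1 \over 2}M{R^2} + M{\left( {2R} \right)^2} = {9 \over 2}M{R^2}$$, giving $$I_O = {1 \over 2}M{R^2} + 6 \cdot {9 \over 2}M{R^2} = {{55} \over 2}M{R^2}$$. Since P lies a further 3R from O and the total mass is 7M, $$I_P = {{55} \over 2}M{R^2} + 7M{\left( {3R} \right)^2} = {{181} \over 2}M{R^2}$$, which corresponds to option A.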
proofpile-shard-0030-163
{ "provenance": "003.jsonl.gz:164" }
# C++ program to find sum of digits of a number until sum becomes single digit

In this article, we will be discussing a program to find the sum of the digits of a number repeatedly, until the sum itself becomes a single digit and no further summation can be done.

For example, take the number 14520. Adding its digits we get 1 + 4 + 5 + 2 + 0 = 12. Since this is not a single-digit number, we add the digits of the result: 1 + 2 = 3. Now 3 is the final answer, because it is a single-digit number and its digits cannot be added further.

To solve this, we use the fact that this repeated digit sum (the digital root) of a nonzero number divisible by 9 is always 9, while for a number that is not divisible by 9 it equals the remainder left after dividing by 9.

## Example

#include <bits/stdc++.h>
using namespace std;

// compute the digital root using the modulo-9 property
int sum_digits(int n) {
   if (n == 0)
      return 0;
   else if (n % 9 == 0)
      return 9;
   else
      return (n % 9);
}

int main() {
   int x = 14520;
   cout << sum_digits(x) << endl;
   return 0;
}

## Output

3
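For comparison — and as a quick way to sanity-check the modulo-9 shortcut — here is a sketch of the straightforward approach that literally repeats the digit summation until a single digit remains (this variant is not part of the original article):

#include <iostream>

// Repeatedly sum the decimal digits of n until a single digit remains.
int sum_digits_iterative(int n) {
   while (n > 9) {
      int s = 0;
      while (n > 0) {
         s += n % 10;   // peel off the last decimal digit
         n /= 10;
      }
      n = s;            // repeat the process on the digit sum
   }
   return n;
}

int main() {
   // 14520 -> 1+4+5+2+0 = 12 -> 1+2 = 3, matching the modulo-9 result above
   std::cout << sum_digits_iterative(14520) << std::endl;
   return 0;
}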
proofpile-shard-0030-164
{ "provenance": "003.jsonl.gz:165" }
P p_bar annihilation

1. Nov 4, 2009 — pastro
In the first edition of Griffiths' Introduction to Elementary Particles, p. 129, I read: "In strong interactions, charge conjugation invariance requires, for example, that the energy distribution of the charged pions in the reaction p + p_bar -> $$\pi^+$$ + $$\pi^-$$ + $$\pi^0$$ should (on average) be identical." Griffiths gives the reference C. Baltay et al., Phys. Rev. Lett. 15, 591 (1965). I looked it up. This paper appears to only use the argument; it does not explain its origin. Could someone please explain how C-symmetry makes a prediction about the distribution of pion energies in this case? Does this example hint at a broader principle which makes a statement about the energy distribution of reaction products in the final state of a strong interaction that respects C-symmetry? Thanks!

2. Nov 4, 2009 — clem
The initial state is an eigenstate of C, so the final state will also be one. The operator C interchanges pi+ and pi-.

3. Nov 5, 2009 — pastro
So, is the principle the following: "The strong force respects charge conjugation. Because of this, in any strong-force interaction where the initial and final states are their own charge conjugates, and where the final-state particles differ predominantly by charge (the mass difference between the pion flavors is small), the final-state particles should (on average) have the energy equally distributed between them, because the strong force can't really "tell the difference" between the final-state types of particles, so on average equal energy should be given to all final-state particles." Is that the right general line of reasoning?

4. Nov 5, 2009 — clem
There is no mass difference between pi+ and pi-, and the pi0 mass doesn't enter.
proofpile-shard-0030-165
{ "provenance": "003.jsonl.gz:166" }
# Find hamming distance between two Strings of equal length in Java I have a private class that I want to be able to find the shortest Hamming Distance between two Strings of equal length in Java. The private class holds a char[] and contains a method to compare against other char arrays. It returns true if there is only a hamming distance of one. The average length of the Strings used is 9 characters. They range from 1 to 24 characters. Is there a way to make the isDistanceOne(char[]) method go any faster? private class WordArray { char[] word; /** * Creates a new WordArray object. * * @param word The word to add to the WordArray. */ private WordArray(String word) { this.word = word.toCharArray(); } /** * Returns whether the argument is within a Hamming Distance of one from * the char[] contained in the WordArray. * * Both char[]s should be of the same length. * * @param otherWord The word to compare with this.word. * @return boolean. */ private boolean isDistanceOne(char[] otherWord) { int count = 0; for(int i = 0; i < otherWord.length; i++) { if (this.word[i] != otherWord[i]) count++; } return (count == 1); } } • How large are the Strings? The fastest solution would differ for many short Strings versus very large strings – dustytrash May 12 at 17:52 • @dustytrash The range is from 1 to 24. The average length is 9. I've edited my question to mention this. – LuminousNutria May 12 at 18:04 Given the limited context, and no information about where the hotspot is in the code, it's difficult to give concrete advice. Here are some musings for your consideration: For ease of reading, it's preferable to have whitespace after control flow keywords and before the (. It is suggested to always include curly braces, even when they're not required by the compiler. Use final where possible to reduce cognitive load on the readers of your code. word should be private. There's no apparent reason to use char[] instead of just keeping a pointer to the original String. They're costing you time and space to make, to no benefit. You can short-circuit out of your for loop if the count ever becomes greater than one. Unless a significant fraction of your inputs have a distance of one, you should see some performance gain here. Using a boolean instead of an int might make a very small difference in execution time, but that would need to be tested. It also makes the code harder to read. private class WordArray { private final String word; private WordArray(final String word) { this.word = word; } private boolean isDistanceOne(final char[] otherWord) { assert word.length() == otherWord.length; int distance = 0; for (int i = 0; i < otherWord.length; i++) { if (this.word.charAt(i) == otherWord[i]) { continue; } if (distance > 0) { return false; } distance++; } return distance == 1; } } I don't think you can beat that linear complexity since you need to look at each character to determine the Hamming distance. One small optimization you can do is to short-circuit once your count goes above one, but that adds an extra check in every iteration, so it might have worse runtime depending on the inputs.
proofpile-shard-0030-166
{ "provenance": "003.jsonl.gz:167" }
libinput test suite The libinput test suite is based on Check and runs automatically during make check. # Permissions required to run tests Most tests require the creation of uinput devices and access to the resulting /dev/input/eventX nodes. Some tests require temporary udev rules. This usually requires the tests to be run as root. # Selective running of tests litest's tests are grouped by test groups and devices. A test group is e.g. "touchpad:tap" and incorporates all tapping-related tests for touchpads. Each test function is (usually) run with one or more specific devices. The --list commandline argument shows the list of suites and tests. $./test/test-device --list device:wheel: wheel only blackwidow device:invalid devices: no device device:group: no device logitech trackball MS surface cover mouse_roccat wheel only blackwidow ... In the above example, the "device:wheel" suite is run for the "wheel only" and the "blackwidow" device. Both devices are automatically instantiated through uinput by litest. The "no device" entry signals that litest does not instantiate a uinput device for a specific test (though the test itself may instantiate one). The --filter-test argument enables selective running of tests through basic shell-style function name matching. For example:$ ./test/test-touchpad --filter-test="*1fg_tap*" The --filter-device argument enables selective running of tests through basic shell-style device name matching. The device names matched are the litest-specific shortnames, see the output of --list. For example: $./test/test-touchpad --filter-device="synaptics*" The --filter-group argument enables selective running of test groups through basic shell-style test group matching. The test groups matched are litest-specific test groups, see the output of --list. For example:$ ./test/test-touchpad --filter-group="touchpad:*hover*" The --filter-device and --filter-group arguments can be combined with --list to show which groups and devices will be affected. # Controlling test output Each test supports the --verbose commandline option to enable debugging output, see libinput_log_set_priority() for details. The LITEST_VERBOSE environment variable, if set, also enables verbose mode. $./test/test-device --verbose$ LITEST_VERBOSE=1 make check
proofpile-shard-0030-167
{ "provenance": "003.jsonl.gz:168" }
# Ohm Last updated Ohm A laboratory one-ohm standard resistor, circa 1917. General information Unit system SI derived unit Unit of Electrical resistance SymbolΩ Named after Georg Ohm In SI base units: kgm 2s −3A −2 The ohm (symbol: Ω) is the SI derived unit of electrical resistance, named after German physicist Georg Simon Ohm. Although several empirically derived standard units for expressing electrical resistance were developed in connection with early telegraphy practice, the British Association for the Advancement of Science proposed a unit derived from existing units of mass, length and time and of a convenient size for practical work as early as 1861. The definition of the ohm was revised several times. Today, the definition of the ohm is expressed from the quantum Hall effect. Omega is the 24th and last letter of the Greek alphabet. In the Greek numeric system/Isopsephy (Gematria), it has a value of 800. The word literally means "great O", as opposed to omicron, which means "little O". SI derived units are units of measurement derived from the seven base units specified by the International System of Units (SI). They are either dimensionless or can be expressed as a product of one or more of the base units, possibly scaled by an appropriate power of exponentiation. The quantum Hall effect is a quantum-mechanical version of the Hall effect, observed in two-dimensional electron systems subjected to low temperatures and strong magnetic fields, in which the Hall conductance σ undergoes quantum Hall transitions to take on the quantized values ## Definition The ohm is defined as an electrical resistance between two points of a conductor when a constant potential difference of one volt, applied to these points, produces in the conductor a current of one ampere, the conductor not being the seat of any electromotive force. [1] The volt is the derived unit for electric potential, electric potential difference (voltage), and electromotive force. It is named after the Italian physicist Alessandro Volta (1745–1827). The ampere, often shortened to "amp", is the base unit of electric current in the International System of Units (SI). It is named after André-Marie Ampère (1775–1836), French mathematician and physicist, considered the father of electrodynamics. Electromotive force, abbreviated emf, is the electrical action produced by a non-electrical source. A device that converts other forms of energy into electrical energy, such as a battery or generator, provides an emf as its output. Sometimes an analogy to water "pressure" is used to describe electromotive force. ${\displaystyle \Omega ={\dfrac {\text{V}}{\text{A}}}={\dfrac {1}{\text{S}}}={\dfrac {\text{W}}{{\text{A}}^{2}}}={\dfrac {{\text{V}}^{2}}{\text{W}}}={\dfrac {\text{s}}{\text{F}}}={\dfrac {\text{H}}{\text{s}}}={\dfrac {{\text{J}}{\cdot }{\text{s}}}{{\text{C}}^{2}}}={\dfrac {{\text{kg}}{\cdot }{\text{m}}^{2}}{{\text{s}}{\cdot }{\text{C}}^{2}}}={\dfrac {\text{J}}{{\text{s}}{\cdot }{\text{A}}^{2}}}={\dfrac {{\text{kg}}{\cdot }{\text{m}}^{2}}{{\text{s}}^{3}{\cdot }{\text{A}}^{2}}}}$ in which the following units appear: volt (V), ampere (A), siemens (S), watt (W), second (s), farad (F), henry (H), joule (J), kilogram (kg), metre (m), and coulomb (C). The siemens is the derived unit of electric conductance, electric susceptance, and electric admittance in the International System of Units (SI). 
Conductance, susceptance, and admittance are the reciprocals of resistance, reactance, and impedance respectively; hence one siemens is redundantly equal to the reciprocal of one ohm, and is also referred to as the mho. The 14th General Conference on Weights and Measures approved the addition of the siemens as a derived unit in 1971. The watt is a unit of power. In the International System of Units (SI) it is defined as a derived unit of 1 joule per second, and is used to quantify the rate of energy transfer. In dimensional analysis, power is described by . The second is the base unit of time in the International System of Units (SI), commonly understood and historically defined as ​186400 of a day – this factor derived from the division of the day first into 24 hours, then to 60 minutes and finally to 60 seconds each. Analog clocks and watches often have sixty tick marks on their faces, representing seconds, and a "second hand" to mark the passage of time in seconds. Digital clocks and watches often have a two-digit seconds counter. The second is also part of several other units of measurement like meters per second for velocity, meters per second per second for acceleration, and per second for frequency. In many cases the resistance of a conductor in ohms is approximately constant within a certain range of voltages, temperatures, and other parameters. These are called linear resistors. In other cases resistance varies (e.g., thermistors). A resistor is a passive two-terminal electrical component that implements electrical resistance as a circuit element. In electronic circuits, resistors are used to reduce current flow, adjust signal levels, to divide voltages, bias active elements, and terminate transmission lines, among other uses. High-power resistors that can dissipate many watts of electrical power as heat, may be used as part of motor controls, in power distribution systems, or as test loads for generators. Fixed resistors have resistances that only change slightly with temperature, time or operating voltage. Variable resistors can be used to adjust circuit elements, or as sensing devices for heat, light, humidity, force, or chemical activity. A thermistor is a type of resistor whose resistance is dependent on temperature, more so than in standard resistors. The word is a portmanteau of thermal and resistor. Thermistors are widely used as inrush current limiters, temperature sensors, self-resetting overcurrent protectors, and self-regulating heating elements. A vowel of the prefixed units kiloohm and megaohm is commonly omitted, producing kilohm and megohm. [2] In alternating current circuits, electrical impedance is also measured in ohms. Electrical impedance is the measure of the opposition that a circuit presents to a current when a voltage is applied. The term complex impedance may be used interchangeably. Following the 2019 redefinition of the SI base units, in which the ampere and the kilogram were redefined in terms of fundamental constants, the ohm is now also defined in terms of these constants. ## Conversions The siemens (symbol: S) is the SI derived unit of electric conductance and admittance, also known as the mho (ohm spelled backwards, symbol is ℧); it is the reciprocal of resistance in ohms (Ω). ## Power as a function of resistance The power dissipated by a resistor may be calculated from its resistance, and the voltage or current involved. 
The formula is a combination of Ohm's law and Joule's law: ${\displaystyle P=V\cdot I={\frac {V^{2}}{R}}=I^{2}\cdot R}$ where: P is the power R is the resistance V is the voltage across the resistor I is the current through the resistor A linear resistor has a constant resistance value over all applied voltages or currents; many practical resistors are linear over a useful range of currents. Non-linear resistors have a value that may vary depending on the applied voltage (or current). Where alternating current is applied to the circuit (or where the resistance value is a function of time), the relation above is true at any instant but calculation of average power over an interval of time requires integration of "instantaneous" power over that interval. Since the ohm belongs to a coherent system of units, when each of these quantities has its corresponding SI unit (watt for P, ohm for R, volt for V and ampere for I, which are related as in § Definition, this formula remains valid numerically when these units are used (and thought of as being cancelled or omitted). ## History The rapid rise of electrotechnology in the last half of the 19th century created a demand for a rational, coherent, consistent, and international system of units for electrical quantities. Telegraphers and other early users of electricity in the 19th century needed a practical standard unit of measurement for resistance. Resistance was often expressed as a multiple of the resistance of a standard length of telegraph wires; different agencies used different bases for a standard, so units were not readily interchangeable. Electrical units so defined were not a coherent system with the units for energy, mass, length, and time, requiring conversion factors to be used in calculations relating energy or power to resistance. [3] Two different methods of establishing a system of electrical units can be chosen. Various artifacts, such as a length of wire or a standard electrochemical cell, could be specified as producing defined quantities for resistance, voltage, and so on. Alternatively, the electrical units can be related to the mechanical units by defining, for example, a unit of current that gives a specified force between two wires, or a unit of charge that gives a unit of force between two unit charges. This latter method ensures coherence with the units of energy. Defining a unit for resistance that is coherent with units of energy and time in effect also requires defining units for potential and current. It is desirable that one unit of electrical potential will force one unit of electric current through one unit of electrical resistance, doing one unit of work in one unit of time, otherwise all electrical calculations will require conversion factors. Since so-called "absolute" units of charge and current are expressed as combinations of units of mass, length, and time, dimensional analysis of the relations between potential, current, and resistance show that resistance is expressed in units of length per time — a velocity. Some early definitions of a unit of resistance, for example, defined a unit resistance as one quadrant of the Earth per second. The absolute-units system related magnetic and electrostatic quantities to metric base units of mass, time, and length. These units had the great advantage of simplifying the equations used in the solution of electromagnetic problems, and eliminated conversion factors in calculations about electrical quantities. 
However, the centimeter-gram-second, CGS, units turned out to have impractical sizes for practical measurements. Various artifact standards were proposed as the definition of the unit of resistance. In 1860 Werner Siemens (1816–1892) published a suggestion for a reproducible resistance standard in Poggendorffs Annalen der Physik und Chemie . [4] He proposed a column of pure mercury, of one square millimeter cross section, one metre long: Siemens mercury unit. However, this unit was not coherent with other units. One proposal was to devise a unit based on a mercury column that would be coherent – in effect, adjusting the length to make the resistance one ohm. Not all users of units had the resources to carry out metrology experiments to the required precision, so working standards notionally based on the physical definition were required. In 1861, Latimer Clark (1822–1898) and Sir Charles Bright (1832–1888) presented a paper at the British Association for the Advancement of Science meeting [5] suggesting that standards for electrical units be established and suggesting names for these units derived from eminent philosophers, 'Ohma', 'Farad' and 'Volt'. The BAAS in 1861 appointed a committee including Maxwell and Thomson to report upon standards of electrical resistance. [6] Their objectives were to devise a unit that was of convenient size, part of a complete system for electrical measurements, coherent with the units for energy, stable, reproducible and based on the French metrical system. [7] In the third report of the committee, 1864, the resistance unit is referred to as "B.A. unit, or Ohmad". [8] By 1867 the unit is referred to as simply ohm. [9] The B.A. ohm was intended to be 109 CGS units but owing to an error in calculations the definition was 1.3% too small. The error was significant for preparation of working standards. On September 21, 1881 the Congrès internationale des électriciens (international conference of electricians) defined a practical unit of ohm for the resistance, based on CGS units, using a mercury column 1 sq. mm. in cross-section, approximately 104.9 cm in length at 0 °C, [10] similar to the apparatus suggested by Siemens. A legal ohm, a reproducible standard, was defined by the international conference of electricians at Paris in 1884[ citation needed ] as the resistance of a mercury column of specified weight and 106 cm long; this was a compromise value between the B. A. unit (equivalent to 104.7 cm), the Siemens unit (100 cm by definition), and the CGS unit. Although called "legal", this standard was not adopted by any national legislation. The "international" ohm was recommended by unanimous resolution at the International Electrical Congress 1893 in Chicago. [11] The unit was based upon the ohm equal to 109 units of resistance of the C.G.S. system of electromagnetic units. The international ohm is represented by the resistance offered to an unvarying electric current in a mercury column of constant cross-sectional area 106.3 cm long of mass 14.4521 grams and 0 °C. This definition became the basis for the legal definition of the ohm in several countries. In 1908, this definition was adopted by scientific representatives from several countries at the International Conference on Electric Units and Standards in London. [11] The mercury column standard was maintained until the 1948 General Conference on Weights and Measures, at which the ohm was redefined in absolute terms instead of as an artifact standard. 
By the end of the 19th century, units were well understood and consistent. Definitions would change with little effect on commercial uses of the units. Advances in metrology allowed definitions to be formulated with a high degree of precision and repeatability. ### Historical units of resistance Unit [12] DefinitionValue in B.A. ohmsRemarks Absolute foot/second × 107using imperial units0.3048considered obsolete even in 1884 Thomson's unitusing imperial units0.3202100 million feet/second, considered obsolete even in 1884 Jacobi copper unitA specified copper wire 25 feet long weighing 345 grains0.6367Used in 1850s Weber's absolute unit × 107Based on the metre and the second0.9191 Siemens mercury unit 1860. A column of pure mercury0.9537100 cm and 1 mm2 cross section at 0 °C British Association (B.A.) "ohm" 18631.000Standard coils deposited at Kew Observatory in 1863 [13] Digney, Breguet, Swiss9.266–10.420Iron wire 1 km long and 4 square mm cross section Matthiessen13.59One mile of 1/16 inch diameter pure annealed copper wire at 15.5 °C Varley25.61One mile of special 1/16 inch diameter copper wire German mile57.44A German mile (8,238 yard) of iron wire 1/6th inch diameter Abohm 10−9Electromagnetic absolute unit in centimeter–gram–second units Statohm 8.987551787 × 1011Electrostatic absolute unit in centimeter–gram–second units ## Realization of standards The mercury column method of realizing a physical standard ohm turned out to be difficult to reproduce, owing to the effects of non-constant cross section of the glass tubing. Various resistance coils were constructed by the British Association and others, to serve as physical artifact standards for the unit of resistance. The long-term stability and reproducibility of these artifacts was an ongoing field of research, as the effects of temperature, air pressure, humidity, and time on the standards were detected and analyzed. Artifact standards are still used, but metrology experiments relating accurately-dimensioned inductors and capacitors provided a more fundamental basis for the definition of the ohm. Since 1990 the quantum Hall effect has been used to define the ohm with high precision and repeatability. The quantum Hall experiments are used to check the stability of working standards that have convenient values for comparison. [14] Following the 2019 redefinition of the SI base units, in which the ampere and the kilogram were redefined in terms of fundamental constants, the ohm is now also defined in terms of these constants. ## Symbol The symbol Ω was suggested, because of the similar sound of ohm and omega, by William Henry Preece in 1867. [15] In documents printed before WWII the unit symbol often consisted of the raised lowercase omega (ω), such that 56 Ω was written as 56ω. Historically, some document editing software applications have used the Symbol typeface to render the character Ω. [16] Where the font is not supported, a W is displayed instead ("10 W" instead of "10 Ω", for instance). As W represents the watt, the SI unit of power, this can lead to confusion, making the use of the correct Unicode code point preferable. Where the character set is limited to ASCII, the IEEE 260.1 standard recommends substituting the symbol ohm for Ω. In the electronics industry it is common to use the character R instead of the Ω symbol, thus, a 10 Ω resistor may be represented as 10R. This is the British standard BS 1852 code. It is used in many instances where the value has a decimal place. For example, 5.6 Ω is listed as 5R6. 
This method avoids overlooking the decimal point, which may not be rendered reliably on components or when duplicating documents. Unicode encodes the symbol as U+2126OHM SIGN, distinct from Greek omega among letterlike symbols, but it is only included for backwards compatibility and the Greek uppercase omega character U+03A9ΩGREEK CAPITAL LETTER OMEGA (HTML &#937; ·&Omega;) is preferred. [17] In DOS and Windows, the alt code ALT 234 may produce the Ω symbol. In Mac OS, ⌥ Opt+Z does the same. ## Notes and references 1. The NIST Guide to the SI: 9.3 Spelling unit names with prefixes reports that IEEE/ASTM SI 10-2002 IEEE/ASTM Standard for Use of the International System of Units (SI): The Modern Metric System states that there are three cases in which the final vowel of an SI prefix is commonly omitted: megohm, kilohm, and hectare, but that in all other cases in which the unit name begins with a vowel, both the final vowel of the prefix and the vowel of the unit name are retained and both are pronounced. 2. Hunt, Bruce J (1994). "The Ohm Is Where the Art Is: British Telegraph Engineers and the Development of Electrical Standards" (PDF). Osiris. 2nd. 9: 48–63. doi:10.1086/368729. Archived from the original (PDF) on 8 March 2014. Retrieved 27 February 2014. 3. Werner Siemens (1860), "Vorschlag eines reproducirbaren Widerstandsmaaßes", Annalen der Physik und Chemie (in German), 186 (5), pp. 1–20, Bibcode:1860AnP...186....1S, doi:10.1002/andp.18601860502 4. Clark, Latimer; Bright, Sir Charles (1861-11-09). "Measurement of Electrical Quantities and Resistance". The Electrician . 1 (1): 3–4. Retrieved 27 February 2014. 5. Williamson, Professor A; Wheatstone, Professor C; Thomson, Professor W; Miller, Professor WH; Matthiessen, Dr. A; Jenkin, Mr. Fleeming (September 1862). Provisional Report of the Committee appointed by the British Association on Standards of Electrical Resistance. Thirty-second Meeting of the British Association for the Advancement of Science. London: John Murray. pp. 125–163. Retrieved 2014-02-27. 6. Williamson, Professor A; Wheatstone, Professor C; Thomson, Professor W; Miller, Professor WH; Matthiessen, Dr. A; Jenkin, Mr. Fleeming; Bright, Sir Charles; Maxwell, Professor; Siemens, Mr. CW; Stewart, Mr. Balfour; Joule, Dr.; Varley, Mr. CF (September 1864). Report of the Committee on Standards of Electrical Resistance. Thirty-fourth Meeting of the British Association for the Advancement of Science. London: John Murray. p. Foldout facing page 349. Retrieved 2014-02-27. 7. Williamson, Professor A; Wheatstone, Professor C; Thomson, Professor W; Miller, Professor WH; Matthiessen, Dr. A; Jenkin, Mr. Fleeming; Bright, Sir Charles; Maxwell, Professor; Siemens, Mr. CW; Stewart, Mr. Balfour; Varley, Mr. CF; Foster, Professor GC; Clark, Mr. Latimer; Forbes, Mr. D.; Hockin, Mr. Charles; Joule, Dr. (September 1867). Report of the Committee on Standards of Electrical Resistance. Thirty-seventh Meeting of the British Association for the Advancement of Science. London: John Murray. p. 488. Retrieved 2014-02-27. 8. "System of measurement units". Engineering and Technology History Wiki. Retrieved 13 April 2018. 9. "Units, Physical". Encyclopædia Britannica. 27 (11th ed.). 1911. p. 742. 10. Gordon Wigan (trans. and ed.), Electrician's Pocket Book, Cassel and Company, London, 1884 11. R. Dzuiba and others, Stability of Double-Walled Maganin Resistors in NIST Special Publication Proceedings of SPIE--the International Society for Optical Engineering, The Institute, 1988 pages 63-64 12. 
Preece, William Henry (1867), "The B.A. unit for electrical measurements", Philosophical Magazine , 33, p. 397, retrieved 26 February 2017 13. E.g. recommended in HTML 4.01: "HTML 4.01 Specification". W3C. 1998. Section 24.1 "Introduction to character entity references". Retrieved 2018-11-22. 14. Excerpts from The Unicode Standard, Version 4.0 , accessed 11 October 2006
proofpile-shard-0030-168
{ "provenance": "003.jsonl.gz:169" }
## Books by Independent Authors ### Chapter VI. Compact and Locally Compact Groups #### Abstract This chapter investigates several ways that groups play a role in real analysis. For the most part the groups in question have a locally compact Hausdorff topology. Section 1 introduces topological groups, their quotient spaces, and continuous group actions. Topological groups are groups that are topological spaces in such a way that multiplication and inversion are continuous. Their quotient spaces by subgroups are of interest when they are Hausdorff, and this is the case when the subgroups are closed. Many examples are given, and elementary properties are established for topological groups and their quotients by closed subgroups. Sections 2–4 investigate translation-invariant regular Borel measures on locally compact groups and invariant measures on their quotient spaces. Section 2 deals with existence and uniqueness in the group case. A left Haar measure on a locally compact group $G$ is a nonzero regular Borel measure invariant under left translations, and right Haar measures are defined similarly. The theorem is that left and right Haar measures exist on $G$, and each kind is unique up to a scalar factor. Section 3 addresses the relationship between left Haar measures and right Haar measures, which do not necessarily coincide. The relationship is captured by the modular function, which is a certain continuous homomorphism of the group into the multiplicative group of positive reals. The modular function plays a role in constructing Haar measures for complicated groups out of Haar measures for subgroups. Of special interest are “unimodular” locally compact groups $G$, i.e., those for which the left Haar measures coincide with the right Haar measures. Every compact group, and of course every locally compact abelian group, is unimodular. Section 4 concerns translation-invariant measures on quotient spaces $G/H$. For the setting in which $G$ is a locally compact group and $H$ is a closed subgroup, the theorem is that $G/H$ has a nonzero regular Borel measure invariant under the action of $G$ if and only if the restriction to $H$ of the modular function of $G$ coincides with the modular function of $H$. In this case the $G$ invariant measure is unique up to a scalar factor. Section 5 introduces convolution on unimodular locally compact groups $G$. Familiar results valid for the additive group of Euclidean space, such as those concerning convolution of functions in various $L^{p}$ classes, extend to be valid for such groups $G$. Sections 6–8 concern the representation theory of compact groups. Section 6 develops the elementary theory of finite-dimensional representations and includes some examples, Schur orthogonality, and properties of characters. Section 7 contains the Peter–Weyl Theorem, giving an orthonormal basis of $L^{2}$ in terms of irreducible representations and concluding with an Approximation Theorem showing how to approximate continuous functions on a compact group by trigonometric polynomials. Section 8 shows that infinite-dimensional unitary representations of compact groups decompose canonically according to the irreducible finite-dimensional representations of the group. An example is given to show how this theorem may be used to take advantage of the symmetry in analyzing a bounded operator that commutes with a compact group of unitary operators. The same principle applies in analyzing partial differential operators. #### Chapter information Source Anthony W. 
Knapp, Advanced Real Analysis, Digital Second Edition, Corrected version (East Setauket, NY: Anthony W. Knapp, 2017), 212-274 Dates First available in Project Euclid: 21 May 2018 Permanent link to this document https://projecteuclid.org/euclid.bia/1526871320 Digital Object Identifier doi:10.3792/euclid/9781429799911-6 Rights
proofpile-shard-0030-169
{ "provenance": "003.jsonl.gz:170" }
/ hep-ex CERN-PH-EP-2015-070 Search for heavy Majorana neutrinos with the ATLAS detector in pp collisions at $\sqrt{s} = 8$ TeV Pages: 28 Abstract: A search for heavy Majorana neutrinos in events containing a pair of high-$p_{\mathrm{T}}$ leptons of the same charge and high-$p_{\mathrm{T}}$ jets is presented. The search uses $20.3 \mathrm{fb}^{-1}$ of $pp$ collision data collected with the ATLAS detector at the CERN Large Hadron Collider with a centre-of-mass energy of $\sqrt{s} = 8$ TeV. The data are found to be consistent with the background-only hypothesis based on the Standard Model expectation. In the context of a Type-I seesaw mechanism, limits are set on the production cross-section times branching ratio for production of heavy Majorana neutrinos in the mass range between 100 and 500 GeV. The limits are subsequently interpreted as limits on the mixing between the heavy Majorana neutrinos and the Standard Model neutrinos. In the context of a left-right symmetric model, limits on the production cross-section times branching ratio are set with respect to the masses of heavy Majorana neutrinos and heavy gauge bosons $W_{\mathrm{R}}$ and $Z'$. Note: *Temporary entry*; 28 pages plus author list (44 pages total), 11 figures, 6 tables, submitted to JHEP, all figures including auxiliary figures are available at http://atlas.web.cern.ch/Atlas/GROUPS/PHYSICS/PAPERS/EXOT-2012-24/ Total numbers of views: 582 Numbers of unique views: 358
proofpile-shard-0030-170
{ "provenance": "003.jsonl.gz:171" }
# Choose the right answer from the four alternatives given below. (i) Migrations change the number 77 views Choose the right answer from the four alternatives given below. (i) Migrations change the number, distribution and composition of the population in (a) The area of departure (b) Both the area of departure and arrival (c) The area of arrival (d) None of the above by (128k points) selected (b) Both the area of departure and arrival
proofpile-shard-0030-171
{ "provenance": "003.jsonl.gz:172" }
## Archive for the ‘Book Review’ Category ### Clojure High Performance Programming My inaugural book-review is for Clojure High Performance Programming, by Shantanu Kumar. Unfortunately, for my first review, I cannot be that positive about the book. I found it rather disorganised and chaotic. Concepts are introduced and then briefly discussed, occasionally cross-referenced later. It is often not clear what the relevance of this to programming in Clojure. For instance, we are introduced to branch prediction in modern processors. Not something I know about, so perhaps useful to understand. But it’s not explained why this would be useful to know about. Are there any code examples that show how branch prediction can impact on the performance of my code? Likewise, different forms of CPU interconnect. Or L1,2 and 3 caches. I have the general impression that, as a Clojure programming I am a long way from the CPU; there is not really any sample code given showing how the size of the caches can really impact on my performance. Worse those issues which can impact on the Clojure programmer are scantily covered. For example, the following (decompiled) java code is shown: public Object invoke(Object x, Object y){ x = null; y = null; return Numbers.multiply(x,y); } Partly, this demonstrates auto-boxing, but as a Java programmer, the code makes no sense, as it calls Numbers.multiple(null,null). It’s never explained how or why this makes sense (Clojure is clearing locals, something which works in byte-code, but cannot be translated into Java source). Type hinting (which I have just had the joy of adding to tawny-owl) is similarly dealt with in a little under 2 pages, despite having a potentially large impact on the performance of (some) Clojure libraries. In short, as a series of vignettes about different aspects of performance it’s interesting enough; but the whole is no greater than the parts, and it left me with little increased knowlege of Clojure, nor how to make it perform well. ## Bibliography ### Book Reviews This year I have been on a bit of a mission. I decided that having being here for 8 years, I would actually use the library. So I have started off by requesting books and reading them. It’s been a while since I have regularly read books and it’s been quite an interesting experience. I’ve remembered that reading tech books is quite a reflective process, away from the computer. It’s a less stressful, although perhaps more time consuming experience than hunting through the web, reading documentation or code until you understand what ever it is you are reading about. I don’t know how long this trend will continue, but while it is, I thought I would write some short book reviews on the books that I have read; as normal, mostly for my own purposes; like many lecturers I get asked for book recommendations, so recording my impressions seems sensible.
proofpile-shard-0030-172
{ "provenance": "003.jsonl.gz:173" }
# Is quantum physics nonlocal? + 4 like - 0 dislike 857 views In which sense can one say that quantum physics is nonlocal (if it is at all)? asked Nov 3, 2015 ## 2 Answers + 2 like - 0 dislike There are two significantly different notions of nonlocality in use - the violation of causal locality giving rise to equal-time causal commutation rules and the violation of Bell inequalities and the like. Let me call the former causal nonlocality and the latter Bell nonlocality. More precisely, causal locality is a condition ensuring that signals, matter, and energy cannot travel faster than light. On the other hand, Bell locality is the assumption that the state of an extended system factors into the states of localized parts of the system. Roughly speaking, this means that complete information about the state of region A and complete information about the state of region B is equivalent to complete information about the union of regions A and B, and this information propagates independently if A and B are disjoint. The condition characterizing Bell locality is satisfied for classical point particles but not for classical coherent waves extending over the union of A and B. The Maxwell equations in vacuum provide examples of the latter, although they satisfy causal locality. Thus causal locality and Bell locality are two essentially different concepts. According to our present knowledge, causal nonlocality is not realized in the universe. If it were, it would wreck the basis of all our subatomic quantum field theory. I nowhere claim (or see a claim of) anything that could support causal nonlocality. On the other hand, Bell nonlocality has been amply demonstrated experimentally for quantum particles (photons, electrons, and even small molecules). It is an intrinsic effect of approximating the analysis experiments whose fundamental description would need quantum field theory by a simpler analysis in terms of a few particle picture. Bell locality applies to particles only and loses its meaning for fields, which are intrinsically Bell nonlocal, even classically. Thus quantum physics is local in the causal sense but nonlocal in the Bell sense. answered Nov 3, 2015 by (15,757 points) edited Nov 22, 2015 I think this needs to be elaborated a little: "Bell locality applies to particles only and loses its meaning for fields, which are intrinsically Bell nonlocal, even classically." I take it you mean a classical stochastic field that has nontrivial fluctuations (and hence has correlations at space-like separation), given that Bell and CHSH inequalities are statistical/probabilistic statements. @PeterMorgan: This has nothing to do with stochastic fields; it is enough that the detector is stochastic. For example, the photo effect is primarily due to the stochastic nature of the detector, and is present in models where the field is classical, with only tiny quantum corrections for ordinary light. To be more specific in the present context, experiments with a classical electromagnetic field are able to violate the Bell inequalities in a similar way as experiments with particles. Your answer raises the question whether general relativity is nonlocal, i.e. whether it allows solutions were neither of your locality conditions are satisfied. @ThomasKlimpel See my answer @ArnoldNeumaier 1. 
Does your answer imply that the nonlocality required to make hidden variables (entities of physical reality) compatible with experiments violating Bell inequalities is of the same sort as coherent states and those described in your last paragraph? 2. According to your answer, entangled states violate Bell locality, right?

+ 1 like - 0 dislike

I briefly discuss 9 different notions of (non)locality that are used in the literature, namely, causality or Einsteinian locality, microcausality, lagrangian locality, gravitational nonlocality, cluster decomposition principle, nonlocal collapse of the wave function, nonlocal states, entanglement, and local entities of physical reality. http://physics.stackexchange.com/questions/34650/definitions-locality-vs-causality/34675#34675

answered Dec 2, 2015 by (885 points)
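For concreteness, the Bell (non)locality discussed in the first answer is what CHSH-type experiments quantify. Writing $E(a,b)$ for the correlation of outcomes at detector settings $a$ and $b$, any model in which the correlations factorize over a common variable $\lambda$ (the Bell-local assumption) must satisfy $|E(a,b)-E(a,b')+E(a',b)+E(a',b')|\le 2$, whereas quantum mechanics allows values up to the Tsirelson bound $2\sqrt{2}$; the observed violations are the experimental demonstrations of Bell nonlocality referred to above.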
20.08.2019 00:00, kalpana8955

# Why is the very difficult to predict the year
# Make a list with uniform spacing regardless of the characters and number of text lines, possibly with minipages or a table I am trying to make a list with two columns, where I can used different font sizes and have single- or multi-row columns, while adhering to the page margins and aligning the text properly regardless of what letters are used and how many rows are in each cell. It looked straight-forward to me at first, but then I realized that I was not able to get all of it working at once. ## The following need to work: • Top alignment when different font sizes are used in adjacent cells • The page should fill the margins (as specified by the class, or, as in the example below, explicitly. This implies that small margins around the text in the minipages should not be added, or that it should be compensated for. • The spacing around the minipages should not be different if only one line of text is present compared to when several lines are present. • The spacing should not be different for the minipages at the top compared to all other minipages (this was the case before I added the outer minipage, in which both of the inner minipages are placed in the example below). • It should be possible to right-align part of the text in the minipage (as is done with \hfill in the example below). • All spacing at the top should be for the tallest character of the font used, regardless of the actual characters used. For example: If one minipage contains only the letter 'A', and the minipage next to it only the letter 'a', the 'a' of the second minipage should stay at the same height as it would have, if the page would have contained for example 'al' (the height of a tall letter should always define the top of a line, even if no tall letter was used). Thus, the top of the 'a' should be slightly below the top of the 'A'. • All spacing at the bottom should be for the deepest character of the font used, regardless of the actual characters used. For example: If one minipage contains only the letter 'A', the minipage below it should be rendered at the same position as it would have, if the minipage above would also have contained a downwards-protruding letter, for example 'Ag'. It may be that this is easier to achieve by using a table, but my attempts at getting the alignment correct regardless of whether only short or also tall letters are used and whether only non-downwards-protruding or also downwards-protruding letters are used has been unsuccessful. Please have a look at this previous question of mine (although I have accepted an answer, the alignment later turned out to not handle all of the described situations correctly): Top-aligned table cells with different font sizes and controlled spacing. It is not important to me how this is achieved (minipages, parboxes, tabular, tabu, tabularx, longtable, ...) -- any solution that can get the alignment right is welcome. Ideally, I would also prefer to get rid of all extra space of the minipage, so that I can get the margins to be exactly what they are specified as (the minipages seem to have some extra space added to them, and I have been unable to remove all of it). That is why I cannot simply put two minipages of width 0.5\textwidth, but have to use -15.73111p to compensate for the extra space when I define the second (right-hand side) minipage of each pair (I arrived at that value from compiling and then looking at the warning for the bad box, which specified the amount of oversize). 
\documentclass[paper=a4,fontsize=11pt]{scrartcl} % KOMA-article class \usepackage[utf8]{inputenc} \usepackage[T1]{fontenc} \usepackage[english]{babel} % English language/hyphenation \usepackage{geometry} \newcommand{\entry}[8]{ % #1 Heading/time #2 #3 alignment #4 #5 width #6 horisontal separation #7 vertical separation #8 description \noindent\hangindent=0em\hangafter=0 % Indentation \begin{minipage}[#3]{\dimexpr(#4+#5)} % \begin{minipage}[#2]{\dimexpr(#4)} % \vspace{0pt} % #1 % \par\vspace{0pt} % \end{minipage} % \hspace{#6} % \begin{minipage}[#3]{\dimexpr#5-15.73111pt} % \vspace{0pt} % #8 % \par\vspace{0pt} % \end{minipage} % \par\vspace{0pt} % \end{minipage} % \vspace{#7} % } % \newlength{\lwidth} \newlength{\rwidth} \newlength{\hsep} \newlength{\vsep} \newlength{\spacebox} \settowidth{\spacebox}{88} \newlength{\topm} \newlength{\footm} \newlength{\rightm} \newlength{\leftm} \setlength{\parskip}{0cm} \setlength{\hsep}{1em} \setlength{\vsep}{2ex} \setlength{\topm}{20mm} \setlength{\footm}{\dimexpr(\topm)} \setlength{\rightm}{\dimexpr(3\topm/2*\paperwidth/\paperheight)} \setlength{\leftm}{\rightm} \newgeometry{top=\topm,bottom=\footm,right=\rightm,left=\leftm} \setlength{\lwidth}{2.8cm} \setlength{\rwidth}{\dimexpr(\textwidth-\lwidth-\hsep)} \begin{document} \entry{\Huge\textbf{{BIG}}}{t}{t}{\lwidth}{\rwidth}{\hsep}{\vsep}{Some text here.\hfill Text flush with the right margin.\\Another row\\Another\\Another} \entry{Two\\Lines}{t}{t}{\lwidth}{\rwidth}{\hsep}{\vsep}{A longer text here, so that there will be several rows, gives the proper spacing below it. There must also be downwards-protruding letters (like 'p' and 'g').} \entry{One Line}{t}{t}{\lwidth}{\rwidth}{\hsep}{\vsep}{This one-line entry has unwanted extra space under it.} \entry{Below One Line}{t}{t}{\lwidth}{\rwidth}{\hsep}{\vsep}{This entry is slightly lower than it should, due to the added extra space below the single line.} \entry{Not deep}{t}{t}{\lwidth}{\rwidth}{\hsep}{\vsep}{This entry contains no letters that protrude downwards in the second line. 
Just letters that are above the base line.} \entry{Below Not deep}{t}{t}{\lwidth}{\rwidth}{\hsep}{\vsep}{This entry ends up to close to the line above, since the line above contains no letters that protrude downwards.} \end{document} - You might need to play with the spacing a bit but I'd start with a much simpler markup, something like \documentclass[paper=a4,fontsize=11pt]{scrartcl} % KOMA-article class \usepackage[utf8]{inputenc} \usepackage[T1]{fontenc} \usepackage[english]{babel} % English language/hyphenation \usepackage{geometry} \newlength{\lwidth} \newlength{\rwidth} \newlength{\hsep} \newlength{\vsep} \newlength{\spacebox} \settowidth{\spacebox}{88} \newlength{\topm} \newlength{\footm} \newlength{\rightm} \newlength{\leftm} \setlength{\parskip}{0cm} \setlength{\hsep}{1em} \setlength{\vsep}{2ex} \setlength{\topm}{20mm} \setlength{\footm}{\dimexpr(\topm)} \setlength{\rightm}{\dimexpr(3\topm/2*\paperwidth/\paperheight)} \setlength{\leftm}{\rightm} \newgeometry{top=\topm,bottom=\footm,right=\rightm,left=\leftm} \setlength{\lwidth}{2.8cm} \setlength{\rwidth}{\dimexpr(\textwidth-\lwidth-\hsep)} \def\entr#1{% \raisebox{\dimexpr\ht\strutbox-\height\relax}{\begin{tabular}[t]{@{}l@{}}#1\strut\end{tabular}}} \usepackage{tabularx} \begin{document} \noindent\begin{tabularx}{\textwidth}{ @{} l >{\let\\\newline}X@{}} \entr{\Huge\textbf{BIG}}& Some text here.\hfill Text flush with the right margin.\\Another row\\Another\\Another\tabularnewline \entr{Two\\Lines}&A longer text here, so that there will be several rows, gives the proper spacing below it. There must also be downwards-protruding letters (like 'p' and 'g').\tabularnewline \entr{One Line}&This one-line entry has unwanted extra space under it.\tabularnewline \entr{Below One Line}&This entry is slightly lower than it should, due to the added extra space below the single line.\tabularnewline \entr{Not deep}&This entry contains no letters that protrude downwards in the second line. Just letters that are above the base line.\tabularnewline \entr{Below Not deep}&This entry ends up to close to the line above, since the line above contains no letters that protrude downwards. \end{tabularx} \end{document} - Simple is always preferable, and I think that your approach is better than my minipages. Looking at the result, everything seems to work, with the minor quirk that the top of BIG is not perfectly aligned with the top of the text in the right-hand column. Even applying the \entr command to the right-hand column did not help (and also prevented \hfill from working). It is especially noticeable if the font of the first line of the right-hand column is made bigger. Although your example is definitely better than what I had, is there a way to improve the top alignment? –  hjb981 Mar 25 '13 at 15:46 @hjb981 I added a \strut which basically covers the height of a paren and will be a bit bigger than a B as you said that you didn't want the position to depend on the letters used. You could instead add a \vphantom{Q} if you never want anything bigger than a capital letter or deeper descender than Q... –  David Carlisle Mar 25 '13 at 16:34 Have I understood it correctly that the height of the strutbox \ht\strutbox minus another height \height (what is this) is used as a value for how much to shift the position, and an invisible character \strut (or \vphantom{Q}) is placed next to the text to be typeset, to make sure that the box around the typeset text has the same (minimum) height even if a short letter is used? 
Using \vphantom{Q} does not align it perfectly, but for the time being I have made a work-around by simply adding 1.5pt to the shift value (for the BIG entry only, the others now have a separate command). –  hjb981 Mar 25 '13 at 17:21 Why is it that if I increase the size of the right-hand column, the left-hand column (the one with the \entr command) does not follow in alignment, and why is it not possible to use \entr in both columns and thus align them? I do not follow exactly what happens when the columns are shifted as suggested in the answer, so please bear with me if I ask about things that might seem obvious. –  hjb981 Mar 25 '13 at 17:25 you could use entr in both columns but I thought you wanted the right column to be justified text but entr is a single column tabular with an l column so has no line breaking. The offset in entry adjusts its height to the height of a strut in the normal font size, if your text in the other column is bigger you'll want a bigger font, easiest would be to have another argument to \entr so you could pass in a font size so that \strut would be re-calculated before the adjustment. –  David Carlisle Mar 25 '13 at 17:32
# Overview Some of you might be aware of the Kolakoski Sequence (A000002), a well know self-referential sequence that has the following property: It is a sequence containing only 1's and 2's, and for each group of 1's and twos, if you add up the length of runs, it equals itself, only half the length. In other words, the Kolakoski sequence describes the length of runs in the sequence itself. It is the only sequence that does this except for the same sequence with the initial 1 deleted. (This is only true if you limit yourself to sequences made up of 1s and 2s - Martin Ender) # The Challenge The challenge is, given a list of integers: • Output -1 if the list is NOT a working prefix of the Kolakoski sequence. • Output the number of iterations before the sequence becomes [2]. # The Worked Out Example Using the provided image as an example: [1,2,2,1,1,2,1,2,2,1,2,2,1,1,2,1,1] # Iteration 0 (the input). [1,2,2,1,1,2,1,2,2,1,2] # Iteration 1. [1,2,2,1,1,2,1,1] # Iteration 2. [1,2,2,1,2] # Iteration 3. [1,2,1,1] # Iteration 4. [1,1,2] # Iteration 5. [2,1] # Iteration 6. [1,1] # Iteration 7. [2] # Iteration 8. Therefore, the resultant number is 8 for an input of [1,2,2,1,1,2,1,2,2,1,2,2,1,1,2,1,1]. 9 is also fine if you are 1-indexing. # The Test Suite (You can test with sub-iterations too) ------------------------------------------+--------- Truthy Scenarios | Output ------------------------------------------+--------- [1,1] | 1 or 2 [1,2,2,1,1,2,1,2,2,1] | 6 or 7 [1,2,2,1,1,2,1,2,2,1,2,2,1,1,2,1,1] | 8 or 9 [1,2] | 2 or 3 ------------------------------------------+--------- Falsy Scenarios | Output ------------------------------------------+--------- [4,2,-2,1,0,3928,102904] | -1 or a unique falsy output. [1,1,1] | -1 [2,2,1,1,2,1,2] (Results in [2,3] @ i3) | -1 (Trickiest example) [] | -1 [1] | -1 If you're confused: Truthy: It will eventually reach two without any intermediate step having any elements other than 1 and 2. – Einkorn Enchanter 20 hours ago Falsy: Ending value is not [2]. Intermediate terms contain something other than something of the set [1,2]. A couple other things, see examples. This is , lowest byte-count will be the victor. • Can we use any falsey value instead of just -1? – mbomb007 Jun 28 '17 at 20:05 • What do you mean by "NOT a working prefix of the Kolakoski sequence"? I had assumed you meant the list does not eventually reach [2] until I saw the [2,2,1,1,2,1,2] test case. – ngenisis Jun 28 '17 at 21:42 • @ngenisis It will eventually reach two without any intermediate step having any elements other than 1 and 2. – Wheat Wizard Jun 29 '17 at 0:27 • Might be a good idea to add [1] as a test case. – Emigna Jun 29 '17 at 9:54 • @mbomb007 any distinct value is fine. A positive integer is not fine. If you're 1-indexing 0 is fine. "False" is fine. Erroring is fine. Any non-positive return value is fine, even -129.42910. – Magic Octopus Urn Jun 29 '17 at 15:33 39 bytes saved thanks to Ørjan Johansen import Data.List f[2]=0 f y@(_:_:_)|all(elem[1,2])y=1+f(length<$>group y) Try it online! This errors on bad input. • f (and consequently !) can be shortened a lot by using lazy production + span/length instead of accumulators. Try it online! – Ørjan Johansen Jun 29 '17 at 2:41 • Seem to enter an infinite loop for [1] – Emigna Jun 29 '17 at 9:59 • @Emigna Darn. It costs me 6 bytes to fix it, but I've fixed it. – Wheat Wizard Jun 29 '17 at 14:32 • @ØrjanJohansen That seems like a good tip, but I'm not proficient enough in Haskell to understand whats going on there. 
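For reference, the reduction described in the challenge can be written out ungolfed. The following Python sketch (the function name kolakoski_iterations is my own) returns the 1-indexed iteration count, e.g. 9 for the worked example, and -1 for lists that are not valid prefixes:

from itertools import groupby

def kolakoski_iterations(seq):
    # Repeatedly replace the list by its run lengths, counting iterations
    # (1-indexed); return -1 as soon as the list stops being a valid prefix.
    count = 0
    while True:
        if any(x not in (1, 2) for x in seq):
            return -1                    # a value other than 1 or 2 appeared
        count += 1
        if seq == [2]:
            return count                 # reached [2]
        if len(seq) < 2:
            return -1                    # [], [1], etc. can never reach [2]
        seq = [len(list(g)) for _, g in groupby(seq)]  # run-length reduction

print(kolakoski_iterations([1,2,2,1,1,2,1,2,2,1,2,2,1,1,2,1,1]))  # 9
print(kolakoski_iterations([2,2,1,1,2,1,2]))                      # -1

The golfed answers in this thread implement the same reduce-until-[2] loop in far fewer bytes.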
If you want you can post it as your own answer but at least as long as I don't know how your solution works, I'm not going to add it to my answer. :) – Wheat Wizard Jun 29 '17 at 14:36 • I then realized this is a case where an import is actually shorter (and also simpler to understand): import Data.List;f l=length<$>group l. (<$> is a synonym for map here.) Also, instead of having two different -1 cases it is shorter to use a @(_:_:_) pattern to force the main case to only match length >=2 lists. Try it online! – Ørjan Johansen Jun 30 '17 at 0:44 # 05AB1E, 22 bytes [Dg2‹#γ€gM2›iX]2QJiNë® Try it online! Explanation [ # start a loop D # duplicate current list g2‹# # break if the length is less than 2 γ # group into runs of consecutive equal elements €g # get length of each run M2›iX # if the maximum run-length is greater than 2, push 1 ] # end loop 2QJi # if the result is a list containing only 2 N # push the iteration counter from the loop ë® # else, push -1 # implicitly output top of stack • Fails for [1,1,2,2,1,2,1,1,2,2,1,2,2,1,1,2,1,1] – Weijun Zhou Mar 24 '18 at 10:04 • @WeijunZhou: Thanks, fixed! – Emigna Mar 24 '18 at 10:50 • You may have forgotten to update the link ... – Weijun Zhou Mar 24 '18 at 10:56 • @WeijunZhou: Indeed I had. Thanks again :) – Emigna Mar 24 '18 at 10:58 # SCALA, 290(282?) chars, 290(282?) bytes It took me sooo loong ... But I'm finally done! with this code : var u=t var v=Array[Int]() var c= -1 var b=1 if(!u.isEmpty){while(u.forall(x=>x==1|x==2)){c+=1 if(u.size>1){var p=u.size-1 for(i<-0 to p){if(b==1){var k=u(i) v:+=(if(i==p)1 else if(u(i+1)==k){b=0 if(p-i>1&&u(i+2)==k)return-1 2}else 1)} else b=1} u=v v=v.take(0)}else if(u(0)==2)return c}} c I don't know if I should count the var u=t into the bytes, considering I do not use t during the algorithm (the copy is just to get a modifyable var instead of the parameter t considered as val - thanks ScaLa). Please tell me if I should count it. Hard enough. Try it online! PS : I was thinking of doing it recursively, but I'll have to pass a counter as a parameter of the true recursive "subfunction" ; this fact makes me declare two functions, and these chars/bytes are nothing but lost. EDIT : I had to change (?) because we're not sure we should take in count [1] case. So here is the modified code : var u=t var v=Array[Int]() var c= -1 var b=1 if(!u.isEmpty){try{t(1)}catch{case _=>return if(t(0)==2)0 else -1} while(u.forall(x=>x==1|x==2)){c+=1 if(u.size>1){var p=u.size-1 for(i<-0 to p){if(b==1){var k=u(i) v:+=(if(i==p)1 else if(u(i+1)==k){b=0 if(p-i>1&&u(i+2)==k)return-1 2}else 1)} else b=1} u=v v=v.take(0)}else if(u(0)==2)return c}} c It's not optimized (I have a duplicate "out" for the same conditions : when I get to [2] and when param is [2] is treated separatedly). NEW COST = 342 (I didn't modify the title on purpose) • Seem to enter an infinite loop for [1] – Emigna Jun 29 '17 at 10:01 • Yep, but as said by the OP (as I understood at least) : "with the initial 1 deleted" and "Output the number of iterations before the sequence becomes [2]" – V. Courtois Jun 29 '17 at 10:03 • To my understanding, [1] never reaches [2] and should thus return -1. – Emigna Jun 29 '17 at 10:06 • I see. So do you think I should put a litte condition at start? Thanks for your advice. – V. Courtois Jun 29 '17 at 10:09 • I don't know scala but I assume you can just modify the loop to stop when the length of the list is smaller than 2. You already seem to have the check that the element is 2 at the end. 
– Emigna Jun 29 '17 at 10:11 # Jelly, 26252221 20 bytes FQœ-2R¤ ŒgL€µÐĿṖ-LÇ? Try it online! This code actually wasn't working correctly until 20 bytes and I didn't even notice; it was failing on the [2,2] test case. Should work perfectly now. # Jelly, 17 bytes ŒgL€$ÐĿµẎḟ1,2ȯL_2 Try it online! # JavaScript, 146 142 bytes First try in code golfing, it seems that the "return" in the larger function is quite tedious... Also, the checking of b=1 and b=2 takes up some bytes... Here's the code: f=y=>{i=t=!y[0];while(y[1]){r=[];c=j=0;y.map(b=>{t|=b-1&&b-2;if(b-c){if(j>0)r.push(j);c=b;j=0}j++});(y=r).push(j);i++}return t||y[0]-2?-1:0^i} Explanation f=y=>{/*1*/} //function definition //Inside /*1*/: i=t=!y[0]; //initialization //if the first one is 0 or undefined, //set t=1 so that it will return -1 //eventually, otherwise i=0 while(y[1]){/*2*/} //if there are 2+ items, start the loop //Inside /*2*/: r=[];c=j=0; //initialization y.map(b=>{/*3*/}); //another function definition //Inside /*3*/: t|=b-1&&b-2; //if b==1 or b==2, set t=1 so that the //entire function returns -1 if(b-c){if(j>0)r.push(j);c=b;j=0} //if b!=c, and j!=0, then push the //count to the array and reset counter j++ //counting duplicate numbers (y=r).push(j);i++ //push the remaining count to the array //and proceed to another stage return t||y[0]-2?-1:0^i //if the remaining element is not 2, or //t==1 (means falsy), return -1, //otherwise return the counter i Test data (using the given test data) l=[[1,1],[1,2,2,1,1,2,1,2,2,1],[1,2,2,1,1,2,1,2,2,1,2,2,1,1,2,1,1],[1,2],[4,2,-2,1,0,3928,102904],[1,1,1],[2,2,1,1,2,1,2],[]]; console.log(l.map(f)); //Output: (8) [1, 6, 8, 2, -1, -1, -1, -1] Edit 1: 146 -> 142: Revoking my edit on reducing bytes, because this affects the output; and some edit on the last statement • f=a=>{for(i=t=!a[0];a[1];)r=[],c=j=0,a.map(a=>{t|=a-1&&a-2;a-c&&(0<j&&r.push(j),c=a,j=0);j++}),(a=r).push(j),i++;return t||a[0]-2?-1:0^i} saves 5 bytes (for loop instead of while; commas vs braces; && vs if). You can use google's closure compiler (closure-compiler.appspot.com) to get these optimisations done for you – Oki Jun 29 '17 at 12:05 ## JavaScript (ES6), 12712695 80 bytes g=(a,i,p,b=[])=>a.map(x=>3>x&0<x?(x==p?b[0]++:b=[1,...b],p=x):H)==2?i:g(b,~~i+1) 0-indexed. Throws "ReferenceError: X is not defined" and "InternalError: too much recursion" on bad input. ### Test cases g=(a,i,p,b=[])=>a.map(x=>3>x&0<x?(x==p?b[0]++:b=[1,...b],p=x):H)==2?i:g(b,~~i+1) function wrapper(testcase) { try {console.log(g(testcase))} catch(e) { console.log("Error") } } wrapper([1,1]) wrapper([1,2,2,1,1,2,1,2,2,1]) wrapper([1,2,2,1,1,2,1,2,2,1,2,2,1,1,2,1,1]) wrapper([1,2]) wrapper([4,2,-2,1,0,3928,102904]) wrapper([1,1,1]) wrapper([2,2,1,1,2,1,2]) wrapper([]) wrapper([1]) ## Clojure, 110 bytes #(if-not(#{[][1]}%)(loop[c % i 0](if(every? #{1 2}c)(if(=[2]c)i(recur(map count(partition-by + c))(inc i)))))) A basic loop with a pre-check on edge cases. Returns nil for invalid inputs. I did not know (= [2] '(2)) is true :o # Python 2, 146 bytes (function only) f=lambda l,i=0:i if l==[1]else 0if max(l)>2or min(l)<1else f([len(x)+1for x in"".join(v!=l[i+1][0]for i,v in enumerate(l[:-1])).split("T")],i+1) Returns 0 on falsy input (ok since it's 1-indexed). 
Simply use it like this: print(f([1,2,2,1,1,2,1,2,2,1,2,2,1,1,2,1,1])) ## Mathematica, 82 bytes FixedPointList[#/.{{2}->T,{(1|2)..}:>Length/@Split@#,_->0}&,#]~FirstPosition~T-1& Function which repeatedly replaces {2} with the undefined symbol T, a list of (one or more) 1s and 2s with the next iteration, and anything else with 0 until a fixed point is reached, then returns the FirstPosition of the symbol T in the resulting FixedPointList minus 1. Output is {n} where n is the (1-indexed) number of iterations needed to reach {2} for the truthy case and -1+Missing["NotFound"] for the falsy case. If the output must be n rather than {n}, it costs three more bytes: Position[FixedPointList[#/.{{2}->T,{(1|2)..}:>Length/@Split@#,_->0}&,#],T][[1,1]]-1& # Python 2, 184 163 156 bytes • @Felipe Nardi Batista saved 21 bytes!!!! thanks a lot!!!! • Halvard Hummel saved 7 bytes!! thanks # Python 2, 156 bytes a,c=input(),0 t=a==[] while 1<len(a)and~-t: r,i=[],0 while i<len(a): j=i while[a[j]]==a[i:i+1]:i+=1 r+=[i-j] a=r;c+=1;t=any(x>2for x in a) print~c*t+c Try it online! Explanation: a,c=input(),0 #input and initialize main-counter t=a==[] #set t to 1 if list's empty. while len(a)>1: #loop until list's length is 1. r,i=[],0 #Initialize temp. list and #list-element-pointer while i<len(a): #loop for the element in list j=0 #set consecutive-item-counter to 0 while(i+j)<len(a)and a[i]==a[i+j]:j+=1 #increase the consec.-counter r+=[j];i+=j #add the value to a list, move the #list-element-pointer a=r;c+=1;t=any(x>2for x in a) #update the main list, increase t #the counter, check if any number if t:break; #exceeds 2 (if yes, exit the loop) print[c,-1][t] #print -1 if t or else the #counter's #value # R, 122 bytes a=scan() i=0 f=function(x)if(!all(x%in%c(1,2)))stop() while(length(a)>1){f(a) a=rle(a)$l f(a) i=i+1} if(a==2)i else stop() Passes all test cases. Throws one or more errors otherwise. I hate validity checks; this code could have been so golfed if the inputs were nice; it would be shorter even in case the input were a sequence of 1’s and 2’s, not necessarily a prefix of the Kolakoski sequence. Here, we have to check both the initial vector (otherwise the test case [-2,1]) would have passed) and the resulting vector (otherwise [1,1,1] would have passed). # Python 2, 122 bytes def f(s,c=2,j=0): w=[1] for i in s[1:]:w+=[1]*(i!=s[j]);w[-1]+=i==s[j];j+=1 return(w==[2])*c-({1,2}!=set(s))or f(w,c+1) Try it online! # Python 3, 120 bytes def f(s,c=2,j=0): w=[1] for i in s[1:]:w+=[1]*(i!=s[j]);w[-1]+=i==s[j];j+=1 return(w==[2])*c-({1,2}!={*s})or f(w,c+1) Try it online! # Explanation A new sequence (w) is initialized to store the next iteration of the reduction. A counter (c) is initalized to keep track of the number of iterations. Every item in the original sequence (s) is compared to the previous value. If they are the same, the value of the last item of (w) is increased with 1. If they are different, the sequence (w) is extended with [1]. If w==[2], the counter (c) is returned. Else, if the original sequence (s) contains other items than 1 and 2, a value -1 is returned. If neither is the case, the function is called recursively with the new sequence (w) as (s) and the counter (c) increased by 1. • To save a byte, I'm trying to combine the first two lines into def f(s,c=2,j=0,w=[1]):, but that gives a different result. Could anybody explain why that is? – Jitse Jul 24 '19 at 8:14 • Mutable default arguments will stay mutated – Jo King Jul 24 '19 at 11:02 • @JoKing That makes perfect sense, thanks! 
– Jitse Jul 24 '19 at 11:05 # Stax, 26 bytes äE⌐+É7∩ΦΓyr╔ßΣ·φƒÇe►ef%I» Run and debug it Tried with a generator, but it seems like a while loop is shorter. # R, 93 92 bytes Edit: -1 byte thanks to Giuseppe f=function(x)if(sum(x|1)<2,(-1)^(sum(x)!=2),if(!all(x%in%1:2),-1,(y=f(rle(x)$l))+(y>0))) Try it online! Returns 1-based number of iterations for truthy input, and -1 for all falsy inputs. This needed quite careful input testing, especially for the last two test cases... Commented: f=function(x) # recursive function with argument x if(sum(x|1)<2, # if there's one (or less) element left (-1)^(sum(x)!=2), # return 1 if it's equal to 2, -1 otherwise if(!all(x%in%1:2), # if any element isn't 1 or 2 -1, # return -1 (y=f( # otherwise recursively call self with rle(x)$l)) # lengths of groups of digits in x +(y>0))) # if result is positive, return result +1 # otherwise return result (which must be -1) # R, 74 66 bytes f=function(x)if(sum(x)==2&x==2,1,if(all(x%in%1:2))f(rle(x)$l)+1) Try it online! ...but then I read the comments more carefully and realised that it's Ok to output nothing, or rubbish, or to error for falsy inputs. That makes it much easier! • in your first answer, length(x) should be sum(x|1), and in your second, I think sum(x)==2 can be shortened to sum(x)<3. – Giuseppe Nov 30 '20 at 15:28 • @Giuseppe - I thought I'd tried both of those... doesn't sum(x|1) fail for zero-length vectors, and sum(x)<3 fail for vectors containing negative elements? – Dominic van Essen Nov 30 '20 at 15:44 • I think the sum(x|1) still works for 0-length vectors, but I guess I hadn't thought about negative elements. – Giuseppe Nov 30 '20 at 15:45 • @Giuseppe - You're right re:sum(x|1)! Thanks! I'd made the mistake of using c() for test case 8 (which returns NULL and fails) instead of numeric() to properly make a zero-length vector. – Dominic van Essen Nov 30 '20 at 15:59 # Jelly, 13 bytes ŒɠƬµFfƑ1,2aL’ Try it online! Returns 0 or -1 for falsey cases. +2 bytes to consistently return -1 ## How it works ŒɠƬµFfƑ1,2aL’ - Main link. Takes a list L on the left Ƭ - Until reaching a fixed-point: Œɠ - Take lengths of runs µ - Call this list of lengths W F - Flatten Ƒ - Is this unchanged after: f - Removing everything except: 1,2 - 1s and 2s? L - Yield the length of W (i.e. number of iterations) a - If unchanged, return the length. Else, return 0 ’ - Decrement # Ruby, 81 77 bytes f=->a,i=1{a[1]&&a-[1,2]==[]?f[a.chunk{|x|x}.map{|x,y|y.size},i+1]:a==[2]?i:0} Try it online! Edit: Saved 4 bytes by converting to recursive lambda. Returns 1-indexed number of iterations or 0 as falsey. Makes use of Ruby enumerable's chunk method, which does exactly what we need - grouping together consecutive runs of identical numbers. The lengths of the runs constitute the array for the next iteration. Keeps iterating while the array is longer than 1 element and no numbers other than 1 and 2 have been encountered. # Pyth, 45 bytes L?||qb]1!lb-{b,1 2_1?q]2b1Z.V0IKy~QhMrQ8*KhbB Try it online! This is probably still golfable. It's definitely golfable if .? worked the way I hoped it would (being an else for the innermost structure instead of the outermost) L?||qb]1!lb-{b,1 2_1?q]2b1Z # A lambda function for testing an iteration of the shortening L # y=lambda b: ? 
# if qb]1 # b == [1] | !lb # or !len(b) | {b # or b.deduplicate() - ,1 2 # .difference([1,2]): _1 # return -1 ?q]2b1Z # else: return 1 if [2] == b else Z (=0) .V0 # for b in range(0,infinity): IKy~Q # if K:=y(Q := (applies y to old value of Q) hM # map(_[0], rQ8 # run_length_encode(Q)): *Khb # print(K*(b+1)) B # break # Perl 5-p, 71 bytes $_.=$";s/(. )\1*/$&=~y|12||.$"/ge&$.++while/^([12] ){2,}$/;$_=/^2$/*\$. Try it online! 1-indexed. Outputs 0 for falsy. # Scala 3 (compile-time), 460 bytes This may be the longest answer here, but it's also the fastest one here...because it works at compile time. import compiletime.ops.int._ type r[T,P]=T match{case h*:t=>r[t,h*:P]case E=>P} type E=EmptyTuple type F[K,R,L,S,P,I]=K match{case E=>L-S match{case 0=>R match{case E=>0 case _=>F[r[R,E],E,0,0,0,I+1]}case _=>0}case P*:_=>0 case 1*:1*:t=>F[t,2*:R,L+2,S+2,1,I]case 1*:t=>(t,R)match{case(E,E)=>0 case _=>F[t,1*:R,L+1,S+1,1,I]}case 2*:2*:t=>F[t,2*:R,L+2,S+2,2,I]case 2*:t=>(t,R)match{case(E,E)=>I+1 case _=>F[t,1*:R,L+1,S+1,2,I]}case _=>0} type G[K]=F[K,E,0,0,0,0] Try it in online! You can invoke it as G[1 *: 2 *: EmptyTuple] (actually, just G[(1, 2)] should also work, but for whatever reason, it's not working right now. I'll fix it later, though). It's 1-indexed, and it returns 0 for falsy inputs.
# History of macroeconomic thought 1st row: Quantity theorist Irving Fisher, John Maynard Keynes, Neo-Keynesian Franco Modigliani 2nd row: Neo-Keynesian Robert Solow, monetarist Milton Friedman, monetarist Anna Schwartz 3rd row: New classical Thomas J. Sargent, new Keynesian Stanley Fischer, real business cycle theorist Edward C. Prescott Macroeconomic theory has its origins in the study of business cycles and monetary theory.123 John Maynard Keynes attacked some of these earlier theories and produced a general theory of the economy that described the whole economy in terms of aggregates instead of looking at individual, microeconomic parts. Keynes attempted to explain unemployment and recessions. He argued that the tendency for people and businesses to hoard cash and avoid investment during a recession invalidated the assumptions of earlier, "classical" economists who thought markets always clear, leaving no surplus of goods and no willing labor left idle.4 A generation of economists following Keynes synthesized his theory with neoclassical microeconomics to form the neoclassical synthesis. Keynesian theory originally omitted a theory of price levels and inflation. Later Keynesians adopted the Phillips curve to model price level changes. Some Keynesian economists opposed the synthesis method of combining Keynes's theory with an equilibrium system and advocated using disequilibrium models instead. Monetarists, led by Milton Friedman, adopted some Keynesian ideas, such as the importance of the demand for money, but argued that Keynesians ignored the money supply's role in inflation.5 Robert Lucas and other new classical macroeconomists criticized Keynesian models that did not work under rational expectations. Lucas also argued that Keynesian empirical models would not be as stable as models based on microeconomic theories. The new classical school culminated in real business cycle theory (RBC). Like classical economic models, RBC models assumed that markets clear and business cycles are driven by changes in technology and supply, not demand. New Keynesians tried to address many of the criticisms. They built models with microfoundations of sticky prices that suggested recessions could still be explained by demand factors because price rigidities stop prices from falling to a market clearing level, leaving a surplus of goods and labor. The new neoclassical synthesis combined elements of both new classical and new Keynesian macroeconomics into a consensus. 
Other economists avoided the new classical and new Keynesian debate on short-term dynamics and developed the new growth theories of long-run economic growth.6 ## Origins Early monetary theorists Alfred Marshall, Arthur Cecil Pigou, and Keynes were based at University of Cambridge.7 Pigou and Keynes were associated with the constituent King's College (chapel shown above).8 Macroeconomics descends from two areas of research: Business cycle theory and monetary theory.12 Monetary theory dates back to the 16th century and the work of Martín de Azpilcueta, while business cycle analysis dates from the mid 19th.2 Beginning with William Jevons and Clément Juglar in the 1860s,9 economists attempted to describe the cyclical activity of frequent, violent shifts in economic activity.10 The foundation of the National Bureau of Economic Research by Wesley Mitchell in 1920 marked the beginning of a boom in atheoretical statistical models of economic fluctuation that led to the discovery of apparently regular economic patterns like the Kuznets wave.11 Other economists focused more on theory in their business cycle analysis. Most business cycle theories focused on a single factor,10 such as monetary policy or the impact of weather on the largely agricultural economies of the time.9 Business cycle theory was well established by the 1920s. However, work by business cycle theorists such as Dennis Robertson and Ralph Hawtrey had little impact on public policy.12 Their partial equilibrium theories could not capture general equilibrium, where markets interact with each other; in particular, early business cycle theories treated goods markets and financial markets separately.10 Research in these areas used microeconomic methods to explain employment, price level, and interest rates.13 ### Monetary theory Initially, the relationship between price level and output was explained by the quantity theory of money.14 David Hume presented a quantity theory in his 1752 work Of Money.14 The quantity theory viewed the entire economy through Say's law, which stated that whatever is supplied to the market will be sold: that markets always clear.4 In this view, money is neutral and cannot impact the real factors in an economy like output levels. This was consistent with the classical dichotomy view that real aspects of the economy and nominal factors, such as price levels and money supply, can be considered independent from one another.15 For example, adding more money to an economy would be expected only to raise prices, not to create more goods.16 The quantity theory of money dominated macroeconomic theory until the 1930s.14 Two versions were particularly influential, one developed by Irving Fisher in works that included his 1911 The Purchasing Power of Money and another by Cambridge economists over the course of the early 20th century.14 Fisher's version of the quantity theory can be expressed by holding money velocity (the frequency with which a given piece of currency is used in transactions) (V) and real income (Q) constant and allowing money supply (M) and the price level (P) to vary in the equation of exchange:17 $M\cdot V = P\cdot Q$ Most classical theories, including Fisher's, held that velocity was stable and independent of economic activity.18 Cambridge economists, including Keynes, began to challenge this assumption. They developed the Cambridge cash balance theory, which looked at money demand and how it impacted the economy. 
The Cambridge theory did not assume money demand and supply are always at equilibrium, and it accounted for people holding more cash when the economy sagged. By factoring in the value of holding cash, the Cambridge economists took significant steps toward the concept of liquidity preference that Keynes would later develop.19 Cambridge theory argued that people hold money for two reasons: to facilitate transactions and to maintain liquidity. In later work, Keynes added a third motive, speculation, to his liquidity preference theory and built on it to create his general theory.20 Knut Wicksell proposed a monetary theory centered on interest rates. His analysis used two interest rates: the market interest rate, determined by the banking system, and the real or "natural" interest rate, determined by the rate of return on capital.21 In Wicksell's theory, first published in 1898, cumulative inflation will occur when technical innovation causes the natural rate to rise or when the banking system allows the market rate to fall. Cumulative deflation occurs under the opposite conditions causing the market rate to rise above the natural.2 Wicksell's theory did not produce a direct relationship between the quantity of money and price level. According to Wicksell, money would be created endogenously, without an increase in quantity of hard currency, as long as the natural exceeded the market interest rate . In these conditions, borrowers turn a profit and deposit cash into bank reserves, which expands money supply.22 This can lead to a cumulative process where inflation increases continuously without an expansion in the monetary base.22 Wicksell's work influenced Keynes and the Swedish economists of the Stockholm School.22 ## Keynes's General Theory Keynes (right) with Harry Dexter White, assistant secretary of the U.S. Treasury, at a 1946 International Monetary Fund meeting Modern macroeconomics can be said to have begun with Keynes and the publication of his book The General Theory of Employment, Interest and Money in 1936.23 Keynes expanded on the concept of liquidity preferences and built a general theory of how the economy worked. Keynes's theory was the first to bring together both monetary and real economic factors,10 explain unemployment and recessions, and provide a potential model for achieving economic stability. Keynes contended that economic output is positively correlated with money velocity.24 He explained the relationship via changing liquidity preferences:25 people increase their money holdings in bad economic times, by reducing their spending, further slowing the economy. This paradox of thrift claimed that individual attempts to survive a downturn only worsen it. When the demand for money increases, money velocity slows. A slow down in economic activities means markets might not clear, leaving excess goods to waste and capacity to idle.26 Turning the quantity theory on its head, Keynes argued that market changes shift quantities rather than prices.27 Keynes replaced the assumption of stable velocity with one of a fixed price-level. 
If spending falls and prices do not, the excess goods reduce the need for workers, increasing unemployment.28 Classical economists had difficulty explaining involuntary unemployment and recessions because they applied Say's Law to the labor market and expected that all those willing to work at the prevailing wage would be employed.29 In Keynes's model, employment and output are driven by aggregate demand, the sum of consumption and investment.30 Investment fluctuates more than consumption, depending on changes in factors including expectations, "animal spirits", and interest rates.30 Keynes argued that fiscal policy could compensate for this volatility. During downturns, government could increase spending to purchase excess goods and employ idle labor.31 Moreover, a multiplier effect increases the effect of this direct spending since newly employed workers would spend their income, which would percolate through the economy, while firms would invest to respond to the increase in demand.25 Keynes's prescription for strong public investment had ties to his interest in uncertainty.32 Keynes had given a unique perspective on statistical inference in A Treatise on Probability, written in 1921, years before his major economic works.33 Keynes thought strong public investment and fiscal policy would counter the negative impacts the uncertainty of economic fluctuations can have on the economy.32 While Keynes's successors paid little attention to the probabilistic aspects of his work, uncertainty may have played a central part in the investment and liquidity preference aspects of the General Theory.32 The exact meaning of Keynes's work has been long debated.34 Even the interpretation of Keynes's policy prescription for unemployment, one of the more explicit parts of The General Theory, has been the subject of debates. Economists and scholars debate whether Keynes intended his advice to be a major policy shift to address a serious problem or a moderately conservative solution to deal with a minor issue.34
## Keynes's successors
Keynes's successors debated the exact formulations, mechanisms, and consequences of the Keynesian model. One group emerged representing the "orthodox" interpretation of Keynes; they combined classical microeconomics with Keynesian thought to produce the "neoclassical synthesis."35 Two camps of Keynesians were critical of the synthesis interpretation of Keynes.
Some of these Keynesian economists focused on the disequilibrium aspects of Keynes's work while the other group took a fundamentalist stance on Keynes and began the heterodox Post Keynesian tradition.36 ### Neoclassical synthesis The generation of economists following Keynes, Neo-Keynesians, created the "neoclassical synthesis" by combining Keynes's macroeconomics with neoclassical microeconomics.37 Neo-Keynesians dealt with two microeconomic issues: First, providing foundations for aspects of Keynesian theory such as consumption and investment, and, second, combining Keynesian macroeconomics with general equilibrium theory.38 (In general equilibrium theory, individual markets interact with one another and an equilibrium price exists if there is perfect competition, no externalities, and perfect information.)3539 Paul Samuelson's Foundations of Economic Analysis (1947) provided much of the microeconomic basis for the synthesis.37 Samuelson's work set the pattern for the methodology used by Neo-Keynesians: economic theories expressed in formal, mathematical models.40 While Keynes's theories prevailed in this period, his successors largely abandoned his informal methodology in favor of Samuelson's.41 The neoclassical synthesis dominated economics from the 1940s until the early 1970s.42 By the mid-1950s, the vast majority of economists had ceased debating Keynesianism and accepted the synthesis view;43 however, room for disagreement remained.44 The synthesis attributed problems with market clearing to sticky prices that failed to adjust to changes in supply and demand.45 Another group of Keynesian economists focused on disequilibrium economics and tried to reconcile the concept of equilibrium with the absence of market clearing.46 ### Neo-Keynesian models IS/LM chart with an upward shift in the IS curve. The chart illustrates how a shift in the IS curve, caused by factors like increased government spending or private investment, will lead to higher output (Y) and increased interest rates (i). In 1937 John Hicksa published an article that incorporated Keynes's thought into a general equilibrium framework47 where the markets for goods and money met in an overall equilibrium.48 Hick's IS/LM (Investment-Savings/Liquidity preference-Money supply) model became the basis for decades of theorizing and policy analysis into the 1960s.49 The model represents the goods market with the IS curve, a set of points representing equilibrium in investment and savings. The money market equilibrium is represented with the LM curve, a set of points representing the equilibrium in supply and demand for money. The intersection of the curves identifies an aggregate equilibrium in the economy50 where there are unique equilibrium values for interest rates and economic output.51 Other economists built on the IS/LM framework. Notably, in 1944, Franco Modiglianib added a labor market. 
Modigliani's model represented the economy as a system with general equilibrium across the interconnected markets for labor, finance, and goods,47 and it explained unemployment with rigid nominal wages.52 Growth had been of interest to classical economists like Adam Smith, but work tapered off during the 19th and early 20th century marginalist revolution when research focused on microeconomics.53 The study of growth revived when Neo-Keynesian economists Roy Harrod and Evsey Domar independently developed the Harrod-Domar model,54 an extension of Keynes's theory to the long-run, an area Keynes had not looked at himself.55 Their models combined Keynes's multiplier with an accelerator model of investment,56 and produced the simple result that growth equaled the savings rate divided by the capital output ratio (the amount of capital divided by the amount of output).57 The Harrod-Domar model dominated growth theory until Robert Solowc and Trevor Swand independently developed neoclassical growth models in 1956.54 The Harrod-Domar model faced a major weakness since it suggested an unstable, "knife's edge" growth equilibrium. For the model to stay at equilibrium, both capital and labor would have to grow at the same rate; otherwise, the economy could spiral downward with lower output and increasing unemployment. This model did not match empirical data where observed swings in output were much less dramatic than those predicted.58 Solow and Swan produced a more empirically appealing model with "balanced growth" based on the substitution of labor and capital in production.59 Solow and Swan suggested that increased savings could only temporarily increase growth and only technological improvements could increase growth in the long-run.60 After Solow and Swan, growth research tapered off with little or no research on growth from 1970 until 1985.54 Economists incorporated the theoretical work from the synthesis into large-scale macroeconometric models that combined individual equations for factors such as consumption, investment, and money demand61 with empirically observed data.62 This line of research reached its height with the MIT-Penn-Social Science Research Council (MPS) model developed by Modigliani and his collaborators.61 MPS combined IS/LM with other aspects of the synthesis including Robert Solow's growth model63 and the Phillips curve relation between inflation and output.64 Both large-scale models and the Phillips curve became targets for critics of the synthesis.
### Phillips curve
The US economy in the 1960s followed the Phillips curve, a correlation between inflation and unemployment. Keynes did not lay out an explicit theory of price level.65 Early Keynesian models assumed wage and other price levels were fixed.66 These assumptions caused little concern in the 1950s when inflation was stable, but by the mid-1960s inflation increased and became an issue for macroeconomic models.67 In 1958 A.W. Phillipse set the basis for a price level theory when he made the empirical observation that inflation and unemployment seemed to be inversely related. In 1960 Richard Lipseyf provided the first theoretical explanation of this correlation. Generally Keynesian explanations of the curve held that excess demand drove high inflation and low unemployment while an output gap raised unemployment and depressed prices.68 In the late 1960s and early 1970s, the Phillips curve faced attacks on both empirical and theoretical fronts.
The presumed trade-off between output and inflation represented by the curve was the weakest part of the Keynesian system.69 ### Disequilibrium macroeconomics Some Keynesians placed less importance on price rigidities and continued to emphasize uncertainty, imperfect competition, and other possible sources of business cycles and unemployment. This line of research led to the development of disequilibrium models. In the neoclassical synthesis, equilibrium models were the rule. In these models, rigid wages modeled unemployment at equilibria. These models were challenged by Don Patinkin,70 Robert W. Clower (1965)g and Axel Leijonhufvud (1968)h focused on the role of disequilibrium.71 Clower and Leijonhufvud argued that disequilibrium formed a fundamental part of Keynes's theory and deserved greater attention.72 Robert Barro and Herschel Grossman formulated general disequilibrium modelsi in which individual markets were locked into prices before there was a general equilibrium. These markets produced "false prices" resulting in disequilibrium.73 Soon after the work of Barro and Grossman, disequilibrium models fell out of favor in the United States,747576 and Barro abandoned Keynesianism and adopted new classical, market-clearing hypotheses.77 Diagram based on Malinvaud's typology of unemployment shows curves for equilibrium in the goods and labor markets given wage and price levels. Walrasian equilibrium is achieved when both markets are at equilibrium. According to Malinvaud the economy is usually in a state of either Keynesian unemployment, with excess supply of goods and labor, or classical unemployment, with excess supply of labor and excess demand for goods.78 In France, Jean-Pascal Bénassy (1975)j and Yves Younès (1975)k studied macroeconomic models with fixed prices. Disequilibrium economics received greater research as mass unemployment returned to Western Europe in the 1970s. European economists such as Edmond Malinvaud and Jacques Drèze expanded on the disequilibrium tradition and worked to explain price rigidity instead of simply assuming it.79 Malinvaud (1977)l used disequilibrium analysis to develop a theory of unemployment.80 He argued that disequilibrium in the labor and goods markets could lead to rationing of goods and labor, leading to unemployment.81 Malinvaud adopted a fixprice framework and argued that pricing would be rigid in modern, industrial prices compared to the relatively flexible pricing systems of raw goods that dominate agricultural economies.81 Prices are fixed and only quantities adjust.82 Malinvaud considers an equilibrium state in classical and Keynesian unemployment as most likely.83 Work in the neoclassical tradition is confined as a special case of Malinvaud's typology, the Walrasian equilibrium. In Malinvaud's theory, reaching the Walrasian equilibrium case is almost impossible to achieve given the nature of industrial pricing.84 ## Monetarism Milton Friedman developed an alternative to Keynesian macroeconomics eventually labeled monetarism. Generally monetarism is the idea that the supply of money matters for the macroeconomy.85 When monetarism emerged in the 1950s and 1960s, Keynesians neglected the role money played in inflation and the business cycle, and monetarism directly challenged those points.5 ### Criticizing and augmenting the Phillips curve The Phillips curve appeared to reflect a clear, inverse relationship between inflation and output. 
The curve broke down in the 1970s as economies suffered simultaneous economic stagnation and inflation known as stagflation. The empirical implosion of the Phillips curve followed attacks mounted on theoretical grounds by Friedman and Edmund Phelps. Phelps, although not a monetarist, argued that only unexpected inflation or deflation impacted employment. Variations of Phelps's "expectations-augmented Phillips curve" became standard tools. Friedman and Phelps used models with no long-run trade-off between inflation and unemployment. Instead of the Phillips curve they used models based on the natural rate of unemployment where expansionary monetary policy can only temporarily shift unemployment below the natural rate. Eventually, firms will adjust their prices and wages for inflation based on real factors, ignoring nominal changes from monetary policy. The expansionary boost will be wiped out.86 ### Importance of money Anna Schwartz collaborated with Friedman to produce one of monetarism's major works: A Monetary History of the United States (1963) that linked money supply to the business cycle.87 The Keynesians of the 1950s and 60s had adopted the view that monetary policy does not impact aggregate output or the business cycle. They based this belief on evidence that, during the Great Depression, interest rates had been extremely low but output remained depressed.88 Friedman and Schwartz argued that Keynesians missed the relationship by only looking at nominal rates and neglecting the role inflation plays in real interest rates, which had been high during much of the Depression and monetary policy had effectively been contractionary.89 Friedman developed his own quantity theory of money that referred to Irving Fisher's but inherited much from Keynes.90 Friedman's 1956 "The Quantity Theory of Money: A Restatement"m incorporated Keynes's demand for money and liquidity preference into an equation similar to the classical equation of exchange.91 Friedman's updated quantity theory also allowed for the possibility of using monetary or fiscal policy to remedy a major downturn.92 Friedman broke with Keynes by arguing that money demand is relatively stable—even during a downturn.91 Friedman and other monetarists argued that "fine-tuning" through fiscal and monetary policy is counterproductive. They found money demand to be stable even during fiscal policy shifts,93 and that both fiscal and monetary policies suffer from lags that made them too slow to prevent mild downturns.94 ### Prominence and decline Money velocity had been stable and grew consistently until around 1980 (green). After 1980 (blue), money velocity became erratic and the monetarist assumption of stable money velocity was called into question.95 Monetarism attracted the attention of policy makers in the late 1970s and 1980s. Friedman and Phelps's version of the Phillips curve performed better during stagflation and gave monetarism a boost in credibility.96 By the mid-1970s monetarism had become the new orthodoxy in macroeconomics,97 and by the late–1970s central banks in the United Kingdom and United States had largely adopted a monetarist policy of targeting money supply instead of interest rates when setting policy.98 However, targeting monetary aggregates proved difficult for central banks because of measurement difficulties.99 Monetarism faced a major test when Paul Volcker took over the Federal Reserve Chairmanship in 1979. Volcker tightened the money supply and brought inflation down, creating a severe recession in the process. 
The recession lessened monetarism's popularity, but clearly demonstrated the importance of money supply in the economy.5 Monetarism became less credible when once-stable money velocity defied monetarist predictions and began to move erratically in the United States during the early 1980s.95 Monetarist methods of single-equation models and non-statistical analysis of plotted data also lost out to the simultaneous-equation modeling favored by Keynesians.100 Monetarism's policies and method of analysis lost influence among central bankers and academics, but its core tenets of the long-run neutrality of money (increases in money supply cannot have long-term effects on real variables, such as output) and use of monetary policy for stabilization became a part of the macroeconomic mainstream even among Keynesians.599
## New classical economics
Much of new classical research was conducted at the University of Chicago. Following monetarism, a further challenge to the Keynesian paradigm came in the mid-1970s from the closely related school of "new classical economics." New classical economics and monetarism share a basis in classical economics,101 and monetarism has been labeled as the "first wave" of new classical economics.102 However, the schools had clear differences. New classical economists did not share the monetarist belief that monetary policy could systematically impact the economy.103 They also broke with Keynesian economic theory completely while the monetarists built on Keynesian ideas.104 While ignoring Keynesian theory, new classical economists did share the Keynesian focus on explaining short-run fluctuations. New classical economists replaced monetarists as Keynes's primary opponents and changed the primary debate in macroeconomics from whether to look at short-run fluctuations to whether macroeconomic models should be grounded in microeconomic theories.105 Like monetarism, new classical economics was rooted at the University of Chicago, principally with Robert Lucas. Other leaders in the development of new classical economics include Finn Kydland at Carnegie Mellon, Edward Prescott at University of Minnesota, and Robert Barro at Harvard. New classical economists wrote that earlier macroeconomic theory was based only tenuously on microeconomic theory and described its efforts as providing "microeconomic foundations for macroeconomics." New classical economists also introduced rational expectations, and they argued that governments had little ability to stabilize the economy given the rational expectations of economic agents. Most controversially,106 new classical economists revived the market-clearing assumption, assuming both that prices are flexible and that the market should be modeled at equilibrium.106
### Rational expectations and policy irrelevance
John Muth first proposed rational expectations when he criticized the cobweb model (example above) of agricultural prices. Muth showed that agents making decisions based on rational expectations would be more successful than those who made their estimates based on adaptive expectations, which could lead to the cobweb situation above where decisions about producing quantities (Q) lead to prices (P) spiraling out of control away from the equilibrium of supply (S) and demand (D).107108 Keynesians and monetarists recognized that people based their economic decisions on expectations about the future.
However, until the 1970s, most models relied on adaptive expectations, which assumed that expectations were based on an average of past trends.109 For example, if inflation averaged 4% over a period, economic agents were assumed to expect 4% inflation the following year.109 In 1972 Lucas,n influenced by a 1961 agricultural economics paper by John Muth,o introduced rational expectations to macroeconomics.110 Essentially, adaptive expectations modeled behavior as if it were backward-looking while rational expectations modeled economic agents (consumers, producers and investors) who were forward-looking.111 New classical economists also claimed that an economic model would be internally inconsistent if it assumed that the agents it models behave as if they were unaware of the model.112 Under the assumption of rational expectations, models assume agents make predictions based on the optimal forecasts of the model itself.109 This did not imply that people have perfect foresight,113 but that they act with an informed understanding of economic theory and policy.114 Thomas Sargent and Neil Wallace (1975)p applied rational expectations to models with Phillips curve trade-offs between inflation and output and found that monetary policy could not be used to systematically stabilize the economy. Sargent and Wallace's policy ineffectiveness proposition found that economic agents would anticipate inflation and adjust to higher price levels before the influx of monetary stimulus could boost employment and output.115 Only unanticipated monetary policy could increase employment, and no central bank could systematically use monetary policy for expansion without economic agents catching on and anticipating price changes before they could have a stimulative impact.116 Robert E. Hallq applied rational expectations to Friedman's permanent income hypothesis that people base the level of their current spending on their wealth and lifetime income rather than current income.117 Hall found that people will smooth their consumption over time and only alter their consumption patterns when their expectations about future income change.118 Both forms of the permanent income hypothesis challenged the Keynesian view that short-term stabilization policies like tax cuts can stimulate the economy.117 The permanent income view suggests that consumers base their spending on wealth and not income, so a temporary boost in income would only produce a moderate increase in consumption.117 Empirical tests of Hall's hypothesis suggest it may understate boosts in consumption due to income increases;119 however, Hall's work helped to popularize Euler equation models of consumption.119

### The Lucas critique and microfoundations

In 1976 Lucas wrote a paperr criticizing large-scale Keynesian models used for forecasting and policy evaluation. Lucas argued that economic models based on empirical relationships between variables are unstable as policies change: a relationship under one policy regime may be invalid after the regime changes.112 The Lucas critique went further and argued that a policy's impact is determined by how the policy alters the expectations of economic agents.
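The logic can be sketched with the expectations-augmented Phillips curve introduced earlier. As an added illustration in standard textbook notation (not an equation taken from Lucas's paper):

$$\pi_t = \pi^e_t - \alpha\,(u_t - u^n) + \varepsilon_t,$$

where $\pi_t$ is inflation, $\pi^e_t$ expected inflation, $u_t$ unemployment, and $u^n$ the natural rate. Under adaptive expectations — say $\pi^e_t = \pi_{t-1}$ — historical data trace out an apparently exploitable trade-off between inflation and unemployment. If policy makers try to exploit that trade-off systematically, however, agents with rational expectations fold the new policy rule into $\pi^e_t$, the estimated relationship shifts, and the trade-off disappears.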
No model is stable unless it accounts for expectations and how expectations relate to policy.120 New classical economists argued that abandoning the disequilibrium models of Keynesianism and focusing on structure- and behavior-based equilibrium models would remedy these faults.121 Keynesian economists responded by building models with microfoundations grounded in stable theoretical relationships.122 ### Lucas supply theory and business cycle models Lucas and Leonard Rappings laid out the first new classical approach to aggregate supply in 1969. Under their model, changes in employment are based on worker preferences for leisure time. Lucas and Rapping modeled decreases in employment as voluntary choices of workers to reduce their work effort in response to the prevailing wage.123 Lucas (1973)t proposed a business cycle theory based on rational expectations, imperfect information, and market clearing. While building this model, Lucas attempted to incorporate the empirical fact that there had been a trade-off between inflation and output without ceding that money was non-neutral in the short-run.124 This model included the idea of money surprise: monetary policy only matters when it causes people to be surprised or confused by the price of goods changing relative to one another.125 Lucas hypothesized that producers become aware of changes in their own industries before they recognize changes in other industries. Given this assumption, a producer might perceive an increase in general price level as an increase in the demand for his goods. The producer responds by increasing production only to find the "surprise" that prices had increased across the economy generally rather than specifically for his goods.126 This "Lucas supply curve" models output as a function of the "price" or "money surprise," the difference between expected and actual inflation.126 Lucas's "surprise" business cycle theory fell out of favor after the 1970s when empirical evidence failed to support this model.127128 George W. Bush meets Kydland (left) and Prescott (center) at an Oval Office ceremony in 2004 honoring the year's Nobel Laureates. While "money surprise" models floundered, efforts continued to develop a new classical model of the business cycle. A 1982 paper by Kydland and Prescottu introduced real business cycle theory (RBC).129 Under this theory business cycles could be explained entirely by the supply side and represented the economy with systems at constant equilibrium.130 RBC dismissed the need to explain business cycles with price surprise, market failure, price stickiness, uncertainty, and instability.131 Instead, Kydland and Prescott built parsimonious models that explained business cycles with changes in technology and productivity.127 Employment levels changed because these technological and productivity changes altered the desire of people to work.127 RBC rejected the idea of high involuntary unemployment in recessions127 and not only dismissed the idea that money could stabilize the economy but also the monetarist idea that money could destabilize it.132 Real business cycle modelers sought to build macroeconomic models based on microfoundations of Arrow–Debreu133 general equilibrium.134135136137 RBC models were one of the inspirations for dynamic stochastic general equilibrium (DSGE) models. 
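To make the "technology shock" mechanism concrete, the following deliberately stripped-down sketch (an illustrative toy example in Python, not the Kydland–Prescott model itself) simulates a persistent AR(1) productivity process and lets output move with it through a Cobb–Douglas production function; nothing monetary appears anywhere in the model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy parameters (illustrative values, not calibrated to any data set)
rho, sigma = 0.95, 0.01      # persistence and volatility of the technology shock
alpha = 0.33                 # capital share in a Cobb-Douglas production function
K, L = 10.0, 1.0             # capital and labor held fixed for simplicity
T = 200                      # number of simulated quarters

log_A = np.zeros(T)
for t in range(1, T):
    # log A_t = rho * log A_{t-1} + eps_t : persistent productivity shocks
    log_A[t] = rho * log_A[t - 1] + sigma * rng.standard_normal()

A = np.exp(log_A)
Y = A * K**alpha * L**(1 - alpha)   # output fluctuates only because productivity does

print(f"std. dev. of log output: {np.std(np.log(Y)):.4f}")
```

A full RBC model would also let optimizing households choose consumption, labor, and investment, so that hours and capital respond to the shock; the sketch only illustrates that persistent supply-side disturbances by themselves generate business-cycle-like fluctuations in output.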
DSGE models have become a common methodological tool for macroeconomists—even those who disagree with new classical theory.129 ## New Keynesian economics New classical economics had pointed out the inherent contradiction of the neoclassical synthesis: Walrasian microeconomics with market-clearing and general equilibrium could not lead to Keynesian macroeconomics where markets failed to clear. New Keynesians recognized this paradox, but, while the new classicals abandoned Keynes, new Keynesians abandoned Walras and market-clearing.138 During the late 1970s and 1980s, new Keynesian researchers investigated how market imperfections like monopolistic competition, nominal frictions like sticky prices, and other frictions made microeconomics consistent with Keynesian macroeconomics.138 New Keynesians often formulated models with rational expectations, which had been proposed by Lucas and adopted by new classical economists.139 ### Nominal and real rigidities Stanley Fischer (1977)v responded to Thomas J. Sargent and Neil Wallace's monetary ineffectiveness proposition and showed how monetary policy could stabilize an economy even with rational expectations.139 Fischer's model showed how monetary policy could have an impact in a model with long-term nominal wage contracts.140 John B. Taylor expanded on Fischer's work and found that monetary policy could have long lasting effects—even after wages and prices had adjusted. Taylor arrived at this result by building on Fischer's model with the assumptions of staggered contract negotiations and contracts that fixed nominal prices and wage rates for extended periods.140 These early new Keynesian theories were based on the basic idea that, given fixed nominal wages, a monetary authority (central bank) can control the employment rate.141 Since wages are fixed at a nominal rate, the monetary authority can control the real wage (wage values adjusted for inflation) by changing the money supply and thus impact the employment rate.141 By the 1980s new Keynesian economists became dissatisfied with these early nominal wage contract models.142 These models predicted that real wages would be countercyclical (real wages would rise when the economy fell), but empirical evidence showed that real wages tended to be independent of economic cycles or even slightly procyclical.143 These contract models also did not make sense from a microeconomic standpoint since it was unclear why firms would use long-term contracts if they led to inefficiencies.141 Instead of looking for rigidities in the labor market, new Keynesians shifted their attention to the goods market and the sticky prices that resulted from "menu cost" models of price change.142 The term refers to the literal cost to a restaurant of printing new menus when it wants to change prices; however, economists also use it to refer to more general costs associated with changing prices, including the expense of evaluating whether to make the change.142 Since firms must spend money to change prices, they do not always adjust them to the point where markets clear, and this lack of price adjustments can explain why the economy may be in disequilibrium.144 Studies using data from the United States Consumer Price Index confirmed that prices do tend to be sticky. 
A good's price typically changes about every four to six months or, if sales are excluded, every eight to eleven months.145 While some studies suggested that menu costs are too small to have much of an aggregate impact, Laurence Ball and David Romer (1990)w showed that real rigidities could interact with nominal rigidities to create significant disequilibrium. Real rigidities occur whenever a firm is slow to adjust its real prices in response to a changing economic environment. For example, a firm can face real rigidities if it has market power or if its costs for inputs and wages are locked-in by a contract.146147 Ball and Romer argued that real rigidities in the labor market keep a firm's costs high, which makes firms hesitant to cut prices and lose revenue. The expense created by real rigidities combined with the menu cost of changing prices makes it less likely that firm will cut prices to a market clearing level.144 ### Coordination failure In this model of coordination failure, a representative firm ei makes its output decisions based on the average output of all firms (ē). When the representative firm produces as much as the average firm (ei=ē), the economy is at an equilibrium represented by the 45 degree line. The curve represents the output for the individual firm given the output of other firms, and it intersects with the equilibrium line at three equilibrium points. The firms could coordinate and produce at the optimal level of point B, but, without coordination, firms might produce at a less efficient equilibrium.148149 Coordination failure is another potential explanation for recessions and unemployment.150 In recessions a factory can go idle even though there are people willing to work in it, and people willing to buy its production if they had jobs. In such a scenario, economic downturns appear to be the result of coordination failure: The invisible hand fails to coordinate the usual, optimal, flow of production and consumption.151 Russell Cooper and Andrew John (1988)x expressed a general form of coordination as models with multiple equilibria where agents could coordinate to improve (or at least not harm) each of their respective situations.152 Cooper and John based their work on earlier models including Peter Diamond's (1982)y coconut model,153 which demonstrated a case of coordination failure involving search and matching theory.154 In Diamond's model producers are more likely to produce if they see others producing. The increase in possible trading partners increases the likelihood of a given producer finding someone to trade with. As in other cases of coordination failure, Diamond's model has multiple equilibria, and the welfare of one agent is dependent on the decisions of others.155 Diamond's model is an example of a "thick-market externality" that causes markets to function better when more people and firms participate in them.156 Other potential sources of coordination failure include self-fulfilling prophecies. If a firm anticipates a fall in demand, they might cut back on hiring. A lack of job vacancies might worry workers who then cut back on their consumption. This fall in demand meets the firm's expectations, but it is entirely due to the firm's own actions.152 ### Labor market failures New Keynesians offered explanations for the failure of the labor market to clear. 
In a Walrasian market, unemployed workers bid down wages until the demand for workers meets the supply.157 If markets are Walrasian, the ranks of the unemployed would be limited to workers transitioning between jobs and workers who choose not to work because wages are too low to attract them.158 New Keynesians developed several theories explaining why markets might leave willing workers unemployed.159 Of these theories, new Keynesians were especially associated with efficiency wages and with the insider-outsider model used to explain the long-term effects of previous unemployment,160 in which short-term increases in unemployment become permanent and lead to higher levels of unemployment in the long run.161

#### Efficiency wages

In the Shapiro-Stiglitz model workers are paid at a level where they do not shirk. This prevents wages from dropping to market clearing levels. Full employment cannot be achieved because workers would slack off if they were not threatened with the possibility of unemployment. The curve for the no-shirking condition (labeled NSC) goes to infinity at full employment.

In efficiency wage models, workers are paid at levels that maximize productivity instead of clearing the market.162 For example, in developing countries, firms might pay more than a market rate to ensure their workers can afford enough nutrition to be productive.163 Firms might also pay higher wages to increase loyalty and morale, possibly leading to better productivity.164 Firms can also pay higher than market wages to forestall shirking.164 Shirking models were particularly influential.165 Carl Shapiro and Joseph Stiglitz (1984)z created a model where employees tend to avoid work unless firms can monitor worker effort and threaten slacking employees with unemployment.166 If the economy is at full employment, a fired shirker simply moves to a new job.167 Individual firms pay their workers a premium over the market rate to ensure their workers would rather work and keep their current job instead of shirking and risking having to move to a new job. Since each firm pays more than market clearing wages, the aggregated labor market fails to clear. This creates a pool of unemployed laborers and adds to the expense of getting fired. Workers not only risk a lower wage, they risk being stuck in the pool of unemployed. Keeping wages above market clearing levels creates a serious disincentive to shirk that makes workers more efficient even though it leaves some willing workers unemployed.166

#### Insider-outsider model

Economists became interested in hysteresis when unemployment levels spiked with the 1979 oil shock and early 1980s recessions but did not return to the lower levels that had been considered the natural rate.168 Olivier Blanchard and Lawrence Summers (1986)aa explained hysteresis in unemployment with insider-outsider models, which were also proposed by Assar Lindbeck and Dennis Snower in a series of papers and then a book.ab Insiders, employees already working at a firm, are only concerned about their own welfare. They would rather keep their wages high than cut pay and expand employment. The unemployed, outsiders, do not have any voice in the wage bargaining process, so their interests are not represented. When unemployment increases, the number of outsiders increases as well.
Even after the economy has recovered, outsiders continue to be disenfranchised from the bargaining process.169 The larger pool of outsiders created by periods of economic retraction can lead to persistently higher levels of unemployment.169 The presence of hysteresis in the labor market also raises the importance of monetary and fiscal policy. If temporary downturns in the economy can create long term increases in unemployment, stabilization policies do more than provide temporary relief; they prevent short term shocks from becoming long term increases in unemployment.170 ## New growth theory Empirical evidence showed that growth rates of low income countries varied widely instead of converging to a uniform income level.171 Following research on the neoclassical growth model in the 1950s and 1960s, little work on economic growth occurred until 1985.54 Papers by Paul Romeracad were particularly influential in igniting the revival of growth research.172 Beginning in the mid-1980s and booming in the early 1990s many macroeconomists shifted their focus to the long-run and started "new growth" theories, including endogenous growth.173172 Growth economists sought to explain empirical facts including the failure of sub-Saharan Africa to catch up in growth, the booming East Asian Tigers, and the slowdown in productivity growth in the United States prior to the technology boom of the 1990s.174 Three families of new growth models challenged neo-classical models.175 The first challenged the assumption of previous models that the economic benefits of capital would decrease over time. These early new growth models incorporated positive externalities to capital accumulation where one firm's investment in technology generates spillover benefits to other firms because knowledge spreads.176 The second focused on the role of innovation in growth. These models focused on the need to encourage innovation through patents and other incentives.177 A third set, referred to as the "neoclassical revival", expanded the definition of capital in exogenous growth theory to include human capital.178 This strain of research began with Mankiw, Romer, and Weil (1992),ae which showed that 78% of the cross-country variance in growth could be explained by a Solow model augmented with human capital.179 Endogenous growth theories implied that countries could experience rapid "catch-up" growth through an open society that encouraged the inflow of technology and ideas from other nations.180 Endogenous growth theory also suggested that governments should intervene to encourage investment in research and development because the private sector might not invest at optimal levels.180 ## New synthesis Based on the DSGE model in Christiano, Eichenbaum, and Evans (2005),af impulse response functions show the effects of a one standard deviation monetary policy shock on other economic variables over 20 quarters. A new synthesis, called the "new neoclassical synthesis" or simply the "new synthesis," emerged in the 1990s. 
This synthesis used ideas from both the new Keynesian and new classical schools.181 From the new classical school, it adapted RBC hypotheses, including rational expectations, and methods;182 from the new Keynesian school, it took nominal rigidities (price stickiness)150 and other market imperfections.183 New synthesis theory has developed RBC models called dynamic stochastic general equilibrium (DSGE) models.184 DSGE models formulate hypotheses about the behaviors and preferences of firms and households; numerical solutions of the resulting DSGE models are computed.185 These models also include a "stochastic" element created by shocks to the economy. In the original RBC models these shocks were limited to technological change, but more recent models have incorporated other real changes.186 DSGE models have another theoretical advantage: they avoid the Lucas critique.187 The new synthesis was adopted by academic economists and soon by policy makers, such as central bankers.150 Econometric analysis of DSGE models suggested that real factors sometimes affect the economy. A paper by Frank Smets and Rafael Wouters (2007)ag stated that monetary policy explained only a small part of the fluctuations in economic output.188 In new synthesis models, shocks can affect both demand and supply.189 The new synthesis also implies that monetary policy can have a stabilizing effect on the economy, contrary to new classical theory.189190 Under the synthesis, debates have become less ideological and more methodological.191 Business cycle modelers can be broken into two camps: those in favor of calibration and those in favor of estimation.191 When models are calibrated, the modeler selects parameter values based on other studies or casual empirical observation.192 Instead of using statistical diagnostics to evaluate models, calibrators compare the model's simulated operating characteristics with features of the actual data to judge the quality of the model.193 Kydland and Prescott (1982) offered no formal evaluation of their model, but noted how variables like hours worked did not match real data well while the variances of other elements of the model did.194 When estimation methods are used, models are evaluated based on standard statistical goodness of fit criteria.195 Calibration is generally associated with real business cycle modelers of the new classical school; however, while Lucas, Prescott, and Kydland are calibration advocates, Sargent favors estimation.195

## Financial crisis and the breakdown of consensus

The 2007–2008 financial crisis and subsequent Great Recession challenged macroeconomic theory. Few economists predicted the crisis, and, even afterwards, there was great disagreement on how to address it.196 The new synthesis consensus broke down as economists debated policy responses to deal with the deep recession. The new synthesis formed during the Great Moderation and had not been tested in a severe economic environment.197 Many economists agree that the crisis stemmed from an economic bubble, but neither of the major macroeconomic schools had paid much attention to finance or a theory of asset bubbles:196 how they form, how they can be recognized, and how they can be prevented. The failures of current economic theory to deal with the crisis spurred economists to reevaluate their thinking.198 Commentary ridiculed the mainstream and proposed a major reassessment.199 Elements of modern macroeconomic consensus were criticized following the financial crisis. Robert Solow testified before the U.S.
Congress that DSGE models had "nothing useful to say about antirecession policy" because the conclusion that macroeconomic policy is impotent is built into the "essentially implausible assumptions" behind the model.200 Solow also criticized DSGE models for frequently assuming that a single, "representative agent" can represent the complex interaction of the many diverse agents that make up the real world.201 Robert Gordon criticized much of macroeconomics after 1978. Gordon called for a renewal of disequilibrium theorizing and disequilibrium modeling. He disparaged both new classical and new Keynesian economists who assumed that markets clear; he called for a renewal of economic models that could include both market-clearing and sticky-priced goods, such as oil and housing respectively.202 While criticizing DSGE models, Ricardo J. Caballero argued that recent work in finance showed progress and suggested that modern macroeconomics needed to be re-centered but not scrapped.203

## Heterodox theories

Heterodox economists adhere to theories sufficiently outside the mainstream to be marginalized204 and treated as irrelevant by the establishment.205 Initially, heterodox economists, including Joan Robinson, worked alongside mainstream economists, but heterodox thinkers increasingly isolated themselves and formed insular groups in the late 1960s and 1970s.206 Present day heterodox economists often publish in their own journals rather than those of the mainstream204 and eschew formal modelling in favor of more abstract theoretical work.204 The 2008 financial crisis and subsequent recession highlighted limitations of existing macroeconomic theories, models, and econometrics. The popular press discussed Post Keynesian economics207 and Austrian economics, two heterodox traditions that have little influence on mainstream economics.208209

### Post Keynesian economics

While Neo-Keynesians integrated Keynes's ideas with Neoclassical theory, Post Keynesians went in other directions. Post Keynesians opposed the neoclassical synthesis and shared a fundamentalist interpretation of Keynes that sought to develop economic theories without classical elements.210 The core of Post Keynesian belief is the rejection of three axioms that are central to classical and mainstream Keynesian views: the neutrality of money, gross substitution, and the ergodic axiom.211212 Post Keynesians not only reject the neutrality of money in the short-run, they also see money as an important factor in the long-run,211 a view other Keynesians dropped in the 1970s. Gross substitution implies that goods are interchangeable: relative price changes cause people to shift their consumption in proportion to the change.213 The ergodic axiom asserts that the future of the economy can be predicted based on past and present market conditions. Without the ergodic assumption, agents are unable to form rational expectations, undermining new classical theory.213 In a non-ergodic economy, predictions are very hard to make and decision-making is hampered by uncertainty. Partly because of uncertainty, Post Keynesians take a different stance on sticky prices and wages than new Keynesians. They do not see nominal rigidities as an explanation for the failure of markets to clear.
They instead think sticky prices and long-term contracts anchor expectations and alleviate the uncertainty that hinders efficient markets.214 Post Keynesian economic policies emphasize the need to reduce uncertainty in the economy, including through safety nets and price stability.215212 Hyman Minsky applied Post Keynesian notions of uncertainty and instability to a theory of financial crisis in which investors take on more and more debt until their returns can no longer pay the interest on leveraged assets, triggering a crisis.212 The financial crisis of 2007–2008 brought mainstream attention to Minsky's work.207

### Austrian School

Friedrich Hayek, founder of Austrian business cycle theory

Austrian economics began with Carl Menger's 1871 Principles of Economics. Menger's followers formed a distinct group of economists until around the Second World War, when the distinction between Austrian economics and other schools of thought had largely broken down. The Austrian tradition survived as a distinct school, however, through the works of Ludwig von Mises and Friedrich Hayek. Present day Austrians are distinguished by their interest in earlier Austrian works and abstention from standard empirical methodology including econometrics. Austrians also focus on market processes instead of equilibrium.216 Mainstream economists are generally critical of the school's methodology.217218 Hayek created the Austrian business cycle theory, which synthesizes Menger's capital theory and Mises's theory of money and credit.219 The theory proposes a model of inter-temporal investment in which production plans precede the manufacture of the finished product. The producers revise production plans to adapt to changes in consumer preferences.220 Producers respond to "derived demand," which is estimated demand for the future, instead of current demand. If consumers reduce their spending, producers believe that consumers are saving for additional spending later, so that production remains constant.221 Combined with a market of loanable funds (which relates savings and investment through the interest rate), this theory of capital production leads to a model of the macroeconomy where markets reflect inter-temporal preferences.222 Hayek's model suggests that an economic bubble begins when cheap credit initiates a boom in which resources are misallocated, so that early stages of production receive more resources than they should and overproduction begins; the later stages of capital are not funded for maintenance to prevent depreciation.223 Overproduction in the early stages cannot be processed by poorly maintained later stage capital. The boom becomes a bust when a lack of finished goods leads to "forced saving" since fewer finished goods can be produced for sale.223

## Notes

1. ^ Hicks, J. R. (April 1937). "Mr. Keynes and the "Classics"; A Suggested Interpretation". Econometrica 5 (2): 147–159. doi:10.2307/1907242. JSTOR 1907242. 2. ^ Modigliani, Franco (January 1944). "Liquidity Preference and the Theory of Interest and Money". Econometrica 12 (1): 45–88. doi:10.2307/1905567. JSTOR 1905567. 3. ^ Solow, Robert M. (February 1956). "A Contribution to the Theory of Economic Growth". The Quarterly Journal of Economics (Oxford University Press) 70 (1): 65–94. doi:10.2307/1884513. JSTOR 1884513. 4. ^ Swan, T. W. (1956). "Economic Growth and Capital Accumulation". Economic Record 32 (2): 334–361. doi:10.1111/j.1475-4932.1956.tb00434.x. 5. ^ Phillips, A. W. (November 1958).
"The Relation between Unemployment and the Rate of Change of Money Wage Rates in the United Kingdom, 1861-1957". Economica 25 (100): 283–299. doi:10.2307/2550759. JSTOR 2550759. 6. ^ Lipsey, R.G. (February 1960). "The Relation between Unemployment and the Rate of Change of Money Wage Rates in the United Kingdom, 1862–1957: A Further Analysis". Economica 27 (105): 1–31. doi:10.2307/2551424. JSTOR 2551424. 7. ^ Clower, Robert W. (1965). "The Keynesian Counterrevolution: A Theoretical Appraisal". In Hahn, F. H., F.H.; Brechling, F. P.R. The Theory of Interest Rates. London: Macmillan. 8. ^ Leijonhufvud, Axel (1968). On Keynesian economics and the economics of Keynes : a study in monetary theory. London: Oxford University Press. ISBN 978-0-19-500948-4. 9. ^ Barro, Robert J.; Grossman, Herschel I. (1971). "A General Disequilibrium Model of Income and Employment". American Economic Review 61 (1): 82–93. JSTOR 1910543. 10. ^ Benassy, Jean-Pascal (October 1975). "Neo-Keynesian Disequilibrium Theory in a Monetary Economy". The Review of Economic Studies (Oxford University Press) 42 (4): 50–523. doi:10.2307/2296791. JSTOR 2296791. 11. ^ Younes, Y. (October 1975). "On the Role of Money in the Process of Exchange and the Existence of a Non- Walrasian Equilibrium". The Review of Economic Studies (Oxford University Press) 42 (4): 489–501. doi:10.2307/2296790. JSTOR 2296790. 12. ^ Malinvaud, Edmond (1977). The Theory of Unemployment Reconsidered. Yrjo Jahnsson lectures. Oxford: Basil Blackwell. ISBN 063117690X. 13. ^ Friedman, Milton (1956). "The Quantity Theory of Money: A Restatement". In Friedman, Milton. Studies in the Quantity Theory of Money. Chicago: University of Chicago Press. 14. ^ Lucas, Robert E. (1972). "Expectations and the Neutrality of Money". Journal of Economic Theory 4 (2): 103–123. doi:10.1016/0022-0531(72)90142-1. 15. ^ Muth, John F. (1961). "Rational Expectations and the Theory of Price Movements". Econometrica 29 (3): 315–335. doi:10.2307/1909635. JSTOR 1909635. 16. ^ Sargent, Thomas J.; Wallace, Neil (1975). "'Rational' Expectations, the Optimal Monetary Instrument, and the Optimal Money Supply Rule". Journal of Political Economy 83 (2): 241–54. doi:10.1086/260321. JSTOR 1830921. 17. ^ Hall, Robert E. (1978). "Stochastic Implications of the Life Cycle-Permanent Income Hypothesis: Theory and Evidence". Journal of Political Economy 86 (6): 971–987. doi:10.2307/1840393. JSTOR 1840393. 18. ^ Lucas, Robert (1976). "Econometric Policy Evaluation: A Critique". In Brunner, K.; Meltzer, A. The Phillips Curve and Labor Markets. Carnegie-Rochester Conference Series on Public Policy 1. New York: American Elsevier. pp. 19–46. ISBN 978-0-444-11007-7 19. ^ Lucas, R.E.; Rapping, L.A. (1969). "Real Wages, Employment and Inflation". Journal of Political Economy 77 (5): 721–754. doi:10.1086/259559. JSTOR 1829964. 20. ^ Lucas, R. E. (1973). "Some International Evidence on Output-Inflation Tradeoffs". The American Economic Review 63 (3): 326–334. doi:10.2307/1914364. 21. ^ Kydland, F. E.; Prescott, E. C. (1982). "Time to Build and Aggregate Fluctuations". Econometrica 50 (6): 1345–1370. doi:10.2307/1913386. 22. ^ Fischer, S. (1977). "Long-Term Contracts, Rational Expectations, and the Optimal Money Supply Rule". The Journal of Political Economy 85 (1): 191–205. doi:10.1086/260551. 23. ^ Ball, L.; Romer, D. (1990). "Real Rigidities and the Non-Neutrality of Money". The Review of Economic Studies 57 (2): 183–203. doi:10.2307/2297377. 24. ^ Cooper, R.; John, A. (1988). 
"Coordinating Coordination Failures in Keynesian Models". The Quarterly Journal of Economics 103 (3): 441–463. doi:10.2307/1885539. JSTOR 1885539. 25. ^ Diamond, Peter A. (October 1982). "Aggregate Demand Management in Search Equilibrium". Journal of Political Economy 90 (5): 881–894. doi:10.2307/1837124. 26. ^ Shapiro, C.; Stiglitz, J. E. (1984). "Equilibrium Unemployment as a Worker Discipline Device". The American Economic Review 74 (3): 433–444. doi:10.2307/1804018. 27. ^ Blanchard, O. J.; Summers, L. H. (1986). "Hysteresis and the European Unemployment Problem". NBER Macroeconomics Annual 1: 15–78. doi:10.2307/3585159. 28. ^ Lindbeck, Assar; Snower, Dennis (1988). The insider-outsider theory of employment and unemployment. Cambridge, Massachusetts: MIT Press. ISBN 978-0-262-62074-1. 29. ^ Romer, Paul M. (October 1990). "Endogenous Technological Change". Journal of Political Economy 98 (5): S71–S102. doi:10.2307/2937632. JSTOR 2937632. 30. ^ Romer, Paul M. (October 1986). "Increasing Returns and Long-Run Growth". Journal of Political Economy 94 (5): 1002–1037. doi:10.2307/1833190. JSTOR 1833190. 31. ^ Mankiw, N. Gregory; Romer, David; Weil, David N. (May 1992). "A Contribution to the Empirics of Economic Growth". The Quarterly Journal of Economics 107 (2): 407–437. doi:10.2307/2118477. JSTOR 2118477. 32. ^ Christiano, Lawrence J.; Eichenbaum, Martin; Evans, Charles L. (2005). "Nominal Rigidities and the Dynamic Effects of a Shock to Monetary Policy". Journal of Political Economy 113 (1): 1–45. doi:10.2307.2F426038. JSTOR 426038. 33. ^ Smets, Frank; Wouters, Rafael (2007). "Shocks and Frictions in US Business Cycles: A Bayesian DSGE Approach". American Economic Review 97 (3): 586–606. doi:10.1257/aer.97.3.586. ## Citations 1. ^ a b Blanchard 2000, p. 1377. 2. ^ a b c d 3. ^ For an illuminating family tree of twentieth century macro, see Goodspeed 2012, p. 3. 4. ^ a b Snowdon & Vane 2005, p. 69. 5. ^ a b c d 6. ^ Mankiw 2006, p. 37-38. 7. ^ Froyen 1990, p. 70. 8. ^ Marcuzzo & Roselli 2005, p. 154. 9. ^ a b Dimand 2003, p. 327. 10. ^ a b c d Blanchard 2000, pp. 1378–1379. 11. ^ Dimand 2003, p. 333. 12. ^ Woodford 1999, p. 4. 13. ^ Case & Fair 2006, pp. 400–401. 14. ^ a b c d Snowdon & Vane 2005, p. 50. 15. ^ Harrington 2002, pp. 125–126. 16. ^ Snowdon & Vane 2005, pp. 69–70. 17. ^ Snowdon & Vane 2005, p. 52. 18. ^ Case & Fair 2006, p. 685. 19. ^ Froyen 1990, pp. 70–71. 20. ^ Skidelsky 2003, p. 131. 21. ^ 22. ^ a b c 23. ^ Snowdon & Vane 2005, p. 13. 24. ^ Snowdon & Vane 2005, p. 70. 25. ^ a b Snowdon & Vane 2005, p. 63. 26. ^ Snowdon & Vane 2005, p. 49. 27. ^ Snowdon & Vane 2005, p. 58. 28. ^ 29. ^ Snowdon & Vane 2005, p. 46. 30. ^ a b Snowdon & Vane 2005, p. 59. 31. ^ Froyen 1990, p. 97. 32. ^ a b c 33. ^ Snowdon & Vane 2005, p. 76. 34. ^ a b Snowdon & Vane 2005, p. 55. 35. ^ a b Snowdon & Vane 2005, pp. 70–71. 36. ^ Snowdon & Vane 2005, p. 71. 37. ^ a b 38. ^ Backhouse 1997, p. 43. 39. ^ Romer 1993, p. 5. 40. ^ Backhouse 1997, p. 37. 41. ^ Backhouse 1997, p. 42. 42. ^ Fletcher 2002, p. 522. 43. ^ Snowdon & Vane 2005, p. 101. 44. ^ Skidelsky 2009, pp. 103–104. 45. ^ Skidelsky 2009, p. 104. 46. ^ 47. ^ a b Blanchard 2000, p. 1379. 48. ^ Snowdon & Vane 2005, p. 106. 49. ^ Snowdon & Vane 2005, p. 102. 50. ^ 51. ^ Froyen 1990, p. 173. 52. ^ Fletcher 2002, p. 524. 53. ^ Snowdon & Vane 2005, pp. 585–586. 54. ^ a b c d Snowdon & Vane 2005, p. 586. 55. ^ 56. ^ Snowdon & Vane 2002, p. 316. 57. ^ Snowdon & Vane 2002, p. 316-317. 58. ^ Snowdon & Vane 2002, p. 319. 59. ^ 60. ^ Solow 2002, p. 519. 
61. ^ a b Blanchard 2000, p. 1383. 62. ^ Mankiw 2005, p. 31. 63. ^ Goodfriend & King 1997, p. 234. 64. ^ Goodfriend & King 1997, p. 236. 65. ^ Mishkin 2004, p. 537. 66. ^ Blanchard 2000, p. 1385. 67. ^ Goodfriend & King 1997, pp. 234–236. 68. ^ 69. ^ Manikw 2006, p. 33. 70. ^ Beaud & Dostaler 1997, p. 122. 71. ^ Beaud & Dostaler 1997, pp. 121–123. 72. ^ Tsoulfidis 2010, p. 288. 73. ^ De Vroey 2002, p. 383. 74. ^ Hoover 2003, p. 419. 75. ^ 76. ^ Snowdon & Vane 2005, p. 72. 77. ^ 78. ^ Tsoulfidis 2010, p. 294. 79. ^ Beaud & Dostaler 1997, p. 123. 80. ^ Tsoulfidis 2010, p. 293. 81. ^ a b Tsoulfidis, p. 293. 82. ^ Tsoulfidis, p. 294. 83. ^ Tsoulfidis, p. 295. 84. ^ Tsoulfidis 2010, p. 295. 85. ^ Case & Fair 2006, p. 684. 86. ^ Romer 2006, p. 252. 87. ^ Mishkin 2004, p. 608. 88. ^ Mishkin 2004, pp. 607–608. 89. ^ Mishkin 2004, pp. 607–610. 90. ^ Mishkin 2004, p. 528. 91. ^ a b Mishkin 2004, p. 529. 92. ^ DeLong 2000, p. 86. 93. ^ DeLong 2000, p. 89. 94. ^ Krugman & Wells 2009, p. 893. 95. ^ a b DeLong 2000, p. 91. 96. ^ DeLong 2000, p. 90. 97. ^ Woodford 1999, pp. 18. 98. ^ DeLong 2000, p. 84. 99. ^ a b DeLong 2000, p. 92. 100. ^ Woodford 1999, pp. 18–19. 101. ^ Froyen 1990, p. 331. 102. ^ Mankiw 2006, p. 5. 103. ^ Froyen 1990, p. 333. 104. ^ Froyen 1990, p. 332. 105. ^ Woodford 2009, p. 268. 106. ^ a b Snowdon & Vane 2005, p. 220. 107. ^ Dindo 2007, p. 8. 108. ^ 109. ^ a b c Mishkin 2004, p. 147. 110. ^ Woodford 1999, p. 20. 111. ^ Froyen 1990, p. 335. 112. ^ a b 113. ^ Snowdon & Vane 2005, p. 226. 114. ^ Froyen 1990, pp. 334–335. 115. ^ Mankiw 1990, p. 1649. 116. ^ Snowdon & Vane 2005, pp. 243–244. 117. ^ a b c 118. ^ Mankiw 1990, p. 1651. 119. ^ a b Mankiw 1990, p. 1652. 120. ^ Mishkin 2004, p. 660. 121. ^ Snowdon & Vane 2005, p. 266. 122. ^ Snowdon & Vane 2005, p. 340. 123. ^ Snowdon & Vane 2005, p. 233. 124. ^ Snowdon & Vane 2005, p. 235. 125. ^ Mankiw 2006, p. 6. 126. ^ a b Case & Fair 2006, p. 691. 127. ^ a b c d Mankiw 1990, p. 1653. 128. ^ Hoover 2003, p. 423. 129. ^ a b Mankiw 2006, p. 7. 130. ^ Snowdon & Vane 2005, p. 294. 131. ^ Snowdon & Vane 2005, p. 295. 132. ^ Mankiw 1990, pp. 1653–1654. 133. ^ Hahn & Solow 1997, p. 2. 134. ^ Mark 2001, p. 107. 135. ^ Romer 2005, p. 215. 136. ^ Christiano & Fitzgerald 2001, p. 46n. 137. ^ Mankiw 2006, p. 34. 138. ^ a b Romer 1993, p. 6. 139. ^ a b Mankiw 2006, p. 36. 140. ^ a b Mankiw & Romer 1991, p. 6. 141. ^ a b c Mankiw 1990, p. 1656. 142. ^ a b c Mankiw 1990, p. 1657. 143. ^ Mankiw 1990, pp. 1656–1657. 144. ^ a b Mankiw 1990, p. 1658. 145. ^ Galí 2008, pp. 6–7. 146. ^ Romer 2005, pp. 294–296. 147. ^ Snowdon & Vane 2005, pp. 380–381. 148. ^ Romer 1993, p. 15. 149. ^ Cooper & John 1988, p. 446. 150. ^ a b c 151. ^ Howitt 2002, pp. 140–141. 152. ^ a b Howitt 2002, p. 142. 153. ^ 154. ^ Cooper & John 1988, p. 452. 155. ^ Cooper & John 1988, pp. 452–453. 156. ^ Mankiw & Romer 1991, p. 8. 157. ^ Romer 2006, p. 438. 158. ^ Romer 2006, pp. 437–439. 159. ^ Romer 2006, p. 437. 160. ^ Snowdon & Vane 2005, p. 384. 161. ^ Romer 2005, p. 471. 162. ^ Froyen 1990, p. 357. 163. ^ Romer 2006, p. 439. 164. ^ a b Froyen 1990, p. 358. 165. ^ Romer 2006, p. 448. 166. ^ a b Snowdon & Vane 2005, p. 390. 167. ^ Romer 2006, p. 453. 168. ^ Snowdon & Vane 2005, p. 332. 169. ^ a b Romer 2006, p. 468. 170. ^ Snowdon & Vane 2005, p. 335. 171. ^ Durlauf, Johnson & Temple 2005, p. 568. 172. ^ a b Mankiw 2006, p. 37. 173. ^ Snowdon & Vane 2005, p. 585. 174. ^ Snowdon & Vane 2005, p. 587. 175. ^ Snowdon & Vane 2005, pp. 624–625. 176. ^ Snowdon & Vane 2005, p. 628. 177. 
^ Snowdon & Vane 2005, pp. 628–629. 178. ^ Snowdon & Vane 2005, p. 625. 179. ^ Klenow & Rodriguez-Clare 1997, p. 73. 180. ^ a b Snowdon & Vane 2005, p. 630. 181. ^ Goodfriend & King 1997, p. 256. 182. ^ Goodfriend & King 1997, pp. 255–256. 183. ^ Blanchard 2000, pp. 1404–1405. 184. ^ Mankiw 2006, p. 39. 185. ^ Kocherlakota 2010, pp. 9–10. 186. ^ Woodford 2009, pp. 272–273. 187. ^ Kocherlakota 2010, p. 6. 188. ^ Woodford 2009, p. 272. 189. ^ a b Woodford 2009, p. 273. 190. ^ Kocherlakota 2010, p. 11. 191. ^ a b Woodford 2009, p. 271. 192. ^ 193. ^ Quah 1995, p. 1594. 194. ^ 195. ^ a b Hoover 1995, p. 25. 196. ^ a b 197. ^ 198. ^ 199. ^ 200. ^ Solow 2010, p. 3. 201. ^ Solow 2010, p. 2. 202. ^ Gordon 2009, p. 1. 203. ^ Caballero 2010, p. 18. 204. ^ a b c Backhouse 2010, pp. 154. 205. ^ 206. ^ Backhouse 2010, pp. 160. 207. ^ a b 208. ^ 209. ^ 210. ^ Cottrell 1994, p. 2. 211. ^ a b Davidson 2005, p. 472. 212. ^ a b c 213. ^ a b Davidson 2003, p. 43. 214. ^ Cottrell 1994, pp. 9–10. 215. ^ Davidson 2005, p. 473. 216. ^ 217. ^ 218. ^ 219. ^ Garrison 2005, p. 475. 220. ^ Garrison 2005, pp. 480–481. 221. ^ Garrison 2005, p. 487. 222. ^ Garrison 2005, pp. 495–496. 223. ^ a b Garrison 2005, p. 508.
proofpile-shard-0030-177
{ "provenance": "003.jsonl.gz:178" }
## Found 5,633 Documents (Results 1–100) 100 MathJax MSC:  65-XX Full Text: MSC:  54A20 Full Text: ### Physics-informed neural networks for learning the homogenized coefficients of multiscale elliptic equations. (English)Zbl 07568534 MSC:  35Bxx 65Nxx 35Rxx Full Text: ### (English)Zbl 07568516 MSC:  42C10 42A16 42A20 Full Text: Full Text: ### New self-adaptive methods with double inertial steps for solving splitting monotone variational inclusion problems with applications. (English)Zbl 07567382 MSC:  47Hxx 90Cxx 47Jxx Full Text: Full Text: Full Text: ### On the robust regression for a censored response data in the single functional index model. (English)Zbl 07565483 MSC:  62G05 62G20 Full Text: Full Text: ### The CUSUM statistics of change-point models based on dependent sequences. (English)Zbl 07563014 MSC:  62Pxx 62F12 Full Text: ### Anisotropic functional deconvolution for the irregular design: a minimax study. (English)Zbl 07562249 MSC:  62G05 62G20 62G08 Full Text: ### An accelerated common fixed point algorithm for a countable family of $$G$$-nonexpansive mappings with applications to image recovery. (English)Zbl 07562146 MSC:  47Hxx 47Jxx 90Cxx Full Text: Full Text: ### Uniform convergence of local Fréchet regression with applications to locating extrema and time warping for metric space valued trajectories. (English)Zbl 07547942 MSC:  62G05 62G20 62G08 Full Text: ### A new generalized variant of the deteriorated PSS preconditioner for nonsymmetric saddle point problems. (English)Zbl 07546710 MSC:  65-XX 62-XX Full Text: Full Text: ### High-order finite difference method based on linear barycentric rational interpolation for Caputo type sub-diffusion equation. (English)Zbl 07538450 MSC:  65-XX 76-XX Full Text: Full Text: ### On the $$1/H$$-flow by $$p$$-Laplace approximation: new estimates via fake distances under Ricci lower bounds. (English)Zbl 07536909 MSC:  53E10 35D30 53C23 Full Text: Full Text: Full Text: Full Text: ### Stability and convergence analysis of adaptive BDF2 scheme for the Swift-Hohenberg equation. (English)Zbl 07526833 MSC:  65Mxx 35Kxx 35Bxx Full Text: Full Text: Full Text: Full Text: Full Text: ### Deconvolving cumulative density from associated random processes. (English)Zbl 1486.62090 MSC:  62G07 62G20 Full Text: Full Text: ### un L- and M-weakly compact operators on Banach lattices. (English)Zbl 07514806 MSC:  47B60 47B07 46B42 Full Text: Full Text: Full Text: ### Some characterizations of L-weakly compact sets using the unbounded absolute weak convergence and applications. (English)Zbl 07513931 MSC:  46B42 46B50 47B07 Full Text: Full Text: Full Text: ### Certain observations on selection principles from (a) bornological viewpoint. (English)Zbl 07506801 MSC:  54D20 54C35 54A25 Full Text: ### Approximation by a novel Miheşan type summation-integral operator. (English)Zbl 07506418 MSC:  41A25 41A36 41A35 Full Text: Full Text: Full Text: Full Text: ### Rate of convergence for geometric inference based on the empirical Christoffel function. (English)Zbl 07496937 MSC:  62G07 42C05 62G20 Full Text: ### Convergence of series of moments on general exponential inequality. (English)Zbl 1484.60037 MSC:  60F15 62F12 Full Text: ### A regular non-weakly discretely generated $$P$$-space. (English)Zbl 07491525 Reviewer: K. P. Hart (Delft) MSC:  54A25 54G10 Full Text: Full Text: ### Degree of approximation of Fourier series of functions in Besov space by deferred Nörlund mean. 
(English)Zbl 1484.42003 MSC:  42A10 42A16 42A20 Full Text: Full Text: Full Text: Full Text: ### Further applications of bornological covering properties in function spaces. (English)Zbl 1486.54038 MSC:  54D20 54C35 54A25 Full Text: Full Text: ### Optimal convergence rates for the invariant density estimation of jump-diffusion processes. (English)Zbl 07474306 MSC:  62G07 62G20 60J74 Full Text: ### Derivation of a one-dimensional von Kármán theory for viscoelastic ribbons. (English)Zbl 1486.74093 MSC:  74K20 74Q20 74D10 Full Text: Full Text: MSC:  65-XX Full Text: Full Text: ### A new concept of convergence for iterative methods: restricted global convergence. (English)Zbl 1481.35188 MSC:  35J60 47H99 65J15 Full Text: ### Local convergence comparison between frozen Kurchatov and Schmidt-Schwetlick-Kurchatov solvers with applications. (English)Zbl 07444612 MSC:  47H99 65H10 65J15 Full Text: ### Minimal usco maps and cardinal invariants of the topology of uniform convergence on compacta. (English)Zbl 1479.54037 MSC:  54C35 54C60 54A25 Full Text: ### Arhangelskii’s $$\alpha$$-principles and selection games. (English)Zbl 1462.54007 Reviewer: K. P. Hart (Delft) MSC:  54A20 54D20 91A44 Full Text: ### Strong convergence properties for weighted sums of WNOD random variables and its applications in nonparametric regression models. (English)Zbl 07553832 MSC:  60F15 62G20 Full Text: ### Lower semi-continuity for $$\mathcal{A}$$-quasiconvex functionals under convex restrictions. (English)Zbl 07549317 MSC:  49J45 35E10 35Q35 Full Text: ### On the small sample behavior of Dirichlet process mixture models for data supported on compact intervals. (English)Zbl 07545579 MSC:  62G07 62F15 62G20 Full Text: ### Minimax rate in prediction for functional principal component regression. (English)Zbl 07532946 MSC:  62G20 62-XX Full Text: ### Local linear estimation of the conditional quantile for censored data and functional regressors. (English)Zbl 07530981 MSC:  62G05 62G20 62-XX Full Text: ### $$G$$-attractor and $$G$$-expansivity of the $$G$$-uniform limit of a sequence of dynamical systems. (English)Zbl 07528095 MSC:  37B05 37C85 54A20 Full Text: ### Moments estimators and omnibus chi-square tests for some usual probability laws. (English. French summary)Zbl 1485.62017 MSC:  62F03 62F12 62F15 Full Text: ### A convergent finite difference method for optimal transport on the sphere. (English)Zbl 07515866 MSC:  35Jxx 65Nxx 35Bxx Full Text: ### Bayesian high-dimensional semi-parametric inference beyond sub-Gaussian errors. (English)Zbl 1485.62089 MSC:  62J05 62F15 62G20 Full Text: Full Text: Full Text: ### A robust iterative approach for solving nonlinear Volterra delay integro-differential equations. (English)Zbl 07504263 MSC:  47Hxx 47Jxx 65Jxx Full Text: ### Functions with general monotone Fourier coefficients. (English. Russian original)Zbl 07503404 Russ. Math. Surv. 76, No. 6, 951-1017 (2021); translation from Usp. Mat. Nauk 76, No. 6, 3-70 (2021). Full Text: ### Spectral expansions of non-self-adjoint generalized Laguerre semigroups. (English)Zbl 07492804 Memoirs of the American Mathematical Society 1336. Providence, RI: American Mathematical Society (AMS) (ISBN 978-1-4704-4936-0/pbk; 978-1-4704-6752-4/ebook). v, 182 p. (2021). Full Text: Full Text: Full Text: ### Spike and slab Pólya tree posterior densities: adaptive inference. (English)Zbl 07481258 MSC:  62G20 62G07 62G15 Full Text: Full Text: ### On the curved exponential family in the stochastic approximation expectation maximization algorithm. 
(English)Zbl 07474301 MSC:  62F12 62L20 Full Text: ### Complete $$f$$-moment convergence of moving average processes and its application to nonparametric regression models. (English)Zbl 07473162 MSC:  62G20 62Jxx Full Text: Full Text: ### Smoothness estimation of nonstationary Gaussian random fields from irregularly spaced data observed along a curve. (English)Zbl 07471524 MSC:  62M30 62F12 Full Text: ### Rate of estimation for the stationary distribution of jump-processes over anisotropic Hölder classes. (Rate of estimation for the stationary distribution of jump-processes over anisotropic Holder classes.) (English)Zbl 07471500 MSC:  62G07 62G20 60J74 Full Text: Full Text: ### Outer space as a combinatorial backbone for Cutkosky rules and coactions. (English)Zbl 1484.81043 Bluemlein, Johannes (ed.) et al., Anti-differentiation and the calculation of Feynman amplitudes. Selected papers based on the presentations at the conference, Zeuthen, Germany, October 2020. Cham: Springer. Texts Monogr. Symb. Comput., 279-312 (2021). Full Text: ### Complete $$f$$-moment convergence for Sung’s type weighted sums and its application to the EV regression models. (English)Zbl 1482.60049 MSC:  60F15 62F12 62J99 Full Text: Full Text: Full Text: Full Text: Full Text: ### Rate of convergence of a risk estimator to the normal law in a multiple hypothesis testing problem using the FDR threshold. (English. Russian original)Zbl 1479.62058 Mosc. Univ. Comput. Math. Cybern. 45, No. 3, 114-119 (2021); translation from Vestn. Mosk. Univ., Ser. XV 2021, No. 3, 31-36 (2021). MSC:  62J15 62F12 62H15 Full Text: ### Density and capacity of balleans generated by filters. (English)Zbl 1480.54020 Ukr. Math. J. 73, No. 4, 547-555 (2021) and Ukr. Mat. Zh. 73, No. 4, 467-473 (2021). Full Text: ### Almost everywhere convergence of the Cesàro means of two variable Walsh-Fourier series with variable parameters. (English)Zbl 1479.42073 Ukr. Math. J. 73, No. 3, 337-358 (2021) and Ukr. Mat. Zh. 73, No. 3, 291-307 (2021). Full Text: Full Text: ### On the rate of convergence of fully connected deep neural network regression estimates. (English)Zbl 1486.62112 MSC:  62G08 62G20 68T07 Full Text: Full Text: ### Approximation properties of Vallée Poussin means for special series of ultraspherical Jacobi polynomials. (English)Zbl 1484.33015 Kusraev, Anatoly G. (ed.) et al., Operator theory and differential equations. Selected papers based on the presentations at the 15th conference on order analysis and related problems of mathematical modeling, Vladikavkaz, Russia, July 15–20, 2019. Cham: Birkhäuser. Trends Math., 107-120 (2021). MSC:  33C45 41A25 42C10 Full Text: Full Text: Full Text: Full Text: all top 5 all top 5 all top 5 all top 3 all top 3
proofpile-shard-0030-178
{ "provenance": "003.jsonl.gz:179" }
# What are the services of Network Security in Computer Network?

Computer network security consists of measures taken by businesses and organizations to monitor and prevent unauthorized access by outside attackers. Approaches to network security management have different requirements depending on the size of the network. For example, a home office requires only basic network security, while large businesses require extensive measures to protect the network from malicious attacks.

## Network Security Services

The various services of network security are as follows −

### Message Confidentiality

Message confidentiality or privacy means that the sender and the receiver expect confidentiality. The transmitted message must make sense only to the intended receiver. To all others, the message must be garbage. When a customer communicates with her bank, she expects that the communication is totally confidential.

### Message Integrity

Message integrity means that the data must arrive at the receiver exactly as they were sent. There must be no changes during the transmission, neither accidental nor malicious. As more and more monetary exchanges occur over the Internet, integrity is crucial. For example, it would be disastrous if a request for transferring $100 changed into a request for $10,000 or $100,000. The integrity of the message must be preserved in secure communication.

### Message Authentication

Message authentication is a service beyond message integrity. In message authentication, the receiver needs to be sure of the sender's identity and that an imposter has not sent the message.

### Message Nonrepudiation

Message nonrepudiation means that a sender must not be able to deny sending a message that he or she, in fact, did send. The burden of proof falls on the receiver. For example, when a customer sends a message to transfer money from one account to another, the bank must have proof that the customer actually requested this transaction.

### Entity Authentication

In entity authentication (or user identification), the entity or user is verified prior to access to system resources (files, for example). For example, a student who needs to access her university resources must be authenticated during the login process. This is to protect the interests of the university and the student.
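As an illustration of how message integrity and message authentication are commonly provided in practice (this example is an addition to the article above and uses Python's standard hmac and hashlib modules; the key value is purely hypothetical), a message authentication code computed with a shared secret key lets the receiver detect any modification of the message and confirm that it came from someone who knows the key:

```python
import hmac
import hashlib

# Shared secret key, agreed upon out of band (hypothetical value for illustration)
SECRET_KEY = b"bank-and-customer-shared-secret"

def sign(message: bytes, key: bytes = SECRET_KEY) -> str:
    """Compute an HMAC-SHA256 tag that the sender attaches to the message."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str, key: bytes = SECRET_KEY) -> bool:
    """Receiver recomputes the tag; a mismatch means tampering or an imposter."""
    return hmac.compare_digest(sign(message, key), tag)

msg = b"transfer $100 from account A to account B"
tag = sign(msg)
print(verify(msg, tag))                                      # True: message arrived intact
print(verify(b"transfer $10,000 from account A to B", tag))  # False: altered in transit
```

Note that a shared-key MAC does not by itself provide nonrepudiation, since either party could have produced the tag; that service typically requires digital signatures.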
proofpile-shard-0030-179
{ "provenance": "003.jsonl.gz:180" }
# Can you explain this 'Theory of Everything' formula? Tags: 1. Jun 5, 2015 ### Yohanes Nuwara I recently come across with an amazing equation of Theory of Everything; I wonder if TOE has been formulated (???) I found this equation on a website, check it out http://www.preposterousuniverse.com...world-of-everyday-experience-in-one-equation/. While seeing briefly this equation, I simply don't understand what this means because there are no explanations of all the units used in this formula (e.g. ψ, Φ, i, A, α, etc). What is W? Can you explain what the units are and what all of these mean? Does this formula be the real formula of TOE? Thank you :) 2. Jun 5, 2015 ### The_Duck This equation is more a "theory of everything we know so far." It's a summary of the standard model of particle physics and general relativity. Unfortunately understanding any given term in the equation requires a lot of background knowledge. Here's the general idea: $A$ and $F$ are related to the electric and magnetic fields and their analogs for the weak and strong force. $\Phi$ is the Higgs field. $g$ is the "metric of spacetime" which you can think of as the gravitational field. $R$ is a measure of the strength of the gravitational field. $\psi$ represents all matter particles such as electrons and quarks. $V_{ij}$ is kind of a table of particle masses. $W$ is the "partition function" which is a tool for calculating things in quantum mechanics. When people talk about a "theory of everything" they want something that goes beyond what is summarized in this equation: something that resolves certain problems with quantum mechanics and general relativity and explains where the particles and forces that we know about come from. Last edited: Jun 5, 2015
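For readers who cannot view the attached image, the quantity being discussed is, schematically, a path integral. The following rendering is an editorial sketch assembled from the standard ingredients listed in the answer above (an Einstein–Hilbert term, gauge kinetic terms, Dirac terms, the Higgs kinetic term and potential, and Yukawa couplings); it is not an exact transcription of the equation on the linked page, and signs, normalizations, and index conventions vary from source to source:

$$W=\int [Dg]\,[DA]\,[D\psi]\,[D\Phi]\;\exp\left\{i\int d^4x\,\sqrt{-g}\left[\frac{R}{16\pi G}-\frac{1}{4}F^{a}_{\mu\nu}F^{a\,\mu\nu}+i\bar\psi\gamma^\mu D_\mu\psi+\left|D_\mu\Phi\right|^2-V(\Phi)-\left(\bar\psi_i V_{ij}\Phi\,\psi_j+\text{h.c.}\right)\right]\right\}$$

Here the integration is over field configurations, $D_\mu$ denotes the appropriate covariant derivatives, and each term corresponds to one of the pieces identified in the answer above.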
proofpile-shard-0030-180
{ "provenance": "003.jsonl.gz:181" }
# How to evaluate integral using complex analysis

How can I use complex analysis to find the following integral:

$$\int_{0}^{\infty} \frac{\log x \ dx }{(1+x^3)^2}$$

Can you suggest a proper contour and hints to tackle further integration.

• This page is what you are looking for en.wikipedia.org/wiki/… – b00n heT Nov 23 '16 at 14:36
• the approach i used here will work: math.stackexchange.com/questions/1859034/… – tired Nov 23 '16 at 14:40
• by the way i would recommend a beta function approach instead of contour integration. seems to be less cumbersome – tired Nov 23 '16 at 15:01

The contour you can take is this (a keyhole contour around the positive real axis: a large circle of radius $R$, a small circle of radius $\epsilon$ around the origin, and the two edges $L^+$ and $L^-$ of the cut along the positive real axis).

Hints for the calculations

Your integral is of the form $$\int_0^{+\infty} R(x)\log(x)\ \text{d}x$$ We choose the contour of integration $\Gamma(R, \epsilon)$ as described above, and we notice that $$\lim_{x\to \infty} xR(x) = 0$$ $R(x)$ has no poles for $x\geq 0$ so we can proceed. Also, our path is chosen with $0 < \theta < 2\pi$ for $\arg(z)$. $$\log(z) = \log|z| + i\theta ~~~~~~~~~~~~~ \theta = \arg(z)$$ To solve it, we consider the integral of $R(x)\log^2(x)$ instead. Along that path, $\log^2(z)$ has no singularity inside $\Gamma(R, \epsilon)$ and also $$zR(z) \log^2(z) \to 0 ~~~~~~~ |z| \to \infty$$ because the degree of the denominator of $R(z)$ exceeds the degree of the numerator by at least $2$. Also $$zR(z)\log^2(z) \to 0 ~~~~~~~ z\to 0$$ So we have: $$2\pi i\ \sum \ \text{Res}\ (R(z)\log^2(z)) = \left(\int_{L^+} + \int_{L^-}\right) R(z)\log^2(z)\ \text{d}z$$ On $L^+$ we have $z = x e^{i0^+}$ hence $$\log z = \log x + i0^+ = \log x$$ and on $L^-$ we have $$\log z = \log x + i2\pi$$ So $$\left(\int_{L^+} + \int_{L^-}\right) R(z)\log^2(z)\ \text{d}z = \int_0^{+\infty} R(x)\left[\log^2 x - (\log x + 2\pi i)^2\right]\ \text{d}x$$ $$= 4\pi^2\int_0^{+\infty} R(x)\ \text{d}x - 4\pi i\int_0^{+\infty} R(x)\log(x)\ \text{d}x$$ Hence we end up with the important formula $$\int_0^{+\infty} R(x)\log(x)\ \text{d}x = -\frac{1}{2}\Re\left[\sum \ \text{Res}\ (R(z)\log^2(z))\right]$$ So what you need to do is just to compute the residues of $$\frac{\log^2(z)}{(1+z^3)^2}$$

Hints for the residues

Notice that $$1+z^3 = (z+1)(z^2-z+1)$$ Poles are $$z_0 = e^{\pi i} = -1$$ $$z_1 = e^{i\pi/3}$$ $$z_2 = e^{5i\pi/3}$$ Or you may prefer to calculate the roots of $z^2-z+1$, it's the same. Notice that all your poles are of order two, because you have $$(1+z^3)^2 \longrightarrow ((z+1)(z^2-z+1))^2 = (z+1)^2(z^2-z+1)^2$$ Evaluate them, follow the rule above, take the real part and you will get the result $$\boxed{-\frac{4\pi^2}{81} - \frac{2\sqrt{3}\pi}{27}}$$

• Thanks a ton! Even though I nearly reached this, I was stuck on finding the value of $4\pi^2\int_0^{+\infty} R(x)\ \text{d}x$. I am trying again to figure out how it turns out to be 0. – Chaitanya Mukka Nov 24 '16 at 14:23
• Oh! I got it. As $4\pi^2\int_0^{+\infty} R(x)\ \text{d}x$ when divided by $4\pi i$ gives me an imaginary number, it doesn't contribute to the result. That solves the problem! – Chaitanya Mukka Nov 24 '16 at 14:38

Hint: Choose the contour as: $C=[-R,-r]\cup[r, R]\cup C_R\cup \gamma_r$, where $C_R$ is the upper half circle with radius $R$ and $\gamma_r$ is the small upper half circle with radius $r<1$ bypassing $0$. Prove that on $C_R$ and $\gamma_r$ the integral approaches $0$ as $R\to \infty$ and $r\to0$. Note that on $[-R,-r], \: \log(-x)=\log x+\pi i$.
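A quick cross-check of the boxed value, along the lines of the beta-function approach suggested in the comments (this verification is an addition, not part of the original answers). With the substitution $u = x^3$,

$$I(s)=\int_0^{\infty}\frac{x^{s-1}}{(1+x^3)^2}\,\text{d}x=\frac{1}{3}B\!\left(\frac{s}{3},\,2-\frac{s}{3}\right)=\frac{\pi}{3}\cdot\frac{1-\frac{s}{3}}{\sin\frac{\pi s}{3}},\qquad 0<s<6,$$

and the integral in question is $I'(1)$. Differentiating and evaluating at $s=1$, where $\sin\frac{\pi}{3}=\frac{\sqrt{3}}{2}$ and $\cos\frac{\pi}{3}=\frac{1}{2}$, gives

$$I'(1)=\frac{\pi}{3}\left(-\frac{2\sqrt{3}}{9}-\frac{4\pi}{27}\right)=-\frac{2\sqrt{3}\,\pi}{27}-\frac{4\pi^2}{81},$$

in agreement with the residue computation above.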
The Museum of HP Calculators HP Articles Forum How to use a formula in a post Posted by Thomas Klemm on 8 Nov 2010, 4:11 p.m. Let's assume you want to use a formula in your post: $e^{i\pi}+1=0$ How can you do that? 1. Use the equation editor to enter the formula. 2. As a result you get the formula written in LaTeX: e^{i\pi}+1=0 3. Choose URL Encoded at the bottom and copy it: http://latex.codecogs.com/gif.latex?e%5E%7Bi%5Cpi%7D&plus;1%3D0 4. Insert the result in your post within an image-tag. [img:http://latex.codecogs.com/gif.latex?e%5E%7Bi%5Cpi%7D&plus;1%3D0] Egan Ford posted the following: If you are too lazy to upload images you can use aamath for "a-text": $echo "e^(i*pi)+1=0" | aamath __ i || e + 1 = 0$ echo "sum([n-1;r],r = 0 .. min(n-1,4))" | aamath min(n - 1, 4) ===== \ / n - 1 \ > | | / \ r / ===== r = 0 Don Shepherd noted: Quote: Yeah, the MoHPC does have many of those symbols here: http://www.hpmuseum.org/software/ arrowl.gif delta.gif deltap.gif diamond.gif divide.gif dwnarrow.gif dwnquest.gif exch.gif exchi.gif gto.gif integral.gif interg.gif lesseq.gif meanx.gif noteq.gif notequal.gif pi.gif pict0.jpg rdelta.gif rolldn.gif rollup.gif rtbldarw.gif sigma.gif sqrt.gif stee.gif sum.gif summi.gif sumpl.gif symangle.gif symdel.gif symdelc.gif symgamma.gif symint.gif symnoteq.gif symphic.gif sympi.gif sympic.gif symsqrt.gif symsum.gif symsumm.gif symsump.gif symsums.gif symtheta.gif symthetc.gif uline.gif uparrow.gif yhat.gif Didier Lachieze proposed: One way could be to use GIF/PNG images such as the ones HERE. For example: Edited: 18 Oct 2012, 4:43 p.m.
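If you prefer to script the URL-encoding step rather than copy it out of the equation editor, something along the following lines works. This is a small illustrative helper written for this note, not a forum feature; the base URL is the codecogs address used in the example above, and the percent-encoding produced by quote() may differ cosmetically (e.g. %2B versus &plus;) from what the editor emits.

```python
# Build a codecogs image URL for a LaTeX formula, so it can be dropped
# into a forum [img:...] tag as shown above.
from urllib.parse import quote

def codecogs_url(latex_source: str) -> str:
    # quote() percent-encodes characters such as ^, {, } and backslashes
    return "http://latex.codecogs.com/gif.latex?" + quote(latex_source)

print(codecogs_url(r"e^{i\pi}+1=0"))
# -> http://latex.codecogs.com/gif.latex?e%5E%7Bi%5Cpi%7D%2B1%3D0
```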
# Combining like terms with distribution and negative numbers

**Exercise Name:** Combining like terms with distribution and negative numbers
**Math Missions:** 7th grade (U.S.) Math Mission, Algebra basics Math Mission, Mathematics I Math Mission, Algebra I Math Mission, Mathematics II Math Mission
**Types of Problems:** 3

The Combining like terms with distribution and negative numbers exercise falls under the 7th grade (U.S.) Math Mission, Algebra basics Math Mission, Mathematics I Math Mission, Algebra I Math Mission and Mathematics II Math Mission. The objective of this exercise is to show that the distributive property can be used with variables too.

## Types of problems

There are three types of problems in this exercise:

1. Simplify the following expression - This kind of problem has an expression and the user is required to simplify it using the distributive property.

## Strategies

1. It is hard to get the speed badges and the accuracy badges in this skill. Using the distributive property on the expression may take some time, and combining the like terms and simplifying further may take time too.
2. Suppose one has ${a(b+c)}$; they must multiply $a$ (the term outside) by the two terms inside, getting ${ab+ac}$. (A short worked example with negative numbers follows at the end of this page.)

## Real-life Applications

1. Combining like terms is one of the most fundamental simplification procedures. Virtually all equations employ some kind of collecting like terms.
2. Combining like terms can be viewed as a justification for getting common denominators when adding fractions (although circular). The different denominators are different "terms."
3. Knowledge of algebra is essential for higher math levels like trigonometry and calculus. Algebra also has countless applications in the real world.
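As referenced in the Strategies section above, here is a small worked example, written out in LaTeX purely for readability, showing distribution with a negative factor followed by collecting like terms. The specific numbers are made up for illustration; they are not taken from the exercise itself.

```latex
\begin{align*}
-3(2x - 5) + 4x &= (-3)(2x) + (-3)(-5) + 4x && \text{distribute the } -3\\
                &= -6x + 15 + 4x              && \text{note } (-3)(-5) = +15\\
                &= (-6x + 4x) + 15            && \text{group like terms}\\
                &= -2x + 15.
\end{align*}
```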
1 In a certain town, the probability that it will rain in the afternoon is known to be $0.6$. Moreover, meteorological data indicates that if the temperature at noon is less than or equal to $25°C$, the probability that it will rain in the afternoon is $0.4$. The temperature at noon is ... will rain in the afternoon on a day when the temperature at noon is above $25°C$? $0.4$ $0.6$ $0.8$ $0.9$ 1 vote 2 3 Consider the following statements: 1. Let T be the DFS tree resulting from DFS traversal on a connected directed graph the root of the tree is an articulation point, iff it has at least two children. 2. When BFS is carried out on a directed graph G, the edges of G will ... as tree edge, back edge, or cross edge and not forward edge as in the case of DFS. Find TRUE or FALSE for both the statements 4 A push down automation (pda) is given in the following extended notation of finite state diagram: The nodes denote the states while the edges denote the moves of the pda. The edge labels are of the form $d$, $s/s'$ where $d$ is the input symbol read and $s, s'$ are the stack ... states in the above notation that accept the language $\left\{0^{n}1^{m} \mid n \leq m \leq 2n\right\}$ by empty stack 5 Consider the set $\{a, b, c\}$ with binary operators $+$ and $*$ defined as follows: ... $(b * x) + (c * y) = c$ The number of solution(s) (i.e., pair(s) $(x, y)$ that satisfy the equations) is $0$ $1$ $2$ $3$ 6 A logical binary relation $\odot$ ... to $A\wedge B$ ? $(\sim A\odot B)$ $\sim(A \odot \sim B)$ $\sim(\sim A\odot\sim B)$ $\sim(\sim A\odot B)$ 7 Fuzzy logic is used in artificial intelligence. In fuzzy logic, a proposition has a truth value that is a number between 0 and 1, inclusive.A proposition with a truth value of 0 is false and one with a truth value of 1 is true. Truth values that are between 0 ... nth statement is At least n of the statements in this list are false. Answer part (b) assuming that the list contains 99 statements 8 Which of the following statement(s) is/are correct? P: For a dynamic programming algorithm, computing all values in a bottom-up fashion is asymptotically faster than using recursion Q: The running time of a dynamic programming algorithm is always Θ(P) where P is the number of sub-problems.( Marks: -0.66 ) I mark only P is true. Answer neither P and Q 9 10 Maximum no of edges in a triangle-free, simple planar graph with 10 vertices 11 Let f (n) = Ο(n), g(n) = Ο(n) and h(n) = θ(n). Then [f (n) . g(n)] + h(n) is : a) Ο(n) b)θ(n) I think it must be 0(n) 12 Hi Guys, In SQL, <condition> ALL evaluates to TRUE if inner query returns no tuples. { X < ALL (empty) == TRUE } <condition> ANY evaluates to FALSE if inner query returns no tuples. { X < ANY (empty) == FALSE } But what is the logical reason behind this ? PS: ping @Krish__, @Anu007, @Ashwin Kulkarni @reena_kandari and @srestha ji. 1 vote 13 given : 1/4 and 1 1/4 = theta(1) is this correct or only this 1/4 = O(1) 1 vote 14 The number of function from set {1, 2, 3, 4, 5, 6, 7, 8} to set {0, 1} such that assign 1 to exactly one of given number less than 8 are ....................... 1 vote 15 How many different ways are there to seat four people around a circular table, where two seatings are considered the same when each person has the same left neighbor and the same right neighbor? ANSWER IS 6 OR 3 .???? 16 17 VIPT PIPT PIVT VIVT 1 vote 18 why we do indexing and tagging in cache ?? 
19 When the sum of all possible two digit numbers formed from three different one digit natural numbers are divided by sum of the original three numbers, the result is $26$ $24$ $20$ $22$ 20 21 Why do we need a "trusted third party" between a client and a receiver when sending a message with a digital signature? I mean what are the consequences if we don't do that? 22 An $n \times n$ matrix $M$ with real entries is said to be positive definite if for every non-zero $n$-dimensional vector $x$ with real entries, we have $x^{T}Mx>0.$ Let $A$ and $B$ be symmetric, positive definite matrices of size $n\times n$ with real entries. ... $(2)$ Only $(3)$ Only $(1)$ and $(3)$ None of the above matrices are positive definite All of the above matrices are positive definite 23 Consider the following subset of $\mathbb{R} ^{3}$ (the first two are cylinder, the third is a plane): $C_{1}=\left \{ \left ( x,y,z \right ): y^{2}+z^{2}\leq 1 \right \};$ ... Let $A = C_{1}\cap C_{2}\cap H.$ Which of the following best describe the shape of set $A?$ Circle Ellipse Triangle Square An octagonal convex figure with curved sides 24 Consider a point $A$ inside a circle $C$ that is at distance $9$ from the centre of a circle. Suppose you told that there is a chord of length $24$ passing through $A$ with $A$ as its midpoint. How many distinct chords of $C$ have integer length and pass through $A?$ $2$ $6$ $7$ $12$ $14$ 25 Which of the following language generated by given grammar? 1) L = {w : na(w) and nb(w) both are even} 2) L = {w : na(w) and nb(w) both are odd} 3) L = {w : na(w) or nb(w) are even} 4) L = {w : na(w) or nb(w) are odd} 26 Convert following infix to prefix expression e^d-a*b^f/g+h*c/i+j-k Explain each step 27 The second moment of a Poisson-distributed random variable is 2. The mean of the variable is .... My question on solving we get 2 values of lamda(ie mean) .One is -2 and the other is 1 .So which one to choose? 28 An array $X$ of n distinct integers is interpreted as a complete binary tree. The index of the first element of the array is $0$. If the root node is at level $0$, the level of element $X[i]$, $i \neq 0$, is $\left \lfloor \log _2 i \right \rfloor$ $\left \lceil \log _2 (i+1)\right \rceil$ $\left \lfloor \log _2 (i+1) \right \rfloor$ $\left \lceil \log _2 i \right \rceil$ 29 Consider a function F from set A to B having A={1,2,...n} and B={1,2,....m} Find number's of f in F where f is defined as : 1. f(i)<=f(j) and 1<=i<=j<=n 2.f(i)< f(j) and 1<=i<=j<=n 3. f(i) >=f(j) and 1<=i<=j<=n 4. f(i) > f(j) and 1<=i<=j<=n.
# The FHP Lattice Gas Cellular Automaton for Simulating Fluid Flows

Module by: Anthony Austin.

Summary: This report summarizes work done as part of the Physics of Strings PFUG under Rice University's VIGRE program. VIGRE is a program of Vertically Integrated Grants for Research and Education in the Mathematical Sciences under the direction of the National Science Foundation. A PFUG is a group of Postdocs, Faculty, Undergraduates and Graduate students formed round the study of a common problem. This module discusses the use of the FHP lattice gas cellular automaton for simulating fluid flows based on the description in (CITE). Code for simulating flow past an obstacle in a rectangular channel with periodic boundary conditions at the inflow and outflow edges is provided.

## Introduction

Solving fluid flows is an important everyday task for engineers, physicists, and applied mathematicians. It can also be rather complex, especially when simulating flows with high Reynolds numbers. The classic approach to tackling this problem is to solve the Navier-Stokes equations directly using, for example, finite element or finite difference methods, but for domains with complicated geometries, it can be difficult to find and implement a suitable mesh over which to apply these techniques. In cases like these, it may be more desirable to accurately simulate the behavior of the flow instead of solving for it outright. To this end, scientists and mathematicians have devised lattice gas and lattice Boltzmann methods for modeling fluid flow. In our VIGRE seminar this semester, we implemented a basic 2-D version of the FHP lattice gas cellular automaton (LGCA) with an eye towards extending this model to simulate string motion in fluid flow.

## Brief Description of the FHP Model

The LGCA approach to simulating fluid flow involves defining a lattice on which a large number of simulated particles move. Each particle has a mass and a velocity and therefore a momentum.
The rules for particle interaction are chosen so that, in the macroscopic limit (i.e., when the momentum vectors are averaged over relatively large subdomains of the entire lattice), the resulting flow obeys the Navier-Stokes equations. For the basic FHP model in 2-D, the lattice is chosen to have hexagonal symmetry. This means that a particle in the FHP LGCA can move from one node to another with six possible lattice velocities. (These velocities are equal in magnitude but vary with direction. So-called "multi-speed" FHP models allow for even more possibilities, but we do not discuss these here.) The reason for the hexagonal symmetry is that it has been shown that certain key tensors fail to be isotropic on any lattice with a lesser degree of symmetry (e.g., a cubic lattice), which prevents the model from yielding the Navier-Stokes equations in the macroscopic limit ([1], pgs. 38, 51) When particles meet at a node, it is possible for a collision to occur. The basic FHP model defines only two- and three-particle collisions, but it turns out that this is sufficient to yield the desired behavior. (More sophisticated models may define more complicated collision interactions.) If two particles travelling in opposite directions meet at a node, then the particle pair is randomly rotated either clockwise or counterclockwise by sixty degrees. If three particles meet at a node in a symmetric configuration, then they collide in such a way that this configuration is “inverted." Perhaps an illustration will help clarify: For a more detailed description of the FHP model as well as descriptions of some other possible LGCAs, see [1]. ## Performing a Simulation with the FHP Model Simulations that use this model proceed in essentially three steps. First, the simulation must be initialized with several parameters, such as the size and shape of the domain, number of timesteps, and the initial placement and velocity of the particles. In the code we provide below, we use a simple rectangular-shaped domain with walls on the upper and lower edges to simulate flow in a channel. We place a single particle at every node in the domain (except at those on walls and other flow obstacles) moving in the rightward direction. The simulation must also have some way of knowing where the obstacles (e.g., walls) to the fluid flow are located and how the fluid ought to behave when it encounters those obstacles. That is, the boundaries of the domain must be defined, and boundary conditions must be provided. There are several possible types of boundary conditions that can be used; however, the most frequently used boundary conditions are of the no-slip type, and these are the boundary conditions used in the code below. To implement these, one simply requires that when a particle enters a node at a channel wall or obstacle boundary, it is reflected back in the direction in which it came (i.e., its velocity vector is turned around by 180 degrees). For simplicity, at the left and right edges of our channel (where there are no walls), we have implemented periodic boundary conditions. That is, we have “linked" the left and right sides of the channel together. (In reality, this means that we are no longer simulating flow through a rectangular channel, but rather through a channel of a circular shape, but by choosing the channel dimensions to be sufficiently large, the effects of this on our simulation can be made negligible.) 
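Before moving on, it may help to see the collision and bounce-back rules above written out for a single node. The following fragment is in Python purely for compactness (the appendix code is MATLAB); the function name collide_node and the exact indexing convention are invented for this sketch, and the propagation step is omitted.

```python
import random

def collide_node(cells, is_obstacle):
    """Apply the basic FHP collision rules to one node.

    cells is a list of six 0/1 occupancy flags, indexed so that
    cell (i + 3) mod 6 is the direction opposite cell i.
    """
    if is_obstacle:
        # No-slip (bounce-back): every particle reverses direction.
        return cells[3:] + cells[:3]

    n = sum(cells)
    if n == 2:
        # Head-on two-particle collision: rotate the pair by 60 degrees,
        # choosing clockwise or counterclockwise at random.
        i = cells.index(1)
        if i < 3 and cells[i + 3] == 1:
            shift = random.choice([1, -1])
            return [cells[(j - shift) % 6] for j in range(6)]
    elif n == 3 and cells[0] == cells[2] == cells[4]:
        # Symmetric three-particle configuration: invert the node.
        return [1 - c for c in cells]

    return cells  # no collision rule applies
```

For instance, collide_node([1, 0, 0, 1, 0, 0], False) returns either [0, 1, 0, 0, 1, 0] or [0, 0, 1, 0, 0, 1], i.e. the head-on pair rotated by sixty degrees one way or the other.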
It is possible to implement true inflow and outflow boundary conditions, but these (especially the latter) can be quite complicated. Next, once the simulation is initialized, it can be carried out. Performing the actual simulation consists of carrying out the appropriate collisions at each node and the propagating (or streaming) the particles to their next nodes in the lattice. This process is repeated for each timestep until all the runs are completed. Finally, to obtain the actual flow from the simulation, the particle velocities are averaged over large subdomains. Subdomain size can vary, but due to the high susceptability of LGCAs to statistical noise (see “Notes on Performance") section, below and [1], pg. 157), larger subdomains will yield more accurate, albeit less detailed, results. ## Results To test our code, we simulated flows past two different types of obstacles – a flat plate and a circular cylinder – and also examined the effects of choosing different subdomain sizes over which to average. All simulations were carried out on a grid 640 nodes by 256 nodes in dimension with 2000 timesteps. The resulting flow fields were: ## Notes on Performance Though lattice gas models are convenient, they suffer from several drawbacks, the most notable of which is statistical noise. As the figures in the previous section illustrate, there is a significant tradeoff between flow field accuracy and model detail for a fixed domain size. As with any type of simulation, it is possible to get better, more accurate and detailed results by simply enlarging the domain and taking a greater number of timesteps. Unfortunately, this means a larger consumption of computing resources. Generating each of the above figures using the code below took between three and four hours of time on a Dell Latitude D410 laptop running MATLAB on Windows XP with 1 GB RAM and a 2 GHz Intel processor. To enlarge the domain by 10 times in each dimension and take 10 times as many timesteps, as is done in generating the figures on pages 83-84 of [1], would require a enormous amount of memory and runtime. To make the simulation usable, it will be necessary in the future to improve the simulation algorithm (e.g., by implementing it on a bitwise level as discussed on pg. 42 of [1]). Additionally, this model lends itself easily to parallelization, providing another possibility for performance improvement. ## Conclusion We have successfully implemented the basic version of the FHP lattice gas cellular automaton for simulating fluid behavior. Though our present implementation is a bit inefficient, it appears to give the expected results, and we know ways that it may be improved. Assembling this model has allowed us to take a step in the direction of our ultimate goal is of investigating the motion of vibrating strings in fluids. ## Acknowledgements This Connexions module describes work conducted as part of Rice University's VIGRE program, supported by National Science Foundation grant DMS–0739420. ## Appendix: FHP LGCA Code % % fhp_.m -- Uses the FHP LGCA model to simulate the flow of a fluid % past a plate in a wide channel with no-slip % boundary conditions. This code aims to implement the FHP % LGCA as described in_ Lattice Gas Cellular Automata and % Lattice Boltzmann Models_ by Wolf-Gladrow. Periodic % boundary conditions are assumed at the channel's left % and right edges. % % WRITTEN BY: Anthony P. Austin, February 11, 2009 function fhp tic; % Time program exectution. % Number of nodes in each direction. 
These must be multiples of 32 % for the coarse graining to work. numnodes_x = 6400/10; numnodes_y = 2560/10; % Number of timesteps over which to simulate. t_end = 5; % 3D array of nodes to store the vectors that represent the occupied % cells at each node. % 0 - Cell unoccupied. % 1 - Cell occupied. % % 1st Index -- Node x-coordinate. % 2nd Index -- Node y-coordinate. % 3rd Index -- Cell number % % The elements of the occupancy vectors correspond to the cells in the % following way: % % 3 2 % \ / % 4 - O - 1 % / \ % 5 6 % % Observe that this convention differs slightly from Wolf-Gladrow's. % nodes = zeros(numnodes_x, numnodes_y, 6); % Define the lattice velocities. c1 = [1; 0]; c2 = [cos(pi/3); sin(pi/3)]; c3 = [cos(2*pi/3); sin(2*pi/3)]; c4 = [-1; 0]; c5 = [cos(4*pi/3); sin(4*pi/3)]; c6 = [cos(5*pi/3); sin(5*pi/3)]; % Define a matrix that indicates where the flow obstacles are. % 0 - No obstacle present at that node. % 1 - Obstacle at the node. % % Don't forget to put 1's at the interior points, too! obstacle = zeros(numnodes_x, numnodes_y); % Insert a flat plate as the obstacle. for (j = 880/10:1:1680/10) obstacle(1280/10, j) = 1; end % Insert a circular cylinder as the obstacle. %{ theta = 0:0.001:2*pi; xc = round(168 + 40*cos(theta)); yc = round(128 + 40*sin(theta)); for (i = 1:1:length(theta)) obstacle(xc(i), yc(i)) = 1; end for (i = 1:1:numnodes_x) currrow = obstacle(i, :); n = find(currrow, 1, 'first'); m = find(currrow, 1, 'last'); if (~isempty(n)) for (j = n:1:m) obstacle(i, j) = 1; end end end %} % Set up the simulation. for (i = 1:1:numnodes_x) for (j = 2:1:(numnodes_y - 1)) % Don't include the top and bottom walls. % Skip points on the obstacle boundary if (obstacle(i, j) ~= 1) curr_cell = nodes(i, j, :); % Get the cell for the current node. curr_cell(1) = 1; % Put a particle in the cell flowing in the % rightward direction. nodes(i, j, :) = curr_cell; % Reinsert the cell into the array. end end end % Carry out the simulation. for (t = 1:1:t_end) % Carry out collisions at non-boundary nodes. for (i = 1:1:numnodes_x) for (j = 2:1:(numnodes_y - 1)) % Don't include the top and bottom walls. % Ensure that there's no obstacle in the way. if (obstacle(i, j) ~= 1) % Extract the current cell. cell_oc = nodes(i, j, :); % Determine how many particles are in the cell. numparts = sum(cell_oc); % Determine and execute appropriate collision. if ((numparts ~= 2) && (numparts ~= 3)) % No collision occurs. nodes(i, j, :) = cell_oc; elseif (numparts == 3) % Three-particle collisions. % We require a symmetric configuration. if ((cell_oc(1) == cell_oc(3)) && (cell_oc(3) == cell_oc(5))) % Invert the cell contents. nodes(i, j, :) = ~cell_oc; else nodes(i, j, :) = cell_oc; end else % Two-particle collisions. % Find the cell of one of the particles. p1 = find(cell_oc, 1); % We need its diametric opposite to be occupied as well. if ((p1 > 3) || (cell_oc(p1 + 3) ~= 1)) nodes(i, j, :) = cell_oc; else % Randomly rotate the particle pair clockwise or % counterclockwise. r = rand; if (r < 0.5) % Counterclockwise. n_cell_oc(1) = cell_oc(6); n_cell_oc(2) = cell_oc(1); n_cell_oc(3) = cell_oc(2); n_cell_oc(4) = cell_oc(3); n_cell_oc(5) = cell_oc(4); n_cell_oc(6) = cell_oc(5); else % Clockwise. n_cell_oc(1) = cell_oc(2); n_cell_oc(2) = cell_oc(3); n_cell_oc(3) = cell_oc(4); n_cell_oc(4) = cell_oc(5); n_cell_oc(5) = cell_oc(6); n_cell_oc(6) = cell_oc(1); end nodes(i, j, :) = n_cell_oc; end end end end end % Carry out collisions along the top and bottom walls (no-slip). 
for (i = 1:1:numnodes_x) nodes(i, 1, :) = [nodes(i, 1, 4) nodes(i, 1, 5) nodes(i, 1, 6) nodes(i, 1, 1) nodes(i, 1, 2) nodes(i, 1, 3)]; nodes(i, numnodes_y, :) = [nodes(i, numnodes_y, 4) nodes(i, numnodes_y, 5) nodes(i, numnodes_y, 6) nodes(i, numnodes_y, 1) nodes(i, numnodes_y, 2) nodes(i, numnodes_y, 3)]; end % Carry out collisions at obstacle points (no-slip). for (i = 1:1:numnodes_x) for (j = 1:1:numnodes_y) if (obstacle(i, j) == 1) nodes(i, j, :) = [nodes(i, j, 4) nodes(i, j, 5) nodes(i, j, 6) nodes(i, j, 1) nodes(i, j, 2) nodes(i, j, 3)]; end end end % Create a new lattice which will hold the state of the current % lattice after propagation. n_nodes = zeros(numnodes_x, numnodes_y, 6); % Iterate over all the nodes, propagating the particles as we go. for (i = 1:1:numnodes_x) for(j = 1:1:numnodes_y) % Get the occupancy state of the current node. cell_oc = nodes(i, j, :); % Coordinates of the neighbor node. neighbor_x = 0; neighbor_y = 0; % Propagation in the 1-direction. neighbor_y = j; if (i == numnodes_x) neighbor_x = 1; else neighbor_x = i + 1; end n_cell_oc = n_nodes(neighbor_x, neighbor_y, :); n_cell_oc(1) = cell_oc(1); n_nodes(neighbor_x, neighbor_y, :) = n_cell_oc; % Propagation in the 2-direction. if (j ~= numnodes_y) neighbor_y = j + 1; if (mod(j, 2) == 0) if (i == numnodes_x) neighbor_x = 1; else neighbor_x = i + 1; end else neighbor_x = i; end n_cell_oc = n_nodes(neighbor_x, neighbor_y, :); n_cell_oc(2) = cell_oc(2); n_nodes(neighbor_x, neighbor_y, :) = n_cell_oc; end % Propagation in the 3-direction. if (j ~= numnodes_y) neighbor_y = j + 1; if (mod(j, 2) == 1) if (i == 1) neighbor_x = numnodes_x; else neighbor_x = i - 1; end else neighbor_x = i; end n_cell_oc = n_nodes(neighbor_x, neighbor_y, :); n_cell_oc(3) = cell_oc(3); n_nodes(neighbor_x, neighbor_y, :) = n_cell_oc; end % Propagation in the 4-direction. neighbor_y = j; if (i == 1) neighbor_x = numnodes_x; else neighbor_x = i - 1; end n_cell_oc = n_nodes(neighbor_x, neighbor_y, :); n_cell_oc(4) = cell_oc(4); n_nodes(neighbor_x, neighbor_y, :) = n_cell_oc; % Propagation in the 5-direction. if (j ~= 1) neighbor_y = j - 1; if (mod(j, 2) == 1) if (i == 1) neighbor_x = numnodes_x; else neighbor_x = i - 1; end else neighbor_x = i; end n_cell_oc = n_nodes(neighbor_x, neighbor_y, :); n_cell_oc(5) = cell_oc(5); n_nodes(neighbor_x, neighbor_y, :) = n_cell_oc; end % Propagation in the 6-direction. if (j ~= 1) neighbor_y = j - 1; if (mod(j, 2) == 0) if (i == numnodes_x) neighbor_x = 1; else neighbor_x = i + 1; end else neighbor_x = i; end n_cell_oc = n_nodes(neighbor_x, neighbor_y, :); n_cell_oc(6) = cell_oc(6); n_nodes(neighbor_x, neighbor_y, :) = n_cell_oc; end end end % Propagate the particles to their next nodes. nodes = n_nodes; % Print the current time step every so often so we know that the % program hasn't frozen or crashed. if (mod(t, 5) == 0) disp(t); end end % Subdivide the total domain into subdomains of size 32x32 for the % purposes of coarse-graining. See pg. 51. grain_size = 8; grain_x = numnodes_x / grain_size; grain_y = numnodes_y / grain_size; % Pre-allocate vectors for the averaged velocities. av_vel_x_coords = zeros(1, grain_x * grain_y); av_vel_y_coords = zeros(1, grain_x * grain_y); av_vel_x_comps = zeros(1, grain_x * grain_y); av_vel_y_comps = zeros(1, grain_x * grain_y); % Iterate over the entire domain, averaging and storing the results as % we go. currval = 1; for (i = 1:1:grain_x) % Calculate the lower and upper x-boundaries. 
x_bd_l = (i - 1)*grain_size + 1; x_bd_u = i*grain_size; for (j = 1:1:grain_y) % Calculate the lower and upper y-boundaries. y_bd_l = (j - 1)*grain_size + 1; y_bd_u = j*grain_size; % Get the number of particles moving in each direction in the % current subdomain. np = zeros(1, 6); np(1) = sum(sum(nodes(x_bd_l:1:x_bd_u, y_bd_l:1:y_bd_u, 1))); np(2) = sum(sum(nodes(x_bd_l:1:x_bd_u, y_bd_l:1:y_bd_u, 2))); np(3) = sum(sum(nodes(x_bd_l:1:x_bd_u, y_bd_l:1:y_bd_u, 3))); np(4) = sum(sum(nodes(x_bd_l:1:x_bd_u, y_bd_l:1:y_bd_u, 4))); np(5) = sum(sum(nodes(x_bd_l:1:x_bd_u, y_bd_l:1:y_bd_u, 5))); np(6) = sum(sum(nodes(x_bd_l:1:x_bd_u, y_bd_l:1:y_bd_u, 6))); % Compute the average velocity. av_vel = (1/(grain_size.^2))*(np(1)*c1 + np(2)*c2 + np(3)*c3 + np(4)*c4 + np(5)*c5 + np(6)*c6); % Store the velocity components. av_vel_x_comps(currval) = av_vel(1); av_vel_y_comps(currval) = av_vel(2); % Store the positional coordinates. av_vel_x_coords(currval) = i; av_vel_y_coords(currval) = j; currval = currval + 1; end end % Plot the average velocity field. quiver(av_vel_x_coords, av_vel_y_coords, av_vel_x_comps, av_vel_y_comps); % Plot the channel boundaries. hold on; plot([1; grain_x], [0.75; 0.75], 'k-'); hold on; plot([1; grain_x], [grain_y + 0.25; grain_y + .25], 'k-'); % Display the flow obstacle. obstacle_x = zeros(1, nnz(obstacle)); obstacle_y = zeros(1, nnz(obstacle)); k = 1; for (i = 1:1:numnodes_x) for (j = 1:1:numnodes_y) if (obstacle(i, j) == 1) obstacle_x(k) = 0.5 + (numnodes_x ./ (grain_size .* (numnodes_x - 1))) .* (i - 1); obstacle_y(k) = 0.5 + (numnodes_y ./ (grain_size .* (numnodes_y - 1))) .* (j - 1); k = k + 1; end end end hold on; plot(obstacle_x, obstacle_y, 'r-'); axis equal; toc; % Print the time it took to execute. end
## References 1. Wolf-Gladrow, Dieter A. (2005). Lattice-Gas Cellular Automata and Lattice Boltzmann Models – An Introduction. Berlin: Springer.
# On (a generalization of) the Gauss Circle Problem

Most (if not all) references I have read about the Gauss Circle Problem that prove a bound below $O(R^{2/3})$ reduce the GCP to the Dirichlet Divisor Problem via the well-known expression for $r_2(n)$, the number of ways of writing a natural number $n$ as the sum of two squares. My question is then: what happens if the circle is not centered at the origin (or any lattice point)? In this case it seems that there is no exact number-theoretic formula to reduce one problem to the other, and we have to tackle the GCP directly. What are the recent results in this direction?

Addendum: I vaguely recall reading somewhere about the same problem with the circle replaced by a uniformly convex planar domain, in which case the exponent is not as good, but still better than $O(R^{2/3})$. But now I'm unable to find any reference to it, so I added a "reference request" tag.

• I don't think the GCP is reduced to the DDP. Instead, the GCP is equivalent to a problem quite similar to the DDP. The reason is simple and you mention it as well: $r_2(n)$ is a divisor sum. – GH from MO Dec 29 '15 at 20:56
• @GHfromMO Maybe I didn't read in detail, but if the center of the circle is not a lattice point it is not that easy to relate it to divisor sums, right? – Fan Zheng Dec 29 '15 at 22:08
• Consider rotating the offset circle around the lattice point closest to its center (or look at lattice-centered concentric inscribed and circumscribed circles). My feeling is that the offset problem differs from the original problem by a divisor sum and an amount that is easily bounded. Gerhard "Or I'm Talking In Circles" Paseman, 2015.12.29 – Gerhard Paseman Dec 29 '15 at 22:19
• Related MO question: "Lattice points on the boundary of an ellipse", whose answer cites Bombieri-Pila. – Joseph O'Rourke Dec 30 '15 at 2:39

Indeed, Huxley has considered a more general problem and obtained analogs of what was known for the usual divisor problem. Huxley considers a closed convex curve $C$ enclosing an area $A$, and the dilate $MC$ of $C$ by a factor $M$. Place this dilate in any way you like (translation or rotation) on the coordinate plane, and count the lattice points inside it. Then, under suitable regularity assumptions on the boundary curve $C$, Huxley obtains estimates for the difference between the number of lattice points and the expected number $AM^2$. His bounds depend on the original shape $C$, but not on the embedding of $MC$ in the plane. The results are in three papers by him, Exponential Sums and Lattice Points I, II, and III (developing a method of Bombieri, Iwaniec and Mozzochi), and are also mentioned in a recent survey, see Huxley, which has further references. The strongest result for the translated circle of radius $R$ has an error term of $R^{0.6298\ldots}$ (see page 593 of the third paper).
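To get a concrete feel for the discrepancy being discussed, one can count lattice points in a translated circle by brute force and compare with the area $\pi R^2$. The Python sketch below does only that; it illustrates the quantity in question and has no bearing on the analytic bounds cited in the answer (the centre (0.3, 0.7) is an arbitrary choice made for this example).

```python
import math

def lattice_count(radius, cx, cy):
    """Count integer points (m, n) with (m-cx)^2 + (n-cy)^2 <= radius^2."""
    count = 0
    for m in range(math.floor(cx - radius), math.ceil(cx + radius) + 1):
        # For each column m, the admissible n form an interval; count it directly.
        h = radius * radius - (m - cx) ** 2
        if h < 0:
            continue
        half = math.sqrt(h)
        count += math.floor(cy + half) - math.ceil(cy - half) + 1
    return count

for R in [10, 100, 1000]:
    N = lattice_count(R, 0.3, 0.7)      # circle centred off the lattice
    print(R, N, N - math.pi * R * R)    # the discrepancy grows much slower than R
```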
# Man1 - grep.1 ## NAME grep, egrep, fgrep - print lines that match patterns ## SYNOPSIS grep [/OPTION/. . .] PATTERNS [/FILE/. . .] grep [/OPTION/. . .] -e PATTERNS . . . [/FILE/. . .] grep [/OPTION/. . .] -f PATTERN_FILE . . . [/FILE/. . .] ## DESCRIPTION grep searches for PATTERNS in each FILE. PATTERNS is one or more patterns separated by newline characters, and grep prints each line that matches a pattern. Typically PATTERNS should be quoted when grep is used in a shell command. A FILE of “*-*” stands for standard input. If no FILE is given, recursive searches examine the working directory, and nonrecursive searches read standard input. In addition, the variant programs egrep and fgrep are the same as grep -E and grep -F, respectively. These variants are deprecated, but are provided for backward compatibility. ## OPTIONS ### Generic Program Information - -help Output a usage message and exit. -V, - -version Output the version number of grep and exit. ### Pattern Syntax -E, - -extended-regexp Interpret PATTERNS as extended regular expressions (EREs, see below). -F, - -fixed-strings Interpret PATTERNS as fixed strings, not regular expressions. -G, - -basic-regexp Interpret PATTERNS as basic regular expressions (BREs, see below). This is the default. -P, - -perl-regexp Interpret I<PATTERNS> as Perl-compatible regular expressions (PCREs). This option is experimental when combined with the -z (- -null-data) option, and grep -P may warn of unimplemented features. ### Matching Control -e*/ PATTERNS/, - -regexp=*/PATTERNS/ Use PATTERNS as the patterns. If this option is used multiple times or is combined with the -f (- -file) option, search for all patterns given. This option can be used to protect a pattern beginning with “-”. -f*/ FILE/, - -file=*/FILE/ Obtain patterns from FILE, one per line. If this option is used multiple times or is combined with the -e (- -regexp) option, search for all patterns given. The empty file contains zero patterns, and therefore matches nothing. -i, - -ignore-case Ignore case distinctions in patterns and input data, so that characters that differ only in case match each other. - -no-ignore-case Do not ignore case distinctions in patterns and input data. This is the default. This option is useful for passing to shell scripts that already use -i, to cancel its effects because the two options override each other. -v, - -invert-match Invert the sense of matching, to select non-matching lines. -w, - -word-regexp Select only those lines containing matches that form whole words. The test is that the matching substring must either be at the beginning of the line, or preceded by a non-word constituent character. Similarly, it must be either at the end of the line or followed by a non-word constituent character. Word-constituent characters are letters, digits, and the underscore. This option has no effect if -x is also specified. -x, - -line-regexp Select only those matches that exactly match the whole line. For a regular expression pattern, this is like parenthesizing the pattern and then surrounding it with ^ and . -y Obsolete synonym for -i. ### General Output Control -c, - -count Suppress normal output; instead print a count of matching lines for each input file. With the -v, - -invert-match option (see below), count non-matching lines. 
- -color[*=/WHEN/*], - -colour[*=/WHEN/*] Surround the matched (non-empty) strings, matching lines, context lines, file names, line numbers, byte offsets, and separators (for fields and groups of context lines) with escape sequences to display them in color on the terminal. The colors are defined by the environment variable GREP_COLORS. The deprecated environment variable GREP_COLOR is still supported, but its setting does not have priority. WHEN is never, always, or auto. -L, - -files-without-match Suppress normal output; instead print the name of each input file from which no output would normally have been printed. -l, - -files-with-matches Suppress normal output; instead print the name of each input file from which output would normally have been printed. Scanning each input file stops upon first match. -m*/ NUM/, - -max-count=*/NUM/ Stop reading a file after NUM matching lines. If the input is standard input from a regular file, and NUM matching lines are output, grep ensures that the standard input is positioned to just after the last matching line before exiting, regardless of the presence of trailing context lines. This enables a calling process to resume a search. When grep stops after NUM matching lines, it outputs any trailing context lines. When the -c or - -count option is also used, grep does not output a count greater than NUM. When the -v or - -invert-match option is also used, grep stops after outputting NUM non-matching lines. -o, - -only-matching Print only the matched (non-empty) parts of a matching line, with each such part on a separate output line. -q, - -quiet, - -silent Quiet; do not write anything to standard output. Exit immediately with zero status if any match is found, even if an error was detected. Also see the -s or - -no-messages option. -s, - -no-messages Suppress error messages about nonexistent or unreadable files. ### Output Line Prefix Control -b, - -byte-offset Print the 0-based byte offset within the input file before each line of output. If -o (- -only-matching) is specified, print the offset of the matching part itself. -H, - -with-filename Print the file name for each match. This is the default when there is more than one file to search. This is a GNU extension. -h, - -no-filename Suppress the prefixing of file names on output. This is the default when there is only one file (or only standard input) to search. *- -label=*/LABEL/ Display input actually coming from standard input as input coming from file LABEL. This can be useful for commands that transform a file’s contents before searching, e.g., gzip -cd foo.gz | grep - -label=foo -H ’some pattern’. See also the -H option. -n, - -line-number Prefix each line of output with the 1-based line number within its input file. -T, - -initial-tab Make sure that the first character of actual line content lies on a tab stop, so that the alignment of tabs looks normal. This is useful with options that prefix their output to the actual content: -H,*-n*, and -b. In order to improve the probability that lines from a single file will all start at the same column, this also causes the line number and byte offset (if present) to be printed in a minimum size field width. -Z, - -null Output a zero byte (the ASCII NUL character) instead of the character that normally follows a file name. For example, grep -lZ outputs a zero byte after each file name instead of the usual newline. This option makes the output unambiguous, even in the presence of file names containing unusual characters like newlines. 
This option can be used with commands like find -print0, perl -0, sort -z, and xargs -0 to process arbitrary file names, even those that contain newline characters. ### Context Line Control -A*/ NUM/, - -after-context=*/NUM/ Print NUM lines of trailing context after matching lines. Places a line containing a group separator (- -) between contiguous groups of matches. With the -o or - -only-matching option, this has no effect and a warning is given. -B*/ NUM/, - -before-context=*/NUM/ Print NUM lines of leading context before matching lines. Places a line containing a group separator (- -) between contiguous groups of matches. With the -o or - -only-matching option, this has no effect and a warning is given. -C*/ NUM/, -NUM, - -context=*/NUM/ Print NUM lines of output context. Places a line containing a group separator (- -) between contiguous groups of matches. With the -o or - -only-matching option, this has no effect and a warning is given. *- -group-separator=*/SEP/ When -A, -B, or -C are in use, print SEP instead of - - between groups of lines. - -no-group-separator When -A, -B, or -C are in use, do not print a separator between groups of lines. ### File and Directory Selection -a, - -text Process a binary file as if it were text; this is equivalent to the - -binary-files=text option. *- -binary-files=*/TYPE/ If a file’s data or metadata indicate that the file contains binary data, assume that the file is of type TYPE. Non-text bytes indicate binary data; these are either output bytes that are improperly encoded for the current locale, or null input bytes when the -z option is not given. By default, TYPE is binary, and grep suppresses output after null input binary data is discovered, and suppresses output lines that contain improperly encoded data. When some output is suppressed, grep follows any output with a one-line message saying that a binary file matches. If TYPE is without-match, when grep discovers null input binary data it assumes that the rest of the file does not match; this is equivalent to the -I option. If TYPE is text, grep processes a binary file as if it were text; this is equivalent to the -a option. When type is binary, grep may treat non-text bytes as line terminators even without the -z option. This means choosing binary versus text can affect whether a pattern matches a file. For example, when type is binary the pattern q might match q immediately followed by a null byte, even though this is not matched when type is text. Conversely, when type is binary the pattern . (period) might not match a null byte. Warning: The -a option might output binary garbage, which can have nasty side effects if the output is a terminal and if the terminal driver interprets some of it as commands. On the other hand, when reading files whose text encodings are unknown, it can be helpful to use -a or to set LC_ALL=’C’ in the environment, in order to find more matches even if the matches are unsafe for direct display. -D*/ ACTION/, - -devices=*/ACTION/ If an input file is a device, FIFO or socket, use ACTION to process it. By default, ACTION is read, which means that devices are read just as if they were ordinary files. If ACTION is skip, devices are silently skipped. -d*/ ACTION/, - -directories=*/ACTION/ If an input file is a directory, use ACTION to process it. By default, ACTION is read, i.e., read directories just as if they were ordinary files. If ACTION is skip, silently skip directories. 
If ACTION is recurse, read all files under each directory, recursively, following symbolic links only if they are on the command line. This is equivalent to the -r option. *- -exclude=*/GLOB/ Skip any command-line file with a name suffix that matches the pattern GLOB, using wildcard matching; a name suffix is either the whole name, or a trailing part that starts with a non-slash character immediately after a slash (/) in the name. When searching recursively, skip any subfile whose base name matches GLOB; the base name is the part after the last slash. A pattern can use *, ?, and [. . .*]* as wildcards, and \ to quote a wildcard or backslash character literally. *- -exclude-from=*/FILE/ Skip files whose base name matches any of the file-name globs read from FILE (using wildcard matching as described under - -exclude). *- -exclude-dir=*/GLOB/ Skip any command-line directory with a name suffix that matches the pattern GLOB. When searching recursively, skip any subdirectory whose base name matches GLOB. Ignore any redundant trailing slashes in GLOB. -I Process a binary file as if it did not contain matching data; this is equivalent to the - -binary-files=without-match option. *- -include=*/GLOB/ Search only files whose base name matches GLOB (using wildcard matching as described under - -exclude). If contradictory - -include and - -exclude options are given, the last matching one wins. If no - -include or - -exclude options match, a file is included unless the first such option is - -include. -r, - -recursive Read all files under each directory, recursively, following symbolic links only if they are on the command line. Note that if no file operand is given, B<grep> searches the working directory. This is equivalent to the -d recurse option. -R, - -dereference-recursive ### Other Options - -line-buffered Use line buffering on output. This can cause a performance penalty. -U, - -binary ### The Backslash Character and Special Expressions The symbols \< and \> respectively match the empty string at the beginning and end of a word. The symbol \b matches the empty string at the edge of a word, and \B matches the empty string provided it’s not at the edge of a word. The symbol \w is a synonym for [_[:alnum:]] and \W is a synonym for [^_[:alnum:]]. ### Repetition A regular expression may be followed by one of several repetition operators: ? The preceding item is optional and matched at most once. * The preceding item will be matched zero or more times. + The preceding item will be matched one or more times. {*/n/}* The preceding item is matched exactly n times. {*/n/,}* The preceding item is matched n or more times. {,*/m/}* The preceding item is matched at most m times. This is a GNU extension. {*/n/,*/m/*}* The preceding item is matched at least n times, but not more than m times. ### Concatenation Two regular expressions may be concatenated; the resulting regular expression matches any string formed by concatenating two substrings that respectively match the concatenated expressions. ### Alternation Two regular expressions may be joined by the infix operator |; the resulting regular expression matches any string matching either alternate expression. ### Precedence Repetition takes precedence over concatenation, which in turn takes precedence over alternation. A whole expression may be enclosed in parentheses to override these precedence rules and form a subexpression. 
### Back-references and Subexpressions The back-reference *\*/n/ , where n is a single digit, matches the substring previously matched by the /n/th parenthesized subexpression of the regular expression. ### Basic vs Extended Regular Expressions In basic regular expressions the meta-characters ?, +, {, |, (, and ) lose their special meaning; instead use the backslashed versions \?, \+, \{, \|, $$, and$$. ## EXIT STATUS Normally the exit status is 0 if a line is selected, 1 if no lines were selected, and 2 if an error occurred. However, if the -q or - -quiet or - -silent is used and a line is selected, the exit status is 0 even if an error occurred. ## ENVIRONMENT The behavior of grep is affected by the following environment variables. The locale for category LC_*/foo/ is specified by examining the three environment variables *LC_ALL, LC_/foo/, LANG, in that order. The first of these variables that is set specifies the locale. For example, if LC_ALL is not set, but LC_MESSAGES is set to pt_BR, then the Brazilian Portuguese locale is used for the LC_MESSAGES category. The C locale is used if none of these environment variables are set, if the locale catalog is not installed, or if grep was not compiled with national language support (NLS). The shell command locale -a lists locales that are currently available. GREP_COLOR This variable specifies the color used to highlight matched (non-empty) text. It is deprecated in favor of GREP_COLORS, but still supported. The mt, ms, and mc capabilities of GREP_COLORS have priority over it. It can only specify the color used to highlight the matching non-empty text in any matching line (a selected line when the -v command-line option is omitted, or a context line when -v is specified). The default is 01;31, which means a bold red foreground text on the terminal’s default background. GREP_COLORS Specifies the colors and other attributes used to highlight various parts of the output. Its value is a colon-separated list of capabilities that defaults to ms=01;31:mc=01;31:sl=:cx=:fn=35:ln=32:bn=32:se=36 with the rv and ne boolean capabilities omitted (i.e., false). Supported capabilities are as follows. sl= SGR substring for whole selected lines (i.e., matching lines when the -v command-line option is omitted, or non-matching lines when -v is specified). If however the boolean rv capability and the -v command-line option are both specified, it applies to context matching lines instead. The default is empty (i.e., the terminal’s default color pair). cx= SGR substring for whole context lines (i.e., non-matching lines when the -v command-line option is omitted, or matching lines when -v is specified). If however the boolean rv capability and the -v command-line option are both specified, it applies to selected non-matching lines instead. The default is empty (i.e., the terminal’s default color pair). rv Boolean value that reverses (swaps) the meanings of the sl= and cx= capabilities when the -v command-line option is specified. The default is false (i.e., the capability is omitted). mt=01;31 SGR substring for matching non-empty text in any matching line (i.e., a selected line when the -v command-line option is omitted, or a context line when -v is specified). Setting this is equivalent to setting both ms= and mc= at once to the same value. The default is a bold red text foreground over the current line background. ms=01;31 SGR substring for matching non-empty text in a selected line. (This is only used when the -v command-line option is omitted.) 
The effect of the sl= (or cx= if rv) capability remains active when this kicks in. The default is a bold red text foreground over the current line background. mc=01;31 SGR substring for matching non-empty text in a context line. (This is only used when the -v command-line option is specified.) The effect of the cx= (or sl= if rv) capability remains active when this kicks in. The default is a bold red text foreground over the current line background. fn=35 SGR substring for file names prefixing any content line. The default is a magenta text foreground over the terminal’s default background. ln=32 SGR substring for line numbers prefixing any content line. The default is a green text foreground over the terminal’s default background. bn=32 SGR substring for byte offsets prefixing any content line. The default is a green text foreground over the terminal’s default background. se=36 SGR substring for separators that are inserted between selected line fields (:), between context line fields, (-), and between groups of adjacent lines when nonzero context is specified (- -). The default is a cyan text foreground over the terminal’s default background. ne Boolean value that prevents clearing to the end of line using Erase in Line (EL) to Right (\33[K) each time a colorized item ends. This is needed on terminals on which EL is not supported. It is otherwise useful on terminals for which the back_color_erase (bce) boolean terminfo capability does not apply, when the chosen highlight colors do not affect the background, or when EL is too slow or causes too much flicker. The default is false (i.e., the capability is omitted). Note that boolean capabilities have no =. . . part. They are omitted (i.e., false) by default and become true when specified. See the Select Graphic Rendition (SGR) section in the documentation of the text terminal that is used for permitted values and their meaning as character attributes. These substring values are integers in decimal representation and can be concatenated with semicolons. grep takes care of assembling the result into a complete SGR sequence (\33[. . .*m*). Common values to concatenate include 1 for bold, 4 for underline, 5 for blink, 7 for inverse, 39 for default foreground color, 30 to 37 for foreground colors, 90 to 97 for 16-color mode foreground colors, 38;5;0 to 38;5;255 for 88-color and 256-color modes foreground colors, 49 for default background color, 40 to 47 for background colors, 100 to 107 for 16-color mode background colors, and 48;5;0 to 48;5;255 for 88-color and 256-color modes background colors. LC_ALL, LC_COLLATE, LANG These variables specify the locale for the LC_COLLATE category, which determines the collating sequence used to interpret range expressions like [a-z]. LC_ALL, LC_CTYPE, LANG These variables specify the locale for the LC_CTYPE category, which determines the type of characters, e.g., which characters are whitespace. This category also determines the character encoding, that is, whether text is encoded in UTF-8, ASCII, or some other encoding. In the C or POSIX locale, all characters are encoded as a single byte and every byte is a valid character. LC_ALL, LC_MESSAGES, LANG These variables specify the locale for the LC_MESSAGES category, which determines the language that grep uses for messages. The default C locale uses American English messages. POSIXLY_CORRECT If set, grep behaves as POSIX requires; otherwise, grep behaves more like other GNU programs. 
POSIX requires that options that follow file names must be treated as file names; by default, such options are permuted to the front of the operand list and are treated as options. Also, POSIX requires that unrecognized options be diagnosed as “illegal”, but since they are not really against the law the default is to diagnose them as “invalid”. POSIXLY_CORRECT also disables */N/*_GNU_nonoption_argv_flags, described below. */N/*_GNU_nonoption_argv_flags (Here N is grep’s numeric process ID.) If the /i/th character of this environment variable’s value is 1, do not consider the /i/th operand of grep to be an option, even if it appears to be one. A shell can put this variable in the environment for each command it runs, specifying which operands are the results of file name wildcard expansion and therefore should not be treated as options. This behavior is available only with the GNU C library, and only when POSIXLY_CORRECT is not set. ## NOTES This man page is maintained only fitfully; the full documentation is often more up-to-date. Copyright 1998-2000, 2002, 2005-2021 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. ## BUGS ### Reporting Bugs Email bug reports to the bug-reporting address. An email archive and a bug tracker are available. ### Known Bugs Large repetition counts in the {*/n/,*/m/*}* construct may cause grep to use lots of memory. In addition, certain other obscure regular expressions require exponential time and space, and may cause grep to run out of memory. Back-references are very slow, and may require exponential time. ## EXAMPLE The following example outputs the location and contents of any line containing “f” and ending in “.c”, within all files in the current directory whose names contain “g” and end in “.h”. The -n option outputs line numbers, the -- argument treats expansions of “*g*.h” starting with “-” as file names not options, and the empty file /dev/null causes file names to be output even if only one file name happens to be of the form “*g*.h”. $grep -n -- 'f.*\.c$' *g*.h /dev/null argmatch.h:1:/* definitions and prototypes for argmatch.c The only line that matches is line 1 of argmatch.h. Note that the regular expression syntax used in the pattern differs from the globbing syntax that the shell uses to match file names. ### Regular Manual Pages *awk*(1), *cmp*(1), *diff*(1), *find*(1), *perl*(1), *sed*(1), *sort*(1), *xargs*(1), *read*(2), *pcre*(3), *pcresyntax*(3), *pcrepattern*(3), *terminfo*(5), *glob*(7), *regex*(7) ### Full Documentation A complete manual is available. If the info and grep programs are properly installed at your site, the command info grep
proofpile-shard-0030-187
{ "provenance": "003.jsonl.gz:188" }
What is the origin of the use of “g” for a Riemannian metric? I am asking about the reason for the use of this letter, if known, as well as the initial occasion of its use. Ideas that have been suggested concerning the former include: • That it stands for geometry or Geometrie • That it stands for some other German word • That it is in homage to Gauss • That it refers to Gravitation • That it refers to the Gram matrix As for the latter, I'm guessing it originates somewhere between Riemann and Einstein. You are guessing correctly. Riemann did not use $g$ for the metric tensor, he writes things like $ds^2$ or $\sum dx^2$ instead, see his 1854 lecture "On the Hypotheses which lie at the Bases of Geometry" (1854).
proofpile-shard-0030-188
{ "provenance": "003.jsonl.gz:189" }
probsoln v3.04: creating problem sheets optionally with solutions
2012-08-23

1 Introduction

The probsoln package is designed for teachers or lecturers who want to create problem sheets for their students. This package was designed with mathematics problems in mind, but can be used for other subjects as well. The idea is to create a file containing a large number of problems with their solutions which can be read in by LaTeX, and then select a number of problems to typeset. This means that once the database has been set up, each year you can easily create a new problem sheet that is sufficiently different from the previous year, thus preventing the temptation of current students seeking out the previous year's students, and checking out their answers. There is also an option that can be passed to the package to determine whether or not the solutions should be printed. In this way, one file can either produce the student's version or the teacher's version.

Top 2 Package Options

The following options may be passed to this package:

draft  Display the label and dataset name when a problem is used
final  Don't display label and dataset name when a problem is used
usedefaultargs  Make \thisproblem use the default arguments supplied in the problem definition.
nousedefaultargs  Make \thisproblem prompt for problem arguments (default).

Top 3 Verbatim

As from version 3.02, problems and solutions may contain verbatim text, but you must use the fragile (or fragile=true) option for the associated environments. Alternatively, if most of your problems contain verbatim, you can globally set this option using:

\setkeys{probsoln}{fragile}

You can switch off this option using fragile=false. The fragile option writes information to a temporary file. This defaults to \jobname.vrb but the name may be changed. The extension (.vrb) is given by \ProbSolnFragileExt. The base name (\jobname) is given by \ProbSolnFragileFile.

Top 4 Showing and Hiding Solutions

In addition to the answers and noanswers package options, it is also possible to show or suppress the solutions using \showanswers and \hideanswers respectively. The boolean variable showanswers determines whether the answers should be displayed. You can use this value with the ifthen package to specify different text depending on whether the solutions should be displayed; a short sketch is given at the end of this section.

For longer passages, you can use the environments

onlyproblem  \begin{onlyproblem}[option]

and

onlysolution  \begin{onlysolution}[option]

For example:

\begin{onlyproblem}%
What is the derivative of $f(x) = x^2$?
\end{onlyproblem}%
\begin{onlysolution}%
$f'(x) = 2x$
\end{onlysolution}

The above will only display the question if showanswers is false and will only display the solution if showanswers is true. If you want the question to appear in the answer sheet as well as the solution, then don't put the question in the onlyproblem environment:

What is the derivative of $f(x) = x^2$?
\begin{onlysolution}%
Solution: $f'(x) = 2x$
\end{onlysolution}

If you want to include verbatim text in the body of onlyproblem or onlysolution, you need to specify fragile in the optional argument of the environment. (See §3 Verbatim for further details.) If you use onlysolution within the defproblem environment, the problem will be tagged as having a solution and will be added to the list used by \foreachsolution. The optional argument of onlysolution (and onlyproblem) is inherited from the parent defproblem setting.
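The sketch referred to above is purely illustrative; it assumes the ifthen package is loaded and uses the showanswers boolean described in this section:

% Print different text depending on whether solutions are shown.
\ifthenelse{\boolean{showanswers}}%
  {This version includes the solutions.}%
  {Solutions are omitted from this version.}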
Top 5 General Formatting Commands

The commands and environments described in this section are provided to assist formatting problems and their solutions.

solution  \begin{solution}text\end{solution}

By default, this is equivalent to

\par\noindent\textbf{\solutionname}: text

where \solutionname defaults to "Solution". Note that you must place the solution environment inside the onlysolution environment or between \ifshowanswers ... \fi to ensure that it is suppressed when the solutions are not wanted. (See §4 Showing and Hiding Solutions.) Note that the probsoln package will only define the solution environment if it is not already defined.

textenum  \begin{textenum}...\end{textenum}

The textenum environment is like the enumerate environment but is in-line. It uses the same counter that the enumerate environment would use at that level, so the question can be compact but the answer can use enumerate instead. For example:

\begin{onlyproblem}%
Differentiate the following:
\begin{textenum}
\item $f(x)=2^x$;
\item $f(x)=\cot(x)$
\end{textenum}
\end{onlyproblem}
\begin{onlysolution}
\begin{enumerate}
\item
\begin{align*}
f(x) &= 2^x = \exp(\ln(2^x)) = \exp(x\ln 2)\\
f'(x) &= \exp(x\ln 2)\times \ln 2\\
&= f(x)\ln 2
\end{align*}
\item
\begin{align*}
f(x) &= \cot(x) = (\tan(x))^{-1}\\
f'(x) &= -(\tan(x))^{-2}\times\sec^2(x)\\
&= -\csc^2 x
\end{align*}
\end{enumerate}
\end{onlysolution}

In this example, the items in the question are brief, so an enumerate environment would result in a lot of unnecessary white space, but the answers require more space, so an enumerate environment is more appropriate. Since the textenum environment uses the same counters as the enumerate environment, the question and answer sheets use consistent labelling. Note that there are other packages available on CTAN that you can use to create in-line lists. Check the TeX Catalogue for further details.

\correctitem \incorrectitem

You can use the commands \correctitem and \incorrectitem in place of \item. If the solutions are suppressed, these commands behave in the same way as \item; otherwise they format the item label using one of the commands:

\correctitemformat{label}
\incorrectitemformat{label}

For example:

Under which of the following functions does $S=\{a_1,a_2\}$ become a probability space?
\begin{enumerate}
\incorrectitem $P(a_1)=\frac{1}{3}$, $P(a_2)=\frac{1}{2}$
\correctitem $P(a_1)=\frac{3}{4}$, $P(a_2)=\frac{1}{4}$
\correctitem $P(a_1)=1$, $P(a_2)=0$
\incorrectitem $P(a_1)=\frac{5}{4}$, $P(a_2)=-\frac{1}{4}$
\end{enumerate}

The default definition of \correctitemformat puts a frame around the label.

Top 6 Defining a Problem

It is possible to construct a problem sheet with solutions using the commands described in the previous sections; however, it is also possible to define a set of problems for later use. In this way you can create an external file containing many problems, some or all of which can be loaded and used in a document. The probsoln package has a default data set labelled "default" in which you can store problems. Alternatively, you can create multiple data sets. You can then iterate through each problem in a problem set. You can use a previously defined problem more than once, which means that by judicious use of onlyproblem, onlysolution or the showanswers boolean variable in conjunction with \showanswers and \hideanswers, you can print the solutions in a different location to the questions (for example in an appendix).
defproblem  \begin{defproblem}[n][default args]{label}[option] definition \end{defproblem}

This defines the problem whose label is given by label. The label must be unique for a given data set and should not contain active characters or a comma. (Active characters include the special characters such as $ and &, but some packages may make other symbols active, such as the colon (:) character. For example, the ngerman and babel packages make certain punctuation active. Check the relevant package documentation for details.) The final optional argument option may be fragile to indicate that the problem contains verbatim text. Any occurrences of onlyproblem or onlysolution contained within defproblem inherit this option from defproblem. (See §3 Verbatim for further details.)

If defproblem occurs in the document or is included via \input or \include, then the problem will be added to the default data set. If defproblem occurs in an external file that is loaded using one of the commands defined in §8 Loading Problems From External Files, then the problem will be added to the specified data set.

The contents of the defproblem environment should be the text that defines the problem. This may include any of the commands defined in §4 Showing and Hiding Solutions and §5 General Formatting Commands. The problem may optionally take n arguments (where n is from 0 to 9). The arguments can be referenced in the definition via #1, ..., #9. If n is omitted then the problem doesn't take any arguments. The following example defines a problem with one argument:

\begin{defproblem}[1]{diffsin}
Differentiate $f(x)=\sin(#1x)$.
\begin{onlysolution}%
\begin{solution}$f'(x) = #1\cos(#1x)$\end{solution}
\end{onlysolution}
\end{defproblem}

The second optional argument default args supplies default problem arguments that will automatically be used within \thisproblem when used in \foreachproblem in conjunction with the package option usedefaultargs. (See §9 Iterating Through Datasets.) For example:

\begin{defproblem}[1][{2}]{diffsin}
Differentiate $f(x)=\sin(#1x)$.
\begin{onlysolution}%
\begin{solution}$f'(x) = #1\cos(#1x)$\end{solution}
\end{onlysolution}
\end{defproblem}

\newproblem  \newproblem[n][default args]{label}{problem}{solution}

This is a shortcut command for:

\begin{defproblem}[n][default args]{label}%
problem%
\begin{onlysolution}%
\begin{solution}%
solution%
\end{solution}%
\end{onlysolution}%
\end{defproblem}

For example:

\newproblem[1]{diffcos}{%
$$f(x) = \cos(#1x)$$
}%
{%
$$f'(x) = -#1\sin(#1x)$$
}

is equivalent to

\begin{defproblem}[1]{diffcos}%
$$f(x) = \cos(#1x)$$
\begin{onlysolution}%
\begin{solution}%
$$f'(x) = -#1\sin(#1x)$$
\end{solution}%
\end{onlysolution}%
\end{defproblem}

(In this example, the argument will need to be a positive number to avoid a double minus in the answer. If you want to perform floating point arithmetic on the arguments, then try the fp or pgfmath packages.)

Alternatively, if you want to supply default arguments to use when iterating through problems with \foreachproblem:

\newproblem[1][{3}]{diffsin}{%
$$f(x) = \sin(#1x)$$
}%
{%
$$f'(x) = #1\cos(#1x)$$
}

\newproblem*  \newproblem*[n][default args]{label}{definition}

This is a shortcut for:

\begin{defproblem}[n][default args]{label}%
definition%
\end{defproblem}

Note that you can't use verbatim text with \newproblem or \newproblem*. Use the defproblem environment instead with the fragile option.
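As a small illustration of the fragile option mentioned above (this sketch is not from the manual; the label and the verbatim content are invented):

% Sketch only: a problem containing verbatim text must be marked fragile.
\begin{defproblem}{hellocode}[fragile]
What does the following command print?
\begin{verbatim}
echo hello
\end{verbatim}
\begin{onlysolution}%
\begin{solution}It prints \texttt{hello}.\end{solution}
\end{onlysolution}
\end{defproblem}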
Top 7 Using a Problem

Once you have defined a problem using defproblem or \newproblem (see §6 Defining a Problem), you can later display the problem using:

\useproblem[data set]{label}{arg1}...{argN}

where data set is the name of the data set that contains the problem (the default data set is used if omitted), label is the label identifying the required problem and arg1, ..., argN are the arguments to pass to the problem, if the problem was defined to have arguments (where N is the number of arguments specified when the problem was defined). For example, in the previous section the problem diffcos was defined to have one argument, so it can be used as follows:

\useproblem{diffcos}{3}

This will be equivalent to:

$$f(x) = \cos(3x)$$
\begin{onlysolution}%
\begin{solution}%
$$f'(x) = -3\sin(3x)$$
\end{solution}%
\end{onlysolution}%

Top 8 Loading Problems From External Files

You can store all your problem definitions (see §6 Defining a Problem) in an external file. These problems can all be appended to the default data set by including the file via \input or they can be appended to other data sets using one of the commands described below. Once you have loaded all the required problems, you can iterate through the data sets using the commands described in §9 Iterating Through Datasets. Note that the commands below will create a new data set if the named data set doesn't exist.

\loadallproblems  \loadallproblems[data set]{filename}

This will load all problems defined in filename and append them to the specified data set, in the order in which they are defined in the file. If data set is omitted, the default data set will be used. If data set doesn't exist, it will be created.

\loadselectedproblems  \loadselectedproblems[data set]{labels}{filename}

This is like \loadallproblems, but only those problems whose label is listed in the comma-separated list labels are loaded. For example, if I have some problems defined in the file derivatives.tex, then

\loadselectedproblems{diffsin,diffcos}{derivatives}

will only load the problems whose labels are diffsin and diffcos, respectively. All the other problems in the file will remain undefined.

\loadexceptproblems  \loadexceptproblems[data set]{exception list}{filename}

This is the reverse of \loadselectedproblems. This loads all problems except those whose labels are listed in exception list.

\loadrandomproblems  \loadrandomproblems[data set]{n}{filename}

This randomly loads n problems from filename and adds them to the given data set. If data set is omitted, the default data set is assumed. Note that the problems will be added to the data set in a random order, not in the order in which they were defined. There must be at least n problems defined in filename.

\loadrandomexcept  \loadrandomexcept[data set]{n}{filename}{exception list}

This is similar to \loadrandomproblems except that it won't load those problems whose labels are listed in exception list. If you want to automatically exclude problems included in previous documents, see §8.1 Randomly Selecting Problems Not Selected in Previous Documents.

Note that the random number generator has been modified in version 3.01 in order to fix a bug. If you want to ensure that your random numbers are compatible with earlier versions, you can switch to the old generator using \PSNuseoldrandom.

It is generally not a good idea to place anything in filename that is not inside the body of defproblem or in the arguments to \newproblem or \newproblem*.
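The loading commands that were not illustrated above follow the same pattern; a few sketches (the data set name and counts are arbitrary, and derivatives.tex is the example file already used):

\loadexceptproblems{diffsin,diffcos}{derivatives}  % everything except these two labels
\loadrandomproblems[deriv]{10}{derivatives}        % 10 random problems into data set 'deriv'
\loadrandomexcept[deriv]{5}{derivatives}{diffsin}  % 5 random problems, never diffsin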
All the commands in this section input the external file within a local scope, so command definitions would need to be made global to have any effect. In addition, \loadrandomproblems has to load each file twice, which means that anything outside a problem definition will be parsed twice.

Top 8.1 Randomly Selecting Problems Not Selected in Previous Documents

Suppose you have a large set of questions that you want to randomly select for assignments and exams. The chances are, you don't want to include questions that have been previously set for, say, the last three years. That is, you don't want to select questions the students may already have seen. As from version 3.03, you can now do this.

The probsoln package defaults to the UK academic year, which starts in September. If this isn't appropriate, you can change it using:

\SetStartMonth{n}

where n is the number of the month (1 = January, 2 = February, etc.). The start year is the calendar year in effect when the academic year started. For example, if this is the academic year 2011/12, then the start year is 2011. This is automatically set to the start of the current academic year. It is also updated when \SetStartMonth is used.1 If you want to set it to a specific year, you can use:

\SetStartYear{year}

For example:

\SetStartYear{2008}

indicates the academic year 2008/9.

There are two files concerned with previously used labels. They are:

The previously used labels file
    This keeps track of all problems used in previous years, as well as problems used by other documents that have this as their previously used labels file, and it contains the problem labels from the last run of the current document.

The current used labels file
    This defaults to \jobname.prb, but the name can be changed using:

    \SetUsedFileName{name}

    This file keeps track of all the labels used in the current document from the previous LaTeX run.

Note that if you want to delete this file, first clear it using

\ClearUsedFile{file}

in place of \ExcludePreviousFile{file}, described below. The argument file is the previously used labels file described above. \ClearUsedFile will remove all labels in the current used labels file from the previously used labels file and clear the current used labels file. Once this file is empty, it may then be deleted.

Before loading randomly selected problems, first specify the previously used labels file with the command:

\ExcludePreviousFile[number of years]{file name}

where file name is the name of the previously used file. The optional argument number of years specifies the year cut-off. This defaults to 3, which means that only those labels used this year or the previous 2 years will be kept. Any problems used before then may be reused.

Suppose I'm lecturing a first year undergraduate mathematics course (designated, say, mth101). I want to set assignments on each topic and an exam at the end of the year (as well as a resit or second sitting paper). I've got databases with problems for each topic, but the first and second sitting exams mustn't include any of the problems used in the assignments or any problems used in assignments or exams for the previous two academic years.
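Before looking at the directory layout, here is a rough sketch (not from the manual) of what such an exam document might put in its preamble; the file names and the problem count are assumptions for illustration:

% Sketch: exclude anything used in the last three academic years,
% then draw fresh random problems from a database file.
\usepackage{probsoln}
\ExcludePreviousFile[3]{previouslabels}% shared previously-used-labels file (assumed name)
\loadrandomproblems{5}{differentiation}% 5 random problems from differentiation.tex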
I’m going to arrange my directory structure as follows: • mth101/ • assignment1/ (differentiation) • assignment1.tex • assignment2/ (probability spaces) • assignment2.tex • assignment3/ (linear algebra) • assignment3.tex • exams/ • exam.tex (first sitting) • resit.tex (second sitting) • databases/ • differentiation.tex • probabilityspaces.tex • linearalgebra.tex • previouslabels.tex (created by probsoln) Top 9 Iterating Through Datasets Once you have defined all your problems for a given data set, you can use an individual problem with \useproblem (see §7 Using a Problem) but it is more likely that you will want to iterate through all the problems so that you don’t need to remember the labels of all the problems you have defined. \foreachproblem \foreachproblem[data set]{body} This does bodyfor each problem in the given data set. If data setis omitted, the default data set is used. Within bodyyou can use \thisproblem \thisproblem to use the current problem and \thisproblemlabel \thisproblemlabel to access the current label. If the problem requires arguments, and no default arguments were supplied in the problem definition or the package option usedefaultargs was not used, then you will be prompted for arguments, so if you want to use this approach you will need to use LATEX in interactive mode. If you do provide arguments, they will be stored in the event that you need to iterate through the data set again. The arguments will be included in \thisproblem, so you only need to use \thisproblem without having to specify \useproblem. For example, to iterate through all problems in the default data set: \begin{enumerate} \foreachproblem{\item\thisproblem} \end{enumerate} \foreachsolution \foreachsolution[data set]{body} This is equivalent to \foreachsolution, but only iterates through problems that contain the onlysolution environment. Note that you still need to use \showanswers or the answers package option for the contents of the onlysolution environment to appear. \foreachdataset \foreachdataset{cmd}{body} This does bodyfor each of the defined data sets. Within body, cmdwill be set to the name of the current data set. For example, to display all problems in all data sets: \begin{enumerate} \foreachdataset{\thisdataset}{% \foreachproblem[\thisdataset]{\item\thisproblem}} \end{enumerate} Suppose I have two external files called derivatives.tex and probspaces.tex which define problems using both onlyproblem and onlysolution for example: \begin{defproblem}{cosxsqsinx}% \begin{onlyproblem}%$y = \cos(x^2)\sin x\$.% \end{onlyproblem}% \begin{onlysolution}% $\frac{dy}{dx} = -\sin(x^2)2x\sin x + \cos(x^2)\cos x$ \end{onlysolution}% \end{defproblem} I can write a document that creates two data sets, one for the derivative problems and one for the problems about probability spaces. I can then use \hideanswers and iterate through the require data set to produce the problems. Later, I can use \showanswers and iterate over all problems defined in both data sets to produce the chapter containing all the answers. When displaying the questions, I have taken advantage of the fact that I can cross-reference items within an enumerate environment, and redefined \theenumi to label the questions according to the chapter. The cross-reference label is constructed from the problem label and is referenced in the answer section to ensure that the answers have the same label as the questions. 
\documentclass{report}
\usepackage{probsoln}

\begin{document}
\chapter{Differentiation}
% randomly select 25 problems from derivatives.tex and add to the data set called 'deriv'
\loadrandomproblems[deriv]{25}{derivatives}
% Display the problems
\renewcommand{\theenumi}{\thechapter.\arabic{enumi}}
\begin{enumerate}
\foreachproblem[deriv]{\item\label{prob:\thisproblemlabel}\thisproblem}
\end{enumerate}
% You may need to change \theenumi back here

\chapter{Probability Spaces}
% randomly select 25 problems from probspaces.tex and add to the data set called 'spaces'
\loadrandomproblems[spaces]{25}{probspaces}
% Display the problems
\renewcommand{\theenumi}{\thechapter.\arabic{enumi}}
\begin{enumerate}
\foreachproblem[spaces]{\item\label{prob:\thisproblemlabel}\thisproblem}
\end{enumerate}
% You may need to change \theenumi back here

\appendix
\chapter{Solutions}
\begin{itemize}
\foreachdataset{\thisdataset}{%
\foreachproblem[\thisdataset]{\item[\ref{prob:\thisproblemlabel}]\thisproblem}
}
\end{itemize}
\end{document}

Top 10 Random Number Generator

This package provides a pseudo-random number generator that is used by \loadrandomproblems. As noted earlier, the random number generator has been modified in version 3.01 in order to fix a bug. If you want to ensure that your random numbers are compatible with earlier versions, you can switch to the old generator using \PSNuseoldrandom.

\PSNrandseed  \PSNrandseed{n}

This sets the seed to n, which must be a non-zero integer. For example, to generate a different set of random numbers every time you LaTeX your document,2 put the following in your preamble:

\PSNrandseed{\time}

or to generate a different set of random numbers every year you LaTeX your document:

\PSNrandseed{\year}

\PSNgetrandseed  \PSNgetrandseed{register}

This stores the current seed in the count register specified by register. For example:

\newcount\myseed
\PSNgetrandseed{\myseed}

\PSNrandom  \PSNrandom{register}{n}

Generates a random integer from 1 to n and stores it in the count register specified by register. For example, the following generates an integer from 1 to 10 and stores it in the register \myreg:

\newcount\myreg
\PSNrandom{\myreg}{10}

\random  \random{counter}{min}{max}

Generates a random integer from min to max and stores it in the given counter. For example, the following generates a random number between 3 and 8 (inclusive) and stores it in the counter myrand:

\newcounter{myrand}
\random{myrand}{3}{8}

\doforrandN  \doforrandN{n}{cmd}{list}{text}

Randomly selects n values from the comma-separated list given by list and iterates through this subset. On each iteration it sets cmd to the current value and does text. For example, the following will load a randomly selected problem from two of the listed files (where file1.tex, file2.tex and file3.tex are files containing at least one problem):

\doforrandN{2}{\thisfile}{file1,file2,file3}{%
\loadrandomproblems[\thisfile]{1}{\thisfile}}

Top 11 Compatibility With Versions Prior to 3.0

Version 3.0 of the probsoln package completely changed the structure of the package, but the commands described in this section have been provided to maintain compatibility with earlier versions. The only problems that are likely to occur are those where commands are contained within groups. This will affect any commands that are contained in external files that are outside of the arguments to \newproblem and \newproblem*. However, since the external files had to be parsed twice in order to load the problems, this shouldn't be an issue, as adding anything other than problem definitions in those files would be problematic anyway.
The other likely difference is where the random generator is used in a group. This includes commands such as \selectrandomly. For example, if your document contained something like:

\begin{enumerate}
\selectrandomly{file1}{8}
\item Solve the following:
\begin{enumerate}
\selectrandomly{file2}{4}
\end{enumerate}
\selectrandomly{file3}{2}
\end{enumerate}

then using versions prior to v3.0 will produce a different set of random numbers, since the second \selectrandomly is in a different level of grouping. If you want to ensure that the document produces exactly the same random set with the new version as with the old version, you will need to get and set the random number seed. For example, the above would need to be modified so that it becomes:

\begin{enumerate}
\selectrandomly{file1}{8}
\item Solve the following:
\newcount\oldseed
\PSNgetrandseed{\oldseed}
\begin{enumerate}
\selectrandomly{file2}{4}
\end{enumerate}
\PSNrandseed{\oldseed}
\selectrandomly{file3}{2}
\end{enumerate}

\selectrandomly  \selectrandomly{filename}{n}

This is now equivalent to:

{\loadrandomproblems[filename]{n}{filename}}%
\foreachproblem[filename]{\PSNitem\thisproblem\endPSNitem}

\selectallproblems  \selectallproblems{filename}

This is now equivalent to:

{\loadallproblems[filename]{filename}}%
\foreachproblem[filename]{\PSNitem\thisproblem\endPSNitem}

Note that in both the above cases, a new data set is created with the same name as the file name.

1 So don't use \SetStartMonth after \SetStartYear.
2 assuming you leave at least a minute between runs.
proofpile-shard-0030-189
{ "provenance": "003.jsonl.gz:190" }
## LaTeX forum ⇒ Math & Science ⇒ Beginner : i dont know the errors in my Latex document Topic is solved Information and discussion about LaTeX's math and science related features (e.g. formulas, graphs). hariekd Posts: 2 Joined: Sat Nov 28, 2015 12:55 pm ### Beginner : i dont know the errors in my Latex document  Topic is solved I am a new latex user. I have prepared a maths proof for my students. Even i get the out put, the source file shows so many errors. pls indicate me about the errors. \documentclass{article}\usepackage{amssymb}\begin{document}\centerline{\sc \Large Standard IX - unit 9 Similar Triangles}\vspace{.5pc}\centerline{\it (Second Question from Page Number 133)}\vspace{2pc}\textbf{In a trianlge, a line is drawn parallel to one side and a small triangle is cut off. If the Parallel line drawn is through the mid-point of one side, then how much of the area of the original triangle is the area of the small triangle?}\\ Proof:- Consider $\triangle APQ$ and $\triangle ABC$\begin{enumerate}\item $\angle APQ =$ and $\angle ABC$ (Corresponding Angles)\item $\angle AQP =$ and $\angle ACB$ (Corresponding Angles)\end{enumerate}As the two angles of $\triangle APQ$ are equal to the two angles of $\triangle ABC$, these two triangles will be similar.\\ \therefore \triangle $APQ$ \sim \triangle $ABC$\\  As $P$ and $Q$ are the midpoints of $AB$ and $AC$,\\$AP=\frac{1}{2}AB$ and $AQ=\frac{1}{2}AC$\\ $\therefore \frac{AQ}{AC}=$ $\frac{AP}{AB}=$ $\frac{PQ}{BC}=$ $\frac{1}{2}$\begin{equation}ie, AP= \frac{1}{2}AB\end{equation} Consider $\triangle APR$ and $\triangle ABD$\begin{enumerate}\item $\angle APR =$ and $\angle ABD$ (Corresponding Angles)\item $\angle ARP =$ and $\angle ADB$ (Corresponding Angles)\end{enumerate}As the two angles of $\triangle APR$ are equal to the two angles of $\triangle ABD$, these two trianlges will be similar.\\ \therefore \triangle $APR$ \sim $\triangle ABD$\newline  $\therefore \frac{AR}{AD}$ = $\frac{AP}{AB}$= $\frac{1}{2}$ (We have proved $\frac{AP}{AB}$= $\frac{1}{2}$)\\\begin{equation}ie, AR=\frac{1}{2}AD\end{equation} \begin{align}Area of \triangle APQ & = \frac{1}{2} X PQ X AR \\& = \frac{1}{2} X \frac{1}{2}BC X \frac{1}{2}AD ($By using Equation 1 and 2$) \\ & = \frac{1}{4} X \frac{1}{2} X BC X AD \\& = \frac{1}{4} X Area of $\triangle ABC$ \\\end{align}\end{document} Tags: Stefan Kottwitz Posts: 9604 Joined: Mon Mar 10, 2008 9:44 pm Hi, welcome to the forum! Very good that you started with LaTeX. I recommend to read an introductory text. For example, my book LaTeX Beginner's Guide, or a free text such as LaTeX for Complete Novices. By the way, this weekend my publisher sells my two LaTeX ebooks, so also the LaTeX Cookbook, (all ebooks) with 50% discount (link) • There are problems with inline math mode in the text, that is, math formulas within normal text. Start with a $, later end with a$. A rule of thumb, symbols are in math mode too. So, for example write $\therefore \triangle APQ \sim \triangle ABC$ • Don't use within align, because this is already (displayed) math mode. • Load the amsmath package for extended math support. Here is the corrected error-free code, but some more things can be improved: \documentclass{article}\usepackage{amsmath}\usepackage{amssymb}\begin{document}\centerline{\sc \Large Standard IX - unit 9 Similar Triangles}\vspace{.5pc}\centerline{\it (Second Question from Page Number 133)}\vspace{2pc}\textbf{In a trianlge, a line is drawn parallel to one side and a small triangle is cut off. 
If the Parallel line drawn is through the mid-point of one side, then how much of the area of the original triangle is the area of the small triangle?}\\ Proof:- Consider\triangle APQ$and$\triangle ABC$\begin{enumerate}\item$\angle APQ =$and$\angle ABC$(Corresponding Angles)\item$\angle AQP =$and$\angle ACB$(Corresponding Angles)\end{enumerate}As the two angles of$\triangle APQ$are equal to the two angles of$\triangle ABC$, these two triangles will be similar.\\$\therefore \triangle APQ \sim \triangle ABC$\\ As$P$and$Q$are the midpoints of$AB$and$AC$,\\$AP=\frac{1}{2}AB$and$AQ=\frac{1}{2}AC$\\$\therefore \frac{AQ}{AC}=\frac{AP}{AB}=\frac{PQ}{BC}=\frac{1}{2}$\begin{equation}ie, AP= \frac{1}{2}AB\end{equation} Consider$\triangle APR$and$\triangle ABD$\begin{enumerate}\item$\angle APR =$and$\angle ABD$(Corresponding Angles)\item$\angle ARP =$and$\angle ADB$(Corresponding Angles)\end{enumerate}As the two angles of$\triangle APR$are equal to the two angles of$\triangle ABD$, these two trianlges will be similar.\\$\therefore \triangle APR \sim \triangle ABD$\newline$\therefore \frac{AR}{AD}$=$\frac{AP}{AB}$=$\frac{1}{2}$(We have proved$\frac{AP}{AB}$=$\frac{1}{2})\\\begin{equation}ie, AR=\frac{1}{2}AD\end{equation} \begin{align}Area of \triangle APQ& = \frac{1}{2} X PQ X AR \\& = \frac{1}{2} X \frac{1}{2}BC X \frac{1}{2}AD (By using Equation 1 and 2) \\& = \frac{1}{4} X \frac{1}{2} X BC X AD \\& = \frac{1}{4} X Area of \triangle ABC \\\end{align}\end{document} For example, don't end a text line by \\. This is a command just for ending lines in a table or a multi-line math formula. An empty line is sufficient as a paragraph break. It seems that you (mis)use \\ to get a space between paragraph. For this purpose, you could load the parskip package and remove the \\ in normal text. \documentclass{article}\usepackage{amsmath}\usepackage{amssymb}\usepackage{parskip}\begin{document}\centerline{\sc \Large Standard IX - unit 9 Similar Triangles}\vspace{.5pc}\centerline{\it (Second Question from Page Number 133)}\vspace{2pc}\textbf{In a triangle, a line is drawn parallel to one side and a small triangle is cut off. 
If the Parallel line drawn is through the mid-point of one side, then how much of the area of the original triangle is the area of the small triangle?} Proof:- Consider\triangle APQ$and$\triangle ABC$\begin{enumerate}\item$\angle APQ =$and$\angle ABC$(Corresponding Angles)\item$\angle AQP =$and$\angle ACB$(Corresponding Angles)\end{enumerate}As the two angles of$\triangle APQ$are equal to the two angles of$\triangle ABC$, these two triangles will be similar.$\therefore \triangle APQ \sim \triangle ABC$As$P$and$Q$are the midpoints of$AB$and$AC$,$AP=\frac{1}{2}AB$and$AQ=\frac{1}{2}AC\therefore \frac{AQ}{AC}=\frac{AP}{AB}=\frac{PQ}{BC}=\frac{1}{2}$\begin{equation}ie, AP= \frac{1}{2}AB\end{equation} Consider$\triangle APR$and$\triangle ABD$\begin{enumerate}\item$\angle APR =$and$\angle ABD$(Corresponding Angles)\item$\angle ARP =$and$\angle ADB$(Corresponding Angles)\end{enumerate}As the two angles of$\triangle APR$are equal to the two angles of$\triangle ABD$, these two trianlges will be similar.$\therefore \triangle APR \sim \triangle ABD$\newline$\therefore \frac{AR}{AD}$=$\frac{AP}{AB}$=$\frac{1}{2}$(We have proved$\frac{AP}{AB}$=$\frac{1}{2}\$)\begin{equation}ie, AR=\frac{1}{2}AD\end{equation} \begin{align}Area of \triangle APQ& = \frac{1}{2} X PQ X AR \\& = \frac{1}{2} X \frac{1}{2}BC X \frac{1}{2}AD (By using Equation 1 and 2) \\& = \frac{1}{4} X \frac{1}{2} X BC X AD \\& = \frac{1}{4} X Area of \triangle ABC \\\end{align}\end{document} Maybe you learned LaTeX from some old examples, which are not perfect. Reading a book or an introduction can help to get a better start, such as I meant above. Stefan
proofpile-shard-0030-190
{ "provenance": "003.jsonl.gz:191" }
# Henkin construction The method of constants was introduced by L. Henkin in 1949 [a1] to establish the strong completeness of first-order logic (cf. Completeness (in logic)). Whilst this method originally involved the deductive apparatus of first-order logic, it can be modified so as to employ only model-theoretic ideas (cf. Model (in logic); Model theory). Let $L$ be a first-order logical language with equality, and consider a set of sentences in $L$ which is finitely satisfiable in the sense that each of its finite subsets has a model. Since the collection of finitely satisfiable sets is closed under unions of chains, each such set can be extended to one which is maximal in the sense that it is finitely satisfiable and contains every sentence in $L$ or its negation. When $L$ contains constant terms, each maximal set in $L$ induces an equivalence relation on the set of constant terms: $(t,t')$ is in this relation provided that the equation $t=t'$ is a member of the maximal set. Let $[t]$ denote the equivalence class of $t$. An interpretation for $L$ can be constructed on the partition induced by this relation. On this interpretation, each individual constant in the non-logical vocabulary of $L$ denotes its equivalence class; $([t_1],\ldots,[t_n])$ is in the extension of the $n$-ary predicate $P$ if and only if the sentence $P(t_1,\ldots,t_n)$ is a member of the maximal set; and $([t_1],\ldots,[t_n],[t_{n+1}])$ is in the extension of the $n$-ary functional constant $g$ if and only if the equation $g(t_1,\ldots,t_n)=t_{n+1}$ is a member of the maximal set. This interpretation is a model of the maximal set if the set is term-complete in the sense that it contains an instance of each existential sentence it contains. This interpretation is called a Henkin model for the maximal and term-complete set. Not every first-order language contains constant terms. And even when $L$ contains constant terms, there are finitely satisfiable sets in $L$ which cannot be extended to maximal and term-complete sets in $L$. In such cases the Henkin construction proceeds by adding new constants to the non-logical vocabulary of $L$ in such a way that the finitely satisfiable set in $L$ can be extended to a maximal and term-complete set in the extended language. H.J. Keisler [a2] modified the Henkin construction at the point where the new constants are introduced. Let $I$ denote the collection of finite subsets of a finitely satisfiable set. For each member of $I$, choose a model of that set. $T$ is the family (indexed by $I$) of such models. Expand the non-logical vocabulary of $L$ by adding the members of the direct product of the domains of the members of $T$ as individual constants. Members of this direct product are functions on $I$ whose value at each $i\in I$ is a member of the domain of the $i$th member of $T$. The $i$th member of $T$ is expanded to interpretations of the extended language by having each function in the direct product denote its value at $i$. Let $T^*$ denote the resulting family of interpretations. The theory of $T^*$ is the set of all sentences in the extended language true on all members of $T^*$. The union of the theory of $T^*$ and the finitely satisfiable set from $L$ is itself finitely satisfiable, and any maximal extension of this union is term complete. The Henkin model for such a maximal extension is called a Henkin–Keisler model. Generalizing the above, let $I$ be any non-empty set and let $T$ be a family of interpretations for $L$ indexed by $I$. 
As above, expand the non-logical vocabulary of $L$ by adding the direct product of the domains of the members of $T$ as individual constants, expand the members of $T$ to interpretations of the extended language as above, and let $T^*$ denote the resulting family of interpretations. The theory of $T^*$ is finitely satisfiable and any maximal extension of this set is term complete. Henkin–Keisler models can be seen as both a specialization of the Henkin construction and as an alternative to the ultraproduct construction. There is a natural correspondence between maximal extensions of the theory of $T^*$ and ultrafilters on $I$. Associate with each sentence in the expanded language the set of indices (from $I$) of those members of $T^*$ on which the sentence is true. Given an ultrafilter on $I$, consider the set of sentences in the extended language whose associated set of indices is a member of the ultrafilter. This set is a maximal extension of the theory of $T^*$. Further, if all members of $T$ are non-trivial in the sense that their domains contain at least two objects, then given any maximal extension of the theory of $T^*$, the collection of sets of indices associated with the members of the maximal extension is an ultrafilter on $I$. Finally, the Henkin–Keisler model of any maximal extension of the theory of $T^*$, when restricted to $L$, is isomorphic to the ultraproduct of the members of $T$ over the corresponding ultrafilter. #### References [a1] L. Henkin, "The completeness of the first-order functional calculus" J. Symb. Logic , 14 (1949) pp. 159–166 [a2] H.J. Keisler, "A survey of ultraproducts, logic" Y. Bar-Hillel (ed.) , Logic, Methodology and Philosophy of Science , North-Holland (1965) pp. 112–126 [a3] G. Weaver, "Henkin–Keisler models" , Kluwer Acad. Publ. (1997) How to Cite This Entry: Henkin construction. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Henkin_construction&oldid=37148 This article was adapted from an original article by G. Weaver (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
proofpile-shard-0030-191
{ "provenance": "003.jsonl.gz:192" }
# Absolute Value

Related Topics: More Lessons for Intermediate Algebra, Math Worksheets

A series of free Intermediate Algebra Lessons.

What are the properties of absolute value?

1. |x| always gives a positive result (or 0)
2. |-x| = |x|
3. |x • y| = |x| • |y|
4. $$\left| {\frac{x}{y}} \right| = \frac{{|x|}}{{|y|}}$$

Be careful:
|x + y| ≠ |x| + |y|
|x - y| ≠ |x| - |y|

Properties of Absolute Value
How to evaluate expressions involving absolute value?
Evaluating Expressions Involving Absolute Value - Example 1
Evaluating Expressions Involving Absolute Value - Example 2
Evaluating Expressions Involving Absolute Value - Example 3 (with fractional expressions)
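A quick numerical check of these rules (added for illustration; not part of the original lesson):

$$\left| (-3) \cdot 4 \right| = \left| -12 \right| = 12 = \left| -3 \right| \cdot \left| 4 \right|$$

$$\left| (-3) + 4 \right| = 1 \ne 7 = \left| -3 \right| + \left| 4 \right|$$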
proofpile-shard-0030-192
{ "provenance": "003.jsonl.gz:193" }
matrix-nonsingular-10seconds More from my site • Finite Order Matrix and its Trace Let $A$ be an $n\times n$ matrix and suppose that $A^r=I_n$ for some positive integer $r$. Then show that (a) $|\tr(A)|\leq n$. (b) If $|\tr(A)|=n$, then $A=\zeta I_n$ for an $r$-th root of unity $\zeta$. (c) $\tr(A)=n$ if and only if $A=I_n$. Proof. (a) […] • Ring of Gaussian Integers and Determine its Unit Elements Denote by $i$ the square root of $-1$. Let $R=\Z[i]=\{a+ib \mid a, b \in \Z \}$ be the ring of Gaussian integers. We define the norm $N:\Z[i] \to \Z$ by sending $\alpha=a+ib$ to $N(\alpha)=\alpha \bar{\alpha}=a^2+b^2.$ Here $\bar{\alpha}$ is the complex conjugate of […] • Find Values of $h$ so that the Given Vectors are Linearly Independent Find the value(s) of $h$ for which the following set of vectors $\left \{ \mathbf{v}_1=\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \mathbf{v}_2=\begin{bmatrix} h \\ 1 \\ -h \end{bmatrix}, \mathbf{v}_3=\begin{bmatrix} 1 \\ 2h \\ 3h+1 […] • Inner Product, Norm, and Orthogonal Vectors Let \mathbf{u}_1, \mathbf{u}_2, \mathbf{u}_3 are vectors in \R^n. Suppose that vectors \mathbf{u}_1, \mathbf{u}_2 are orthogonal and the norm of \mathbf{u}_2 is 4 and \mathbf{u}_2^{\trans}\mathbf{u}_3=7. Find the value of the real number a in […] • Determine the Quotient Ring \Z[\sqrt{10}]/(2, \sqrt{10}) Let \[P=(2, \sqrt{10})=\{a+b\sqrt{10} \mid a, b \in \Z, 2|a\}$ be an ideal of the ring $\Z[\sqrt{10}]=\{a+b\sqrt{10} \mid a, b \in \Z\}.$ Then determine the quotient ring $\Z[\sqrt{10}]/P$. Is $P$ a prime ideal? Is $P$ a maximal ideal?   Solution. We […] • Equivalent Definitions of Characteristic Subgroups. Center is Characteristic. Let $H$ be a subgroup of a group $G$. We call $H$ characteristic in $G$ if for any automorphism $\sigma\in \Aut(G)$ of $G$, we have $\sigma(H)=H$. (a) Prove that if $\sigma(H) \subset H$ for all $\sigma \in \Aut(G)$, then $H$ is characteristic in $G$. (b) Prove that the center […] • Eigenvalues of a Matrix and its Transpose are the Same Let $A$ be a square matrix. Prove that the eigenvalues of the transpose $A^{\trans}$ are the same as the eigenvalues of $A$.   Proof. Recall that the eigenvalues of a matrix are roots of its characteristic polynomial. Hence if the matrices $A$ and $A^{\trans}$ […] • Characteristic Polynomials of $AB$ and $BA$ are the Same Let $A$ and $B$ be $n \times n$ matrices. Prove that the characteristic polynomials for the matrices $AB$ and $BA$ are the same. Hint. Consider the case when the matrix $A$ is invertible. Even if $A$ is not invertible, note that $A-\epsilon I$ is invertible matrix […]
proofpile-shard-0030-193
{ "provenance": "003.jsonl.gz:194" }
Set a time based on hh:mm length, or set time length based on time

I am using Ionic 3, Angular 5, and moment-timezone. I have a form input that displays EndTime (in the format of 01:23 AM) and another that displays the length (in the format of 01:30, which represents hh:mm) that is calculated from the mentioned EndTime and the StartTime, both of which are DateTime values. When the user changes the EndTime I need it to update the length, and when the user updates the length I need it to calculate the end time.

I feel like I am overcomplicating this and making the function too long. Any way to improve this?

setSessionEndTime() {
  let time = this.sessionForm.controls['sessLength'].value.split(':');
  const newEndTime = (moment(endTime).toISOString());
  this.sessionForm.controls['sessEndTime'].setValue(newEndTime, {emitEvent: false});
  this.endTime = newEndTime;
}

setSessionLength() {
  const startTime = moment(this.startTime);
  const endTime = moment(this.endTime);
  let totalHours = (endTime.diff(startTime, 'hours'));
  const totalMinutes = endTime.diff(startTime, 'minutes');
  const clearMinutes = totalMinutes % 60;
  // Must check if value is less than 10 and prepend 0 to it otherwise the datetime input doesnt accept the value
  const sessLength: string = (totalHours < 10) ? `0${totalHours}:${clearMinutes}` : `${totalHours}:${clearMinutes}`;
  this.defaultSesLength = sessLength;
  this.sessionForm.controls['sessLength'].setValue(sessLength, {emitEvent: false});
}

• How are setSessionEndTime and setSessionLength being triggered? Mar 27 '18 at 2:04
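For readers: the snippet above uses endTime inside setSessionEndTime before showing how it is obtained from the parsed hh:mm value. A hypothetical sketch of that missing step, using the same moment API, might look like this (this is an illustration, not the poster's actual code):

// Sketch only: build the end time from startTime plus the "hh:mm" length.
const time = this.sessionForm.controls['sessLength'].value.split(':');
const endTime = moment(this.startTime)
  .add(parseInt(time[0], 10), 'hours')
  .add(parseInt(time[1], 10), 'minutes');
const newEndTime = endTime.toISOString();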
proofpile-shard-0030-194
{ "provenance": "003.jsonl.gz:195" }
# ConstraintSolver.jl Thanks for checking out the documentation of this constraint solver. The documentation is written in four different sections based on this post about how to write documentation. • If you want to get a quick overview and just have a look at examples check out the tutorial. • You just have some How to questions? -> How to guide • Which constraints and objectives are supported? -> Supported constraints/objectives • What solver options do exist? -> Solver options • You want to understand how it works deep down? Maybe improve it ;) -> Explanation • Gimme the code documentation directly! The reference section got you covered (It's not much currently) If you have some questions please feel free to ask me by making an issue. You might be interested in the process of how I coded this: Checkout the full process on my blog opensourc.es.
proofpile-shard-0030-195
{ "provenance": "003.jsonl.gz:196" }
## Re: [help-texinfo] Formatting syntax rules

From: Karl Berry
Subject: Re: [help-texinfo] Formatting syntax rules
Date: Mon, 6 Dec 2004 19:06:40 -0500

Hi Laurance,

    would this cause any problems with respect to copyright?

Nah. Knuth first wrote all that stuff long before "free software" (let alone "open source") and the GPL came into existence. As a result, many of his files are not legalistically pristine as the OSI would like. But it's not worth asking him to add statements to these old files instead of working on volume 4.

As far as manmac.tex goes, I'm sure the same conditions as plain.tex apply, which is that anything goes as long as you don't distribute a modified version under the same name. Which you wouldn't be doing. So that's fine. A-W's copyright applies specifically to texbook.tex, not manmac.tex.

Re @quotation, it's not important, but I still don't understand your objection to using it (if it helps). It hasn't ever essentially changed since day 1, and I can't imagine it changing now. Whatever.

Re adding support for syntax rules, I wasn't thinking of defining a new environment (I'm not sure if that's what you had in mind), but rather just one new markup command, say @bnfvar, which would output $\langle$argument$\rangle$. Is that enough, or would you want more?

    By the way, is there a particular reason why `texi2dvi' can generate

Um, I'm a bit confused. texi2dvi doesn't generate any menus or links, while `makeinfo' needs to have them in the files? I guess you are referring to the prev/next/up pointers on @node lines. It is true that makeinfo needs to have @node lines with the node name, and the menus listing the subsections. However, given those, it is not necessary to write the prev/next/up pointers on the @node lines; makeinfo can deduce them (assuming a normally structured manual). This is, in fact, the recommended way to write Texinfo sources. It's described in the "makeinfo Pointer Creation" node of the Texinfo manual, among other places. There are Emacs commands for generating/updating menus (described in the chapter on the Emacs mode), although I rarely use them myself; I usually just write them in by hand.

Best,
karl
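For readers wondering what the proposed @bnfvar markup would amount to, here is a plain TeX sketch of a macro producing the $\langle$argument$\rangle$ output described above; this is purely illustrative and is not the actual Texinfo implementation:

% Sketch only: typeset a syntactic variable with angle brackets and italics.
\def\bnfvar#1{$\langle${\it #1\/}$\rangle$}
% e.g. \bnfvar{expression} prints <expression> with angle brackets around an italic word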
proofpile-shard-0030-196
{ "provenance": "003.jsonl.gz:197" }
# Analysis of the clusters¶ Here is a streamlined code that shows just analysis without blind alleys I encountered before. In [1]: #general import numpy as np import scipy from matplotlib import pyplot as plt %pylab inline import pandas as pd import MySQLdb import os import sys sys.setrecursionlimit(3000) Populating the interactive namespace from numpy and matplotlib In [4]: con=MySQLdb.connect(user=user, passwd=passwd, db=dbname, host=host) data loaded The low activity of some businesses in their particular cluster caused their Location Quotient to be marked as inf because of the computing roundup. The trick is with the LQ equation. Here you can see the equation again. $$LQ_{ij}=\frac{\frac{E_{ij}}{E_i}}{\frac{\sum_i E_{ij}}{\sum_i E_i}}$$ Where $E_{ij}$ is economic activity in subarea i, department j $E_i$ is total economic activity in subarea i $\sum_i E_{ij}$ is economic activity of department j in the whole area $\sum_i E_i$ is total economic activity in the whole area So when a business has low activity, the real value of nominator is of an order of $10^{-6}$. That is small enough number that my Python code on computer decides that the value is so close to the zero and thus can be zero; causing division to produce infinity. This creates a problem to me since, in reality, those businesses are not the most popular, they are the least popular in the cluster. Therefore, I decided to replace all infinity values with the minimum value of Location Quotient. Yes, this will assign a higher value of LQ to those unpopular businesses, but in clusters that have more than four business each, such low visited businesses do not even come into future calculations. In [4]: i=0 for t in df.LQ: if t == np.inf: df.LQ.loc[i]=df.LQ.min() i=i+1 /Users/Lexa/anaconda/lib/python2.7/site-packages/pandas/core/indexing.py:118: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy self._setitem_with_indexer(indexer, value) Let us see individual cluster. In [9]: lqCluster=df[df.cluster ==1] plt.figure(figsize=(10, 10), dpi=100) df_scatter = plt.scatter(lqCluster['longitude'], lqCluster['latitude'], c='b', alpha=.5, s=lqCluster['LQ']*10) plt.title( 'LQ in cluster', fontsize=20) plt.xlabel('Longitude', fontsize=18) plt.ylabel('Latitude', fontsize =18) plt.xlim(lqCluster.longitude.min()-0.002,lqCluster.longitude.max()+0.002) plt.ylim(lqCluster.latitude.min()-0.002,lqCluster.latitude.max()+0.002) plt.show() ## Determining the enviroment¶ Now is time to determine which businesses are carrying economic activity in each cluster. I also wish to see how those categories relate to the most common category of the cluster. I used multiple comparisons, so that needs to be corrected. I used a method called: ## Sidak correction¶ It is a simple method to control the familywise error rate that is probabilistically exact when the individual tests are independent of each other but is conservative otherwise. The test is used if the test statistics are independent of each other then testing each of m hypotheses at level $$\alpha_{SID} = 1-(1-\alpha)^\frac{1}{m}$$ is Sidak's multiple testing procedure. This test is more powerful than Bonferroni, but the gain is small: for $\alpha_{SID} = 0.05$ and $m= 10$ and $10^{12}$, Bonferroni vs Sidak give 0.005 and 5 $10^{-14}$ vs 0.005116 and $5.129 10^{-14}$, respectively. 
The main merit of the correction is that it is exact probabilistically when the tests are independent of each other. Bonferroni is an easier approximate way to calculate the Sidak correction. The Šidák correction is derived by assuming that the individual tests are independent. Let the significance threshold for each test be $\alpha_1$; then the probability that at least one of the tests is significant under this threshold is (1 - the probability that none of them is significant). Since it is assumed that they are independent, the probability that all of them are not significant is the product of the probabilities that each of them are not significant, or $1 - (1 - \alpha_1)^n$. Our intention is for this probability to equal \alpha, the significance level for the entire series of tests. By solving for $\alpha_1$, we obtain $\alpha_1 = 1 - (1 - \alpha)^{1/n}$. ### Helper functions¶ In [7]: def getTopCategories(newcat,listCat,pomDF): busi= pomDF.name.ix[pomDF.categories == newcat].values if len(busi)>0: for b in busi: rcat = pom.categories.ix[pom.name == b].values if len(rcat)>0: for t in rcat: if t in listCat: pass else: listCat.append(t) return listCat In [8]: def SortingBusinessCategories(categ,pomDF): busR = {} for rc in categ: rcat=rc.flatten().tolist()[0] busR.update({rcat: pomDF.name[pomDF.categories == rcat].count()}) sortedBus = sorted(busR.items(), key=operator.itemgetter(1), reverse=True) return sortedBus In [9]: def getCategoryStats(pomDF, categ): RcatList= arrayToList(categ) pom2 = pomDF.loc[pomDF['categories'].isin(RcatList)] Large = pom2.LQ.max() Bus = pom2.name.ix[pom2.LQ == Large].values[0] BusCat = pom2.categories.ix[(pom2.name == Bus) & (pom2.LQ == Large)].values[0] CatNum = pom2.name[pom2.categories == BusCat].count() In [10]: def arrayToList(array): i=0 newlist=[] for r in array: if type(r)==np.ndarray: st=r[0] newlist.append(st) return newlist ## Performing analysis for some other business in the cluster¶ However, I cannot answer the question where to put stand-alone business, not with just Yelp data. To answer that question, I would need an additional data source. The question I can answer is can some business play well together. I'll pick experimental category and run with it. You can see other examples I tried. I have to give up on those, mostly because there was no more than five different clusters where that business appeared. In that case, I would recommend not to use this analysis as determination, but different factors, different data. In [13]: #print df.categories.unique() targetCategory ="Coffee & Tea"#"Bookstores" # "Taxis" #"Rugs"# if targetCategory in df.categories.unique(): print 'yeah' yeah OK, so there is something connected with Coffee. Next step let us see how many clusters have this category. In [14]: pom=df[df['categories']== targetCategory] pom2=pom.cluster.unique().tolist() #print pom2 clustersDF = df.loc[df['cluster'].isin(pom2)] targetCategoryClusters=clustersDF.cluster.unique() print len(targetCategoryClusters) #print clustersDF.cluster.unique() 139 Neat, over 100 clusters. Next step is to determine the top four businesses in each category and get LQ of targeted category for each cluster. 
In [15]: df3=pd.DataFrame(df.cluster.unique()) df3['BusNum']=0 df3['topBusCat']='' df3['topCatNum']=0 df3['LQmax']=0 df3['cat1']='' df3['LQ2']=0 df3['cat1num']=0 df3['cat2']='' df3['LQ3']=0 df3['cat2num']=0 df3['cat3']='' df3['LQ4']=0 df3['cat3num']=0 df3['targetLQ']=0 df3 = df3.rename(columns={0: 'cluster'}) In [16]: import operator for c in clustersDF.cluster.unique(): pom = clustersDF[clustersDF.cluster == c] large = pom.LQ.max() topBus = pom.name.ix[pom.LQ == large].values[0] topBusCat = pom.categories.ix[(pom.name == topBus) & (pom.LQ == large)].values[0] df3.topBusCat.loc[df3.cluster == c] = topBusCat df3.LQmax.loc[df3.cluster == c]=large bNum=pom.name.unique() df3.BusNum.loc[df3.cluster == c]=len(bNum) topCatStart=[] topCat = getTopCategories(topBusCat, topCatStart, pom) topCatNum =pom.name[pom.categories == topBusCat].count() df3.topCatNum.loc[df3.cluster ==c]=topCatNum df3.targetLQ.loc[df3.cluster ==c]=pom.LQ[(pom.categories == targetCategory) & (pom.name != topBus)].max() cat = pom.categories.unique() Rcat = cat[np.argwhere(np.in1d(cat,np.intersect1d(cat,topCat))==False)] if len(Rcat)>0: ans=getCategoryStats(pom,Rcat) df3.cat1.loc[df3.cluster == c] = ans[1] df3.cat1num.loc[df3.cluster == c] = ans[2] df3.LQ2[df3.cluster == c] = ans[0] topCate = getTopCategories(ans[1], topCat, pom) Rcat1 = cat[np.argwhere(np.in1d(cat,np.intersect1d(cat,topCate))==False)] if len(Rcat1)>0: ans2=getCategoryStats(pom,Rcat1) df3.cat2.loc[df3.cluster == c] = ans2[1] df3.cat2num.loc[df3.cluster == c] = ans2[2] df3.LQ3[df3.cluster == c] = ans2[0] topCateg = getTopCategories(ans2[1], topCate, pom) Rcat2 = cat[np.argwhere(np.in1d(cat,np.intersect1d(cat,topCateg))==False)] if len(Rcat2)>0: ans3=getCategoryStats(pom,Rcat2) df3.cat3.loc[df3.cluster == c] = ans3[1] df3.cat3num.loc[df3.cluster == c] = ans3[2] df3.LQ4[df3.cluster == c] = ans3[0] /Users/Lexa/anaconda/lib/python2.7/site-packages/IPython/kernel/__main__.py:22: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy /Users/Lexa/anaconda/lib/python2.7/site-packages/IPython/kernel/__main__.py:29: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy /Users/Lexa/anaconda/lib/python2.7/site-packages/IPython/kernel/__main__.py:36: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy Out[16]: cluster BusNum topBusCat topCatNum LQmax cat1 LQ2 cat1num cat2 LQ3 cat2num cat3 LQ4 cat3num targetLQ 0 3 8637 Beer Bar 1 1.770652 Religious Schools 1.770652 1 Horse Racing 1.770652 1 Hearing Aid Providers 1.770652 1 0.076894 1 0 853 Fireplace Services 1 39.935685 Fishing 39.935685 1 Fireworks 39.935685 1 Chimney Sweeps 32.674652 1 0.141500 2 828 0 0 0.000000 0.000000 0 0.000000 0 0.000000 0 0.000000 3 919 0 0 0.000000 0.000000 0 0.000000 0 0.000000 0 0.000000 4 18 995 Ski Resorts 1 21.027324 Bistros 21.027324 1 Rugs 21.027324 1 Cooking Classes 21.027324 1 0.147098 Let us determine is our category in one of the 4 four here. If it is, then we can proceed with the analysis. 
Let us determine whether our target category is one of the four categories collected here. If it is, then we can proceed with the analysis.

In [17]:

    allCatIndf3 = df3.topBusCat.unique().tolist() + df3.cat1.unique().tolist() + df3.cat2.unique().tolist() + df3.cat3.unique().tolist()
    uniqCatDF3 = set(allCatIndf3)
    if targetCategory in uniqCatDF3:   # check against the top categories collected in df3
        print 'yeah'

    yeah

The goal is to find businesses that perform well. When a business category is one of the four most popular in a cluster, it is more probable that I will find a positive influence in that cluster. Taking every cluster that contains the target business into consideration would drown the signal in noise.

In [18]:

    import operator
    from scipy.stats import kendalltau

    par = []
    for c in df3.topBusCat.unique():
        if c == targetCategory:
            par.append(df3.topBusCat[df3['topBusCat'] == c].tolist()[0])
    for c in df3.cat1.unique():
        if c == targetCategory:
            par.append(df3.topBusCat[df3['cat1'] == c].tolist()[0])
    for c in df3.cat2.unique():
        if c == targetCategory:
            par.append(df3.topBusCat[df3['cat2'] == c].tolist()[0])
    for c in df3.cat3.unique():
        if c == targetCategory:
            par.append(df3.topBusCat[df3['cat3'] == c].tolist()[0])
    print par

    ['Coffee & Tea', 'Gas & Service Stations', 'Gastropubs']

Now, with these categories, we'll find the clusters that contain both the target category and one of the listed categories, and test with the Spearman correlation how they influence coffee shops.

# Spearman's rank correlation coefficient

Spearman's rank correlation coefficient, often denoted by the Greek letter $\rho$ (rho) or as $r_s$, is a nonparametric measure of statistical dependence between two variables. It assesses how well the relationship between two variables can be described using a monotonic function. If there are no repeated data values, a perfect Spearman correlation of +1 or −1 occurs when each of the variables is a perfect monotone function of the other. Spearman's coefficient, like any correlation calculation, is appropriate for both continuous and discrete variables, including ordinal variables. Spearman's $\rho$ and Kendall's $\tau$ can be formulated as special cases of a more general correlation coefficient.

The Spearman correlation coefficient is defined as the Pearson correlation coefficient between the ranked variables. For a sample of size $n$, the $n$ raw scores $X_i$, $Y_i$ are converted to ranks $x_i$, $y_i$, and $\rho$ is computed from:

$$\rho = {1- \frac {6 \sum d_i^2}{n(n^2 - 1)}}$$

where $d_i = x_i - y_i$ is the difference between ranks.

And for those who do not know:

## Pearson product-moment correlation coefficient

The Pearson product-moment correlation coefficient (sometimes referred to as the PPMCC, PCC, or Pearson's r) is a measure of the linear correlation (dependence) between two variables X and Y, giving a value between +1 and −1 inclusive, where 1 is total positive correlation, 0 is no correlation, and −1 is total negative correlation. It is widely used in the sciences as a measure of the degree of linear dependence between two variables. Pearson's correlation coefficient is the covariance of the two variables divided by the product of their standard deviations. The form of the definition involves a "product moment", that is, the mean (the first moment about the origin) of the product of the mean-adjusted random variables; hence the modifier product-moment in the name. Pearson's correlation coefficient when applied to a population is commonly represented by the Greek letter $\rho$ (rho) and may be referred to as the population correlation coefficient or the population Pearson correlation coefficient. The formula is:

$$\rho_{X,Y}= \frac{\operatorname{cov}(X,Y)}{\sigma_X \sigma_Y}$$

where $\operatorname{cov}$ is the covariance, $\sigma_X$ is the standard deviation of X, and $\sigma_Y$ is the standard deviation of Y.
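To make the "Pearson correlation of the ranks" definition concrete, here is a tiny self-contained check. The numbers are made up for illustration and have nothing to do with the Yelp data:

    from scipy.stats import spearmanr, pearsonr, rankdata

    x = [10.0, 3.0, 7.0, 1.0, 9.0]
    y = [0.2, 0.1, 0.4, 0.05, 0.3]

    rho, p = spearmanr(x, y)                        # Spearman's rho directly
    rho_check, _ = pearsonr(rankdata(x), rankdata(y))  # Pearson correlation of the ranks
    print rho, rho_check                            # both give 0.6 for this toy sample

Because only the ranks matter, Spearman's $\rho$ is insensitive to monotone transformations and to extreme values, which is convenient for a quantity like LQ that can take very large values in small clusters (see the LQmax column in the table above).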
In [19]:

    resul = pd.DataFrame(range(100))
    resul['category'] = ''
    resul['Spearman'] = 0.
    resul['P'] = 0.
    resul['sidak'] = 0.

In [20]:

    import operator
    from scipy.stats import spearmanr

    i = 0
    for ca in par:
        tarClustDF = clustersDF[clustersDF['categories'] == ca]
        pomList = tarClustDF.cluster.unique().tolist()
        clustersDF2 = clustersDF.loc[clustersDF['cluster'].isin(pomList)]
        if len(tarClustDF) > 5:
            para = []
            resu = []
            for c in pomList:
                para.append(clustersDF2.LQ[(clustersDF2['categories'] == ca) & (clustersDF2['cluster'] == c)].max())
                resu.append(clustersDF2.LQ[(clustersDF2['categories'] == targetCategory) & (clustersDF2['cluster'] == c)].max())
            cou = len(par)
            spear = spearmanr(para, resu)
            sida = 1. - (1. - spear[1])**cou   # Sidak-adjusted p-value
            resul.category[i] = ca
            resul.Spearman[i] = spear[0]
            resul.P[i] = spear[1]
            resul.sidak[i] = sida
            i += 1
            print ca, '->', spear[0], 'p=', spear[1], 'Sidak correction:', sida

    /Users/Lexa/anaconda/lib/python2.7/site-packages/IPython/kernel/__main__.py:17: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame
    See the the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
    /Users/Lexa/anaconda/lib/python2.7/site-packages/IPython/kernel/__main__.py:18: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame
    See the the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
    /Users/Lexa/anaconda/lib/python2.7/site-packages/IPython/kernel/__main__.py:19: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame
    See the the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
    Coffee & Tea -> 1.0 p= 0.0 Sidak correction: 0.0
    Gas & Service Stations -> 0.541412272587 p= 0.00010216046362 Sidak correction: 0.000306450081646
    Gastropubs -> 0.928571428571 p= 0.00251947240379 Sidak correction: 0.00753938998076
    /Users/Lexa/anaconda/lib/python2.7/site-packages/IPython/kernel/__main__.py:20: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame
    See the the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy

In [21]:

    result = resul[resul['sidak'] < 0.1]

### Validation

Using a random pick of categories as a sanity check:

In [106]:

    import random
    category = random.sample(df.LQ, 15)
    LQrandom = random.sample(df.LQ, 15)
    spear = spearmanr(category, LQrandom)
    print spear[0], spear[1]

    0.0857142857143 0.761334126191

This shows there is no correlation between randomly chosen categories: the correlation is close to 0, and the p-value is far too high.

## Let's make a map

First we need to sort out which Spearman coefficients are significant and larger than 0.5 in absolute value, and then separate the negative ones from the positive ones.

In [22]:

    negCat = result.category[result['Spearman'] < -0.5].tolist()
    posCat = result.category[result['Spearman'] > 0.5].tolist()

Then we have to get the clusters that contain a positive-influence business and do not contain a negative-influence business. If one of the category lists is empty, it is skipped. The remaining clusters are treated as neutral, meaning that there the individual performance of a business, together with unaccounted-for factors, will have the greatest influence.
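Expressed with plain Python sets, the selection rule just described looks roughly like the sketch below. This is only an illustration of the rule stated in the previous paragraph, not the notebook's own code, and the treatment of the "neutral" group in particular is one reading of that paragraph:

    # clusters containing at least one positively correlated category
    good_clusters = set(df.loc[df['categories'].isin(posCat), 'cluster'])
    # clusters containing at least one negatively correlated category (empty set if negCat is empty)
    bad_clusters = set(df.loc[df['categories'].isin(negCat), 'cluster'])
    # recommended: positive influence present, negative influence absent
    recommended = good_clusters - bad_clusters
    # everything else is treated as neutral
    neutral = set(df['cluster']) - recommended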
In [23]:

    allClust = df.cluster.unique()
    goodList = []
    neutralList = []
    if len(posCat) > 0:
        goodf = df.loc[df['categories'].isin(posCat)]
        good = goodf.cluster.unique()  # .tolist()
        withGood = allClust[np.argwhere(np.in1d(allClust, np.intersect1d(allClust, good)) == True)]
        goodList = arrayToList(withGood)
    if len(negCat) > 0:

Now we get the coordinates for the good and neutral clusters.

In [24]:

    if (len(goodList) == 0) & (len(neutralList) == 0):
        print 'No good places for this business. Try putting it up as a stand-alone business.'
    if len(goodList) > 0:
        coordGood = pd.DataFrame(goodList)
        coordGood = coordGood.rename(columns={0: 'cluster'})
        coordGood['lati'] = 0
        coordGood['longi'] = 0
        coordGood['ratioNbus'] = 0
        coordGood['LQmean'] = 0
        coordGood['scale'] = 0
    if len(neutralList) > 1:
        coordNeutral = pd.DataFrame(neutralList)
        coordNeutral = coordNeutral.rename(columns={0: 'cluster'})
        coordNeutral['lati'] = 0
        coordNeutral['longi'] = 0
        coordNeutral['ratioNbus'] = 0
        coordNeutral['LQmean'] = 0
        coordNeutral['scale'] = 0

In [25]:

    from __future__ import division  # true division in Python 2, so the ratio below is not truncated to an integer

    if len(goodList) > 0:
        for c in goodList:
            pom4 = df[df.cluster == c]
            NumBus = len(pom4.name.unique())
            # note: `cat` here is left over from the In [16] loop (the categories of the last cluster processed);
            # the positive/target category is probably what was intended
            NumGoodBus = pom4.name[pom4.categories == cat[0]].count()
            ratio = NumGoodBus / NumBus
            meanLQ = pom4.LQ[pom4.categories == cat[0]].mean()
            lati = pom4.latitude.mean()
            longi = pom4.longitude.mean()
            scal = ratio * meanLQ
            coordGood.lati.loc[coordGood.cluster == c] = lati
            coordGood.longi.loc[coordGood.cluster == c] = longi
            coordGood.ratioNbus.loc[coordGood.cluster == c] = ratio
            coordGood.LQmean.loc[coordGood.cluster == c] = meanLQ
            coordGood.scale.loc[coordGood.cluster == c] = scal
    if len(neutralList) > 1:
        for c in neutralList:
            pom4 = df[df.cluster == c]
            NumBus = len(pom4.name.unique())
            NumGoodBus = pom4.name[pom4.categories == cat[0]].count()
            ratio = NumGoodBus / NumBus
            meanLQ = pom4.LQ[pom4.categories == cat[0]].mean()
            lati = pom4.latitude.mean()
            longi = pom4.longitude.mean()
            scal = ratio * meanLQ
            coordNeutral.lati.loc[coordNeutral.cluster == c] = lati
            coordNeutral.longi.loc[coordNeutral.cluster == c] = longi
            coordNeutral.ratioNbus.loc[coordNeutral.cluster == c] = ratio
            coordNeutral.LQmean.loc[coordNeutral.cluster == c] = meanLQ
            coordNeutral.scale.loc[coordNeutral.cluster == c] = scal
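As a side note, the per-cluster mean coordinates computed in the loop above could also be obtained in one step with a pandas groupby. This is shown purely as an illustration of an alternative, not as a description of what the notebook actually does:

    # mean coordinates per recommended cluster (assumes df carries cluster/latitude/longitude columns)
    centers = df[df.cluster.isin(goodList)].groupby('cluster')[['latitude', 'longitude']].mean()

The explicit loop does have one advantage here: it keeps the ratio, mean-LQ, and scale calculations for each cluster together in one readable place.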
Now we can plot the result. Let us first check the sanity of our coordinates.

In [26]:

    print goodList, neutralList

    [3, 0, 18, 25, 9, 236, 248, 21, 48, 4, 49, 80, 541, 39, 52, 167, 55, 100, 110, 10, 47, 5, 254, 75, 35, 16, 19, 507, 189, 78, 269, 6, 28, 30, 23, 390, 206, 322, 130, 99, 298, 125, 59, 94, 149, 232, 200, 74, 304, 77, 491, 427, 136, 27, 355, 51, 283, 7, 98, 161, 82, 58, 152, 92, 267, 11, 44, 24, 119, 228, 29, 205, 181, 135, 243, 33, 286, 45, 112, 61, 565, 40, 102, 230, 2, 291, 151, 109, 188, 302, 290, 20, 62, 178, 116, 281, 498, 14, 159, 142, 321, 199, 327, 72, 106, 81, 22, 218, 68, 50, 1, 115, 12, 13, 144, 154, 477, 192, 104, 103, 174, 15, 65, 414, 191, 141, 96, 584, 655, 225, 122, 209, 26, 426, 86, 128, 198, 255, 702, 163, 168, 471, 306, 31, 37, 8, 264, 299, 220, 244, 107, 706, 242, 145, 34, 180, 182, 138, 56, 36, 233, 190, 312, 308, 195, 217, 87, 261, 359, 193, 921, 554, 644, 902, 621, 666, 97, 449] []

In [27]:

    def intersect(a, b):
        return list(set(a) & set(b))

In [28]:

    figsize(15, 3)
    if len(goodList) > 0:
        tornTupleG = coordGood.cluster.tolist()
        latsG = coordGood.lati.tolist()
        lonsG = coordGood.longi.tolist()
        #scalesG = coordGood.scale.tolist()
        subplot(141)
        title("Distribution of good Latitudes");
        hist(latsG, bins=20);
        subplot(142)
        title("Distribution of good Longitudes");
        hist(lonsG, bins=20);
    if len(neutralList) > 0:
        tornTupleN = coordNeutral.cluster.tolist()
        latsN = coordNeutral.lati.tolist()
        lonsN = coordNeutral.longi.tolist()
        #scalesN = coordNeutral.scale.tolist()
        subplot(141)
        title("Distribution of neutral Latitudes");
        hist(latsN, bins=20);
        subplot(142)
        title("Distribution of neutral Longitudes");
        hist(lonsN, bins=20);

Let us plot the result using Folium.

In [29]:

    from IPython.display import HTML
    import folium

    def inline_map(map):
        """
        Embeds the HTML source of the map directly into the IPython notebook.

        This method will not work if the map depends on any files (json data).
        Also this uses the HTML5 srcdoc attribute, which may not be supported in
        all browsers.
        """
        map._build_map()
        return HTML('<iframe srcdoc="{srcdoc}" style="width: 100%; height: 310px; border: none"></iframe>'.format(srcdoc=map.HTML.replace('"', '&quot;')))
""" map._build_map() return HTML('<iframe srcdoc="{srcdoc}" style="width: 100%; height: 310px; border: none"></iframe>'.format(srcdoc=map.HTML.replace('"', '&quot;'))) In [30]: meanlat=df.latitude.mean() meanlong=df.longitude.mean() map = folium.Map(width=600,height=600,location=[meanlat,meanlong], zoom_start=10) if len(goodList)>0: for i in range(len(tornTupleG)): map.simple_marker([latsG[i], lonsG[i]], popup=str(tornTupleG[i])+' Reccomended',marker_color='green',marker_icon='ok-sign') if len(neutralList)>0: for i in range(len(tornTupleG)): map.simple_marker([latsN[i], lonsN[i]], popup=str(tornTupleN[i])+' Neutral',marker_color='blue',marker_icon='ok-sign') inline_map(map) Out[30]: In [34]: i=92 hpom=df[df.cluster == i] large = hpom.LQ.max() topBus = hpom.name.ix[hpom.LQ == large].values[0] topBusCat = hpom.categories.ix[(hpom.name == topBus) & (hpom.LQ == large)].values[0] bus=hpom.categories.unique() print i, topBusCat print bus 92 Skate Parks ['Local Services' 'Dry Cleaning & Laundry' 'Optometrists' 'Health & Medical' 'Beauty & Spas' 'Nail Salons' 'Home Services' 'Real Estate' 'Apartments' 'Food' 'Grocery' 'Shipping Centers' 'Printing Services' 'Notaries' 'Chiropractors' 'Fast Food' 'Sandwiches' 'Restaurants' 'Sushi Bars' 'Japanese' 'Donuts' 'Web Design' 'Marketing' 'Graphic Design' 'Professional Services' 'Nightlife' 'Irish' 'Bars' 'Sports Bars' 'Libraries' 'Public Services & Government' 'Landscaping' 'Veterinarians' 'Pets' 'Pizza' 'Automotive' 'Gas & Service Stations' 'Plumbing' 'Active Life' 'Skate Parks' 'Swimming Pools' 'Parks' 'Fitness & Instruction' 'Fashion' 'Shopping' 'Hair Salons' "Women's Clothing" 'Day Spas' 'Laser Hair Removal' 'Hair Removal' 'Medical Spas' 'Real Estate Agents' 'Juice Bars & Smoothies' 'Gyms' 'Trainers' 'Lounges'] In [ ]:
proofpile-shard-0030-197
{ "provenance": "003.jsonl.gz:198" }
Cold Matters

Our current best estimate for the age of the universe (as we know it) is 13.799 Gyr ± 21 Myr. The Great Flaring Forth (GFF) occurred 13.8 billion years ago, and the universe has been expanding and cooling ever since. The background temperature of the universe is today 2.72548 ± 0.00057 K. “K” stands for Kelvin, a unit of temperature named after William Thomson, 1st Baron Kelvin (1824-1907) – Lord Kelvin – who championed the idea of an “absolute thermometric scale”. A temperature in Kelvin is equivalent to the number of Celsius degrees above absolute zero.

Put into terms we may be more familiar with, the cosmic background temperature is -270.42452° C, or -454.764136° F. While in the absence of nearby stars or other energy sources the universe is certainly cold, scientists have artificially produced temperatures as low as 100 pK (1 picokelvin = $10^{-12}$ K).

Using Wien’s displacement law, we can calculate the wavelength of electromagnetic radiation where the background universe is brightest.

$$\lambda _{max}=\frac{2.8977729\ \pm \ 0.0000017\ mm\cdot K}{T_{K}}=\frac{2.8977729\ \pm \ 0.0000017\ mm\cdot K}{2.72548\ \pm\ 0.00057\ K}=1.0632\pm0.0002\ mm$$

So, we see here that the background universe is “brightest” in the microwave part of the radio spectrum, at a peak wavelength around 1 mm. Using the relationship between frequency and wavelength, c = νλ, we can determine the microwave frequency where the background universe is brightest.

$$\nu =\frac{c}{\lambda }=\frac{299,792,458\ m/s}{1.0632\times 10^{-3}\ m}=281.97\pm 0.05\ GHz$$

Microwaves at this frequency are in the extremely high frequency (EHF) radio band, above all our allocated communications bands (275-3000 GHz is unallocated). Of course, a significant amount of emission occurs either side of the peak, particularly at longer wavelengths and lower frequencies. (The background universe radiates with an almost perfect blackbody spectrum.) There are several ways to define the wavelength/frequency of maximum brightness; the above is one. Depending on the method we choose, the peak wavelength lies between 1.0632 and 3.313 mm, and the peak frequency between 90.5 and 282.0 GHz.
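For readers who like to check the arithmetic, here is a minimal sketch of the two calculations above (illustrative only; it simply re-evaluates the quoted constants and ignores the stated uncertainties):

    b = 2.8977729e-3    # Wien's displacement constant in m*K, as quoted above
    T = 2.72548         # background temperature in K
    c = 299792458.0     # speed of light in m/s

    lam_max = b / T     # ~1.0632e-3 m, i.e. about 1.06 mm
    nu = c / lam_max    # ~2.82e11 Hz, i.e. about 282 GHz
    print lam_max, nu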
proofpile-shard-0030-198
{ "provenance": "003.jsonl.gz:199" }
Seminar Calendar for events the day of Friday, March 9, 2012.

Friday, March 9, 2012

Model Theory and Descriptive Set Theory Seminar
4:00 pm in 347 Altgeld Hall, Friday, March 9, 2012

Lou van den Dries (Department of Mathematics, University of Illinois at Urbana-Champaign)
The structure of approximate groups according to Breuillard, Green, Tao.

Abstract: Roughly speaking, an approximate group is a finite symmetric subset A of a group such that AA can be covered by a small number of left-translates of A. Last year the authors mentioned in the title established a conjecture of H. Helfgott and E. Lindenstrauss to the effect that approximate groups are "finite-by-nilpotent". This may be viewed as a sweeping generalisation of both the Freiman-Ruzsa theorem on sets of small doubling in the additive group of integers, and of Gromov's characterization of groups of polynomial growth. Among the applications of the main result are a finitary refinement of Gromov's theorem and a generalized Margulis lemma conjectured by Gromov. Prior work by Hrushovski on approximate groups is fundamental in the approach taken by the authors. They were able to reduce the role of logic to elementary arguments with ultraproducts. The point is that an ultraproduct of approximate groups can be modeled in a useful way by a neighborhood of the identity in a Lie group. This allows arguments by induction on the dimension of the Lie group. I will give two talks: the one on Tuesday (1pm in 345 AH) will describe the main results, and the sequel on Friday (4pm in 347 AH) will try to give a rough idea of the proofs.
proofpile-shard-0030-199
{ "provenance": "003.jsonl.gz:200" }
## User: Mthabisi Moyo

#### Posts by Mthabisi Moyo

... I am using Gviz to plot H3K27Ac and RNAPII ChIP-seq data for two experimental conditions, Wt and KO: WT_H3K27Ac <- DataTrack(range = '/path/to/WT-H3K27Ac.bw', type = "histogram", window = -1, name = "WT H3K27Ac", genome = "GRCh38", col.histogram = "black", fill.histogram = "black") KO_H3K27Ac &l ...
written 6 days ago by Mthabisi Moyo

... Given that the PCA plot is likely to change somewhat depending on the number of genes you decide to specify with the ntop parameter, are there any recommendations on how to best set this value besides arbitrarily setting it at the default of 500/1000? Could including all genes have a negative effect ...
written 8 weeks ago by Mthabisi Moyo • updated 8 weeks ago by Michael Love

... I have performed ChIP-seq for multiple transcription factors on samples from multiple patients. All ChIPs for each patient where sequenced on different sequencing runs (i.e. TF1, TF2, TF3 for patient 1 was on one flowcell and TF1, TF2, TF3 for patient 2 was on a separate flowcell etc). I am looking ...
written 9 weeks ago by Mthabisi Moyo

... I have been trying to change font sizes for my box plots in DiffBind: dba.plotBox(x, th = 0.05, pars = list(cex.axis=1.5, cex.main=2, cex.lab=1.5)) I have also tried the following variation: dba.plotBox(x, th = 0.05, cex.axis=1.5, cex.main=2, cex.lab=1.5) I assumed that these arguments would b ...
written 9 weeks ago by Mthabisi Moyo

... I missed that, thanks for the catch! Deleting the .AnnotationHub folder cleared the cache and did the trick. I ran ensembldb using both the EnsDb I made from the Ensembl GRCh38 .gtf annotation file (v90) and the one I downloaded through AnnotationHub and both worked well. The only point I would mak ...
written 11 months ago by Mthabisi Moyo

... I deleted the .AnnotationHub folder in /Library/Frameworks/R.framework/Versions/3.4/Resources/library/AnnotationHub and the result is unchanged. I still get 0 queries after reinstalling AnnotationHub database using Biocinstaller. Mthabisi ...
written 11 months ago by Mthabisi Moyo

... Thanks for the feedback! 1) I was able to construct the package using Ensembl 90 without a problem: >gtffile <- "/user/Downloads/Homo_sapiens.GRCh38.90.gtf.gz" > DB <- ensDbFromGtf(gtffile) Importing GTF file ... OK Processing metadata ... OK Processing genes ... Attribute availabi ...
written 11 months ago by Mthabisi Moyo

... I am getting an error when I try and create an ensDb object using a GENCODE .gtf annotation file for GRCh38 downloaded directly from the GENCODE website. > gtffile <- "/user/Downloads/gencode.v26.annotation.gtf" > DB <- ensDbFromGtf(gtffile) Error in colnames<-(*tmp*, value = c ...
written 11 months ago by Mthabisi Moyo • updated 11 months ago by Johannes Rainer

... Hi Alejandro, It turns out it was likely the GENCODE gtf file. Switched to the mouse annotation file on Ensembl and it worked well. It is still odd to me considering the count files were created using the GENCODE gtf so I thought it would still result with the same number of lines but I guess not. ...
written 21 months ago by Mthabisi Moyo
... I am trying to use DEXSeq to analyze a mouse paired-end stranded RNA-seq data (poly-A RNA capture). I Aligned the reads using STAR and obtained the reference genome and annotation file (which I collapsed with dexseq_prepare_annotation.py) from GENCODE (release m11). I used HTSeq to count exons (usin ...
written 21 months ago by Mthabisi Moyo