Archive for November, 2006

TeX capacity exceeded error

I am definitely dumb. Well, LaTeX has its part in it, too.

It turns out that all of a sudden, I started having this error when compiling a .tex file:

! TeX capacity exceeded, sorry [input stack size=1500].

After googling for an answer, I found out that the “stack size” limit is defined in the following file:

/usr/share/texmf/web2c/texmf.cnf
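
The parameter in question is stack_size. On my system the relevant lines looked roughly like this (the exact comment and default value may differ between distributions; 1500 is what my error message reported):

% stack_size -- simultaneous input sources.
stack_size = 1500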

However, changing the value did no good: any limit, no matter how large, would be “exceeded”. The reason (after a little more banging my head against the wall) is that an infinite loop in the input .tex can produce this error (say, an \input{file.tex} inside file.tex itself, or some such). Ten hours (well, five minutes, actually) of head-banging later, when I was pretty sure there was no freaking infinite loop in there, I found the answer:

I had deleted the \end{document} tag!!

Now, yes, how stupid am I? And… how stupid is LaTeX to give that silly error, instead of:

TeX warning: You are too dumb, and forgot an \end{document}
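
By the way, the infinite-loop case is real and easy to reproduce. Here is a toy example of my own (not the file that bit me): a macro that expands to itself, so each expansion pushes a new level onto TeX's input stack until the limit blows up. A file that \input's itself chokes on a similar capacity limit:

\documentclass{article}
\begin{document}
% A macro whose expansion is itself: every expansion pushes a
% new level onto TeX's input stack, until the limit is exceeded.
\def\recurse{\recurse}
\recurse
\end{document}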

Comments (43)

Custom style in PowerDot

Remember I mentioned PowerDot for LaTeX? PowerDot is a LaTeX class for producing PowerPoint-like presentations. It creates PDFs that can be viewed fullscreen with any PDF reader, and they turn out to be very nice-looking presentations.

I am now fiddling with it, and wanted to make a custom style. I have read the PowerDot manual [PDF], and it says all you have to do is copy and rename an existing style, then modify it:

% cd /usr/share/texmf-texlive/tex/latex/powerdot/
% cp powerdot-default.sty powerdot-isilanes.sty
% vi powerdot-isilanes.sty

Then, put style=isilanes in your .tex, et voilà! Well, it fails miserably, saying (among the usual garbage):

! Class powerdot Error: unknown style `isilanes'.

But the .sty is there!

OK, the problem is that LaTeX “doesn’t know” you added the style. To make it notice, on my Debian Etch box:

% dpkg-reconfigure tetex-base

or, much better (thanks to a comment by bjacquem):

% texhash

This seems to “refresh” the internal LaTeX database, and now it works.
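
For completeness, here is a minimal .tex skeleton exercising the new style (the slide content is a made-up placeholder; see the PowerDot manual for the full option list):

\documentclass[style=isilanes]{powerdot}
\begin{document}
\begin{slide}{A test slide}
  Hello, custom style!
\end{slide}
\end{document}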

Comments

El laberinto del fauno

Yesterday I watched El laberinto del fauno, by Guillermo del Toro (IMDb|FilmAffinity).

I had been told it was a bit gore, and that it could shock the viewer… but honestly it is not that bad. There are some gruesome scenes, but nothing over the top. On the other hand, I had also been told it was very good… and on that count they did not fail. It is not the best movie I have ever seen, and the claims that it is a firm candidate for the Best Foreign Language Film Oscar may even be exaggerated… but it is very good, one has to admit it.

The actors are superb, above all Sergi López in his role as a Francoist captain. The staging is very careful, with great realism in the sets and the script, at least in the “realistic” part of the movie. Interwoven with the “real” part there is a fantasy part, and at first the viewer does not know whether that fantasy is also real or only imagined by the girl protagonist.

There is the odd plot hole here and there, but in general it is made with great care, and the result is 100% worth watching.

Comments

Popularity of Free Software generating bug exploitation?

[This entry is also available in Spanish|English PDF|Spanish PDF]

It is often said (by FLOSS skeptics) that Free Software has fewer exploited bugs than proprietary software merely because it is less popular. They argue that, since fewer people use FLOSS, crackers are less inclined to waste their time exploiting whatever bugs it may have. The larger user base of proprietary software would also, in their words, make its bugs more prominent, and their exploits spread faster. The corollary of this theory would be that the popularization of FLOSS applications (e.g. Firefox) would lead to an increase in the number of bugs discovered and exploited, eventually reaching a proprietary-like state (e.g. “Firefox will have as many bugs as IE when Firefox is as popular as IE”).

In this blog entry I will outline a mathematical model intended to demonstrate the utter nonsense of this theory. Specifically, I will argue that an increase in community size benefits a FLOSS project in at least 3 ways:

  1. Faster development
  2. Shorter average life of open bugs
  3. Shorter average life of exploited bugs

A more thorough explanation is available in PDF format. Bear in mind that math display in HTML is generally poor (all the more so when I have neither the time nor the skill to tune it). If you like pretty formulas, the PDF is for you.

Both this blog entry and the linked PDF are released under the following license:

Creative Commons License

This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 2.5 License.

What this basically means is that you are free to copy and/or modify this work and redistribute it freely. The only limitations are that you cannot use it for profit, and that you have to cite its original author (or at least link to this blog).

1 – Propositions and derivations

We have a FLOSS project P, with new versions released every time period T, each version incorporating G new bugs. Each new version is released when all bugs from the previous release have been patched. At any point in time there will be B open bugs (remaining from the original G).

The patching speed is assumed proportional to the size of the community of users (U):

dB/dt = -Kp U

1.1 – Faster development

From the above, the time dependence of the number of open bugs:

B = G – Kp U t

The inter-release period (T), from B = 0:

T = G/KpU

So the inter-release time (T) shortens as U grows.

1.2 – Shorter average life of open bugs

In a time interval dt, (-dB/dt)dt bugs are patched, their age being t. If we call τ the average lifetime of bugs, we have, by definition:

τ = (∫t(-dB/dt)dt)/(∫(-dB/dt)dt)

From that it follows:

τ = T/2

So the average life of open bugs equals half the inter-release time, which (as stated above) is inversely proportional to U.
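
Spelling out the integral (a quick check, integrating t from 0 to T): the patch rate -dB/dt = Kp U is constant along the release cycle, and Kp U T = G bugs are patched in total, so

τ = (∫t Kp U dt)/(Kp U T) = (Kp U T²/2)/(Kp U T) = T/2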

1.3 – Fraction of bugs exploited before being patched

We define the following bug exploitation speed, where Bx is the total number of exploited bugs, Kx is the “exploiting efficiency” of the crackers (whose number is assumed proportional to U), and Bou is the number of open, unexploited bugs:

dBx/dt = Kx U Bou

We also define α = Box/B, where Box is the number of open, exploited bugs.

Since each open, unexploited bug is exploited at a rate KxU, the probability that a given bug remains unexploited at time t is exp(-Kx U t), and the evolution of α with time follows:

α(t) = 1 – exp(-Kx U t)

We then define γ = Kp/(KxG), and derive the fraction of the G bugs that have been exploited by time t:

Bx(t)/G = 1 – γ + (γ – 1 + t/T)exp(-Kx U t)

Solving for t=T, and taking into account that T=G/KpU (so that Kx U T = 1/γ), we get the fraction of all bugs that ever get exploited during the inter-release period (Fx):

Fx = 1 – γ + γ exp(-1/γ)

Note that Fx is independent of U; that is, increasing the size of the user community does not increase the fraction of bugs that ever get exploited, even though the number of crackers grows along with the user base.
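
A quick sanity check of the limiting cases (my arithmetic, not in the linked PDF):

γ → 0 (crackers overwhelm patchers): Fx → 1, every bug eventually gets exploited
γ = 1: Fx = exp(-1) ≈ 0.37
γ >> 1 (patchers dominate): exp(-1/γ) ≈ 1 – 1/γ + 1/(2γ²), so Fx ≈ 1/(2γ) → 0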

1.4 – Shorter average life of exploited bugs

We want to find out how long exploited bugs stay unpatched, calling this time τx. After some slightly complex algebra, always deriving from the previously defined equations (see the PDF version), we obtain a fairly simple expression for τx:

τx = Fx τ

That is, the average exploitation time of exploited bugs is proportional to τ, which is to say proportional to T, or inversely proportional to U.
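
Putting the pieces together (one line, using τ = T/2 and T = G/KpU): τx = Fx τ = Fx T/2 = Fx G/(2 Kp U). Since Fx does not depend on U, τx scales as 1/U: a larger community also shortens the life of exploited bugs.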

2 – Conclusions

The “Increasing popularity = Increasing bugginess” motto is a non sequitur. According to the simple model outlined here, the broader the user community of a FLOSS program, the faster its bugs will be patched, even granting that a larger user base brings an equally larger number of crackers committed to dooming it. Even for crackers far more effective at their cracking than the bona fide users at their patching (Kx >> Kp), increasing the community size reduces both how long bugs stay open and how long exploited bugs stay unpatched. No matter how clumsy the users and how rapacious the crackers, the free model (whereby users are granted access to the code, and thus empowered to contribute to the program) ensures that popularization is positive, for both the program and the community itself.

Compare that with a closed model, in which an increased user base may boost the number of crackers attacking a program, but certainly adds little, if anything, to the speed at which the code is patched and corrected. It is actually proprietary software that should fear popularization. It is easy to see that when a particular piece of proprietary software grows past a certain “critical mass” of users, the crackers could potentially disrupt its evolution (say, driving τx = T, Fx = 1), because (unlike with FLOSS) G, the patching rate, and thus T are kept constant: they depend only on the vendor of the code.

Comments (1)

An inconvenient truth

95% screaming and 5% crying. That’s what I felt when watching An Inconvenient Truth (es: Una verdad incómoda) (IMDb|FilmAffinity).

Crying because some problems are huge, and they are real. And it is sad. Screaming because there are so many sons of bitches trying to get away with their dirty business, disguised as “skeptics” of Global Warming.

The movie is a kind of documentary about Global Warming, featuring Al Gore as presenter. Good ol’ Al may ring a bell: yes, he’s the one who came out on top of George Bush in the 2000 US Presidential Election, but was nonetheless declared the loser due to highly controversial decisions by the US Supreme Court.

The movie is tagged by some people as “boring” or “not really saying anything new”. I disagree. I am no GW expert, but by no means illiterate, either. I have a B.Sc. in Chemistry, soon to become a Ph.D. (albeit in Quantum Chemistry, which has little direct link with environmental matters), and I did find the movie interesting.

Some of the data presented is, of course, redundant to some extent, but it nonetheless seems appropriate to mention it once again, even if “we all know it”. However, a non-trivial amount of the data was, at least for me, new. To name three such points:

  • Of over 900 scientific articles studied, NOT ONE cast any doubt on the human origin of the excess CO2 in the atmosphere, or on this excess being the culprit of the global warming. There is no controversy among scientists, but rather complete agreement. On the other hand, a surprising 53% of the mass media studied (newspapers,…) did mention “disagreements” or “doubts”, making it look like we don’t really know what causes global warming, or whether it even exists! Clearly someone is trying to intoxicate the general public with doubts where there are none. Think about it.
  • The CO2/temperature increase of the last decades is alleged by some to be just “part of a trend of ups and downs”, because “temperatures have always fluctuated”. Bullshit. Gore shows studies of deep Antarctic ice going back a friggin’ 650,000 years, with plots of atmospheric CO2 concentration and temperature along those years. There are fluctuations, and even several glaciations can be seen. However, the present deviation of CO2 concentration and temperature from the average is more than double anything in that record! Such a trend has never happened before.
  • Underdeveloped countries are said to have loose environmental policies that make them pollute a lot, while developed countries would pollute more only because they have more industry, even if it is relatively cleaner. Not completely true. For example, the environmental regulations on car manufacturing are tighter in China than in the USA. In fact, Chinese cars can be sold in the USA, but most North American cars cannot be sold in China, because they pollute too much. And that is China, a textbook example of reckless industrialization with little care for the environment!



[Image (taken from Wikimedia Commons), showing data much like what Gore presents in the movie. Note the point at the extreme right: the CO2 concentration goes off the chart (380 ppm) and is not shown.]

Go watch the movie; it’s quite informative (and no, not too boring).

Comments (1)

Article in Science

I have just read a rather interesting article in Science about the economics of information security (R. Anderson and T. Moore, Science, 2006, 314, 610), and I would like to comment on some quotes from it:

There has been a vigorous debate between software vendors and security researchers over whether actively seeking and disclosing vulnerabilities is socially desirable. Rescorla has argued that for software with many latent vulnerabilities (e.g. Windows), removing one bug makes little difference to the likelihood of an attacker finding another one later[1].

Quite interesting! First, even a paper in Science not only regards Windows as a piece of software with a virtually endless reservoir of internal errors, but even uses it as the paradigmatic example of such a case. Second, it deems such software not worth patching, and its bugs not worth disclosing (security through obscurity), because they are so many.

[…] [Rescorla] argued against disclosure and frequent patching unless the same vulnerabilities are likely to be rediscovered later. Ozment found that for FreeBSD[2] […] vulnerabilities are indeed likely to be rediscovered[3]. Ozment and Schecher also found that the rate at which unique vulnerabilities were disclosed for the core and unchanged FreeBSD operating system has decreased over a 6-year period[4]. These findings suggest that vulnerability disclosure can improve system security over the long term.

I have read [1] and [3] very briefly, and Ozment seems quite critical of Rescorla’s results. However, the comparison between Windows and FreeBSD (I think they mean OpenBSD), which is FLOSS, is quite telling. Windows is so buggy that patching it is hopeless. FreeBSD has seen a decline in the number of disclosed bugs (remember that, being FLOSS, all the bugs found by developers, maintainers and users are disclosed), consistent with the fact that each bug fixed actually reduces the probability of finding new ones (because the total is not endless).

The bottom line is that, for a good piece of software (one not so bug-ridden that crackers never “rediscover” an old bug because there are sooo many new ones to discover), disclosing the bugs is better. Disclosure speeds up the patching rate, which in turn reduces the number of exploitable bugs, which in turn improves security. The link between patching bugs and significantly reducing the number of exploitable bugs holds when the number of bugs is small enough that new crackers are likely to rediscover old bugs, in which case having patched those bugs pays off. Notice also that this is an autocatalytic (self-accelerating) process: the more bugs are disclosed and patched, the fewer remain, and the relatively more it pays to disclose and patch the rest.

Vulnerability disclosure also helps to give vendors an incentive to fix bugs in subsequent product releases[5]. Arora et al. have shown through quantitative analysis that public disclosure made vendors respond with fixes more quickly; the number of attacks increased, but the number of reported vulnerabilities declined over time[6].

Good point! Disclosing bugs is good for consumers not only because it directly increases software quality, but also because it helps enforce better behavior from vendors. This is a key idea in the article, which dwells on the fact that security policies work best when the party enforcing them is the one that suffers from their failures. Nowadays there is little pressure on vendors to produce more secure software, because buyers have little knowledge with which to judge this aspect of quality, and end up favoring a product for its looks or alleged features, regardless of stability or security. Disclosing bugs helps buyers assess the security of a program, and thus make a better-balanced choice when buying. This, in turn, leads to more secure software in general, because vendors will then have a strong incentive to make their products more secure (which they do not really have now).

[1] E. Rescorla, paper presented at the Third Workshop on the Economics of Information Security, Minneapolis, 13 to 14 May 2004 (PDF).
[2] I suspect the authors are mistaking OpenBSD for FreeBSD.
[3] A. Ozment, paper presented at the Fourth Workshop on the Economics of Information Security, Cambridge, MA, 2 to 3 June 2005 (PDF).
[4] A. Ozment, S.E. Schechter, paper presented at the 15th USENIX Security Symposium, Vancouver, 31 July to 4 August 2006 (HTML).
[5] A. Arora, R. Telang, H. Xu, paper presented at the Third Workshop on the Economics of Information Security, Minneapolis, 13 to 14 May 2004 (PDF).
[6] A. Arora, R. Krishnan, A. Nandkumar, R. Telang, Y. Yang, paper presented at the Third Workshop on the Economics of Information Security, Minneapolis, 13 to 14 May 2004 (PDF).

Comments

Children of Men

I have watched Children of Men (FilmAffinity|IMDb), and I liked it very much.

The synopsis is simple: in the near future (the year 2027) humankind has long since lost the ability to procreate, so the youngest people on Earth are over 18. All nations have collapsed, except Great Britain, where an oppressive social order, close to a fascist regime, remains as the last stronghold of “civilization”. Needless to say, immigration pressure is brutal, as are the anti-immigration measures.

This scenario, and the story being told, feels at first a bit unrealistic. There are a lot of details that make little sense, or that one would think could not happen. However, as the movie advances, one gets the scary feeling that it could happen. Suddenly the interpersonal relationships, the politics, the economics… don’t seem “sci-fi” at all; rather, one starts to fear them for their realism.

I would not like a future like that, but the most frightening thing is that it is one of the most plausible cataclysmic futures I have seen in a science fiction movie.

Except for one thing: nowadays (let alone in 20 years’ time) we need not fear the eventual extinction of mankind just because men and/or women become infertile, since artificial means of procreation are available, even cloning if need be. There would be hard times, and humankind would not be the same… but it would surely survive.

Comments

An age-old mystery solved

I have just found the answer to the old question:

Which weighs more, a kilo of iron or a kilo of straw?

The answer is clear: a kilo of iron. No, I am not kidding. It is so.

If the question were “Which has more mass[…]”, the answer would obviously be that they have the same mass (one thousand grams). However, we are asked about weight. The weight of an object is the force with which it is pulled towards the center of the Earth (“downwards”). In general, this force is taken to be the product of the mass and the gravitational acceleration, and is therefore directly proportional to the mass (so iron and straw would weigh the same).

But that is not entirely true. Both the kilo of straw and the kilo of iron are objects immersed in a fluid: the atmosphere. Since the atmospheric gas (air) is a fluid, it exerts a buoyant force (the one that makes a helium balloon rise, for example); since this force opposes gravity; and since it is proportional to the volume (which is much larger for the straw), we must conclude that the kilo of straw is pulled towards the center of the Earth with a smaller net force, and therefore weighs less.

This is as physical and real as anything else: if you put a kilo of straw and a kilo of iron on the two pans of a simple balance, it will tip towards the iron (if it is sensitive enough); a sufficiently precise steelyard will report the iron as heavier; and if we carry the kilo of straw across a mudflat, our footprints will sink less into the mud than if we carry the kilo of iron. However we measure it, the kilo of iron weighs (slightly) more.
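
To put rough numbers on it (the densities are ballpark assumptions, not measurements): take air at 1.2 kg/m³, baled straw at about 80 kg/m³, and iron at 7870 kg/m³. The apparent weight of a body immersed in air is

W = m g (1 – ρair/ρobject)

Straw: 1 kg × (1 – 1.2/80) ≈ 0.985 kg-force, some 15 grams “lost” to buoyancy.
Iron: 1 kg × (1 – 1.2/7870) ≈ 0.99985 kg-force, barely 0.15 grams lost.

So the kilo of iron outweighs the kilo of straw by roughly 15 grams-force, well within reach of a sensitive balance.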

Comments

CSI and false dichotomies

Yesterday I watched a CSI: Miami episode in which Eric Delko was accused of smoking marijuana.




[Image: South Beach, Miami. Full of marijuana smokers like Delko, most surely. Taken from Wikimedia Commons.]

What I want to comment on is the short interview that an Internal Affairs officer conducts with Delko’s workmate Ryan Wolfe. The aim of the interview was to find evidence of Delko’s drug consumption, and it went like this (loosely transcribed):

Officer: – Have you seen Delko consuming marijuana, or any other drug?
Wolfe: – No.
O: – Have you seen Delko in possession of marijuana, or any other drug?
W: – No.
O: – Did you see Delko with any drug-related paraphernalia?
W: – Nothing illegal…
O: – Then, what?
W: – Only cigarette paper.
O: – What do you think the paper was for?
W: – Maybe smoking tobacco.
O: – Have you ever seen Delko smoking tobacco?
W: – Never.
O: – Then, the paper was not for smoking tobacco! (clearly implying that it was for smoking marijuana)

Wow! Amazing this guy’s logic!

First, he makes use of a loaded question: he asks about Delko smoking tobacco, so that the negative answer he expects sounds like confirmation that Delko smokes marijuana.

Second, he makes an argument from ignorance: since Wolfe has not seen Delko smoking tobacco, Delko does not smoke tobacco.

Third, in doing this he commits a false dichotomy: smoking tobacco and smoking marijuana are not the only uses of cigarette paper, and denying one option does not make the other true. Inferring that there must be only those two uses, because we don’t know of any other, would be yet another argument from ignorance.

Fourth, and most prominently, he delivers an outrageous non sequitur: he implies that Wolfe not having seen Delko smoke tobacco is proof that he does not smoke tobacco, yet Wolfe not having seen Delko smoke marijuana is not proof that he does not smoke marijuana.

Comments
