Dataset columns:
content: string (85 to 101k chars)
title: string (0 to 150 chars)
question: string (15 to 48k chars)
answers: list
answers_scores: list
non_answers: list
non_answers_scores: list
tags: list
name: string (35 to 137 chars)
Q: Help with Django app and payment systems (general queries) So I'm working on an app in Django; however, this is my first time venturing into advanced integration of a webapp with payment systems (I used to work with PayPal/2Checkout, which was pretty no-skill-required). My partners have chosen PaymentExpress, and there are several sets of APIs (all of which are pretty new to me), as follows (http://www.paymentexpress.com/products/ecommerce/merchant_hosted.html) 1) PXPost 2) Software toolkit 3) Web Service I would like to pick the brains of the many experts in this area on what these various APIs are useful for and what their disadvantages are. Of course, if there is a ready Django pluggable/snippet that works with one of the above APIs, I am open to exploring that too. Thanks in advance! A: PXPost is the most straightforward solution. You just communicate via HTTP POSTs and XML. You don't need any external dependencies, just urllib2 and ElementTree. The Software toolkit can be used only on the Windows platform, so it's not an option for you (or is it?). COM is also a nasty beast. The Web service is a more elegant PXPost. You won't need to build your own XML requests; the SOAP tooling does that for you. It downloads the WSDL, which specifies which methods the web service exposes, and generates a Python module with the web service's methods. You then just import the module and off you go. The problem is that it's not always easy to generate that Python module. If the web service uses some custom data types, it can get quite complicated. Check this for more. So, I'd try the web service approach first; if that fails, go with PXPost.
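A minimal sketch of the PXPost approach described above, using only the standard library mentioned in the answer. The endpoint URL and payload layout here are illustrative assumptions; the real URL and XML element names come from the PXPost documentation:
import urllib2
from xml.etree import ElementTree

# Hypothetical endpoint -- substitute the URL from the PXPost docs.
PXPOST_URL = 'https://example.com/pxpost.aspx'

def post_transaction(xml_body):
    # PXPost is just an HTTP POST carrying an XML payload.
    request = urllib2.Request(PXPOST_URL, xml_body,
                              {'Content-Type': 'application/xml'})
    response = urllib2.urlopen(request)
    # Parse the XML reply with ElementTree.
    return ElementTree.fromstring(response.read())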
Help with Django app and payment systems (general queries)
So I'm working on an app in Django; however, this is my first time venturing into advanced integration of a webapp with payment systems (I used to work with PayPal/2Checkout, which was pretty no-skill-required). My partners have chosen PaymentExpress, and there are several sets of APIs (all of which are pretty new to me), as follows (http://www.paymentexpress.com/products/ecommerce/merchant_hosted.html) 1) PXPost 2) Software toolkit 3) Web Service I would like to pick the brains of the many experts in this area on what these various APIs are useful for and what their disadvantages are. Of course, if there is a ready Django pluggable/snippet that works with one of the above APIs, I am open to exploring that too. Thanks in advance!
[ "PXPost is the most straight-forward solution. You just communicate via HTTP POSTs and XML. You don't need any external dependencies, just urllib2 and ElementTree. \nSoftware toolkit can be used only on Windows platform, so it's not an option for you(or is it?). COM is also a nasty beast.\nWeb service is a more elegant PXPost. You won't need to build your own XML request, the SOAP protocol does that for you. It just downloads the WSDL where it's specified which methods web service exposes and generates Python module with web service's methods. You just then import the module and off you go. The problem is that it's not always easy to generate that Python module. If web service uses some custom data types it can get quite complicated. Check this for more. \nSo, I'd try with web service approach first, if that fails go with PXPost.\n" ]
[ 0 ]
[]
[]
[ "django", "payment", "python" ]
stackoverflow_0000954478_django_payment_python.txt
Q: How to debug Google App Engine scripts with PyScripter The situation is as follows: I have downloaded the Google App Engine SDK. I have written my "helloworld" app, which runs locally on my computer. I have to use PyScripter as my IDE; I can't use Eclipse, as that would not be a valid solution to my problem. In PyScripter, I have set up a "Run Configuration" so that an instance of the server runs locally (either in "run" mode or in "debug" mode), and I can access the app via a web browser pointed at "localhost". Now, the problem is that breakpoints seem to be ignored. I set a breakpoint, reload the browser, and the response appears without the debugger stopping at the breakpoint I had set in my own function. I cannot debug at all. The question is, how can I debug the app using the configuration I have described? (Note: I am already using the "remote" Python engine within PyScripter for running the local server.) A: I think this is a PyScripter bug. I tested in version 1.9.9.7 and the same problem is still there.
How to debug Google App Engine scripts with PyScripter
The situation is as follows: I have downloaded the Google App Engine SDK. I have written my "helloworld" app, which runs locally on my computer. I have to use PyScripter as my IDE; I can't use Eclipse, as that would not be a valid solution to my problem. In PyScripter, I have set up a "Run Configuration" so that an instance of the server runs locally (either in "run" mode or in "debug" mode), and I can access the app via a web browser pointed at "localhost". Now, the problem is that breakpoints seem to be ignored. I set a breakpoint, reload the browser, and the response appears without the debugger stopping at the breakpoint I had set in my own function. I cannot debug at all. The question is, how can I debug the app using the configuration I have described? (Note: I am already using the "remote" Python engine within PyScripter for running the local server.)
[ "I think this is a PyScripter's bug. I tested in version 1.9.9.7 and the same problem is still there. \n" ]
[ 2 ]
[]
[]
[ "debugging", "google_app_engine", "pyscripter", "python" ]
stackoverflow_0000789558_debugging_google_app_engine_pyscripter_python.txt
Q: By System command Using a system command, I want to open a '.py' file in Notepad. For example, assume I have a "Fact.py" file. Now I want to write a program which will open this file in Notepad so that we can edit it. A: It's best to use subprocess for this, since it avoids having to deal with quoting files containing spaces etc. for the shell. import subprocess subprocess.call(['notepad','Fact.py']) A: import os os.system("notepad.exe fact.py") should do it, assuming the Notepad program is in your system's path.
By System command
Using a system command, I want to open a '.py' file in Notepad. For example, assume I have a "Fact.py" file. Now I want to write a program which will open this file in Notepad so that we can edit it.
[ "It's best to use subprocess for this, since this will avoid having to deal with quoting files containing spaces etc for the shell.\nimport subprocess\nsubprocess.call(['notepad','Fact.py'])\n\n", "import os\n\nos.system(\"notepad.exe fact.py\")\n\nshould do it, assuming the Notepad program is in your system's path.\n" ]
[ 7, 1 ]
[]
[]
[ "python" ]
stackoverflow_0000954823_python.txt
Q: how to program functions with alternative return value signatures in python? (next() for alternative iterators) e.g. so that these would both work - is it possible? (val,VAL2) = func(args) val = func(args) Where val is not a tuple For example I'd like these to work for my custom object something for item in something: do_item(item) #where again item - is not a tuple for (item,key) in something: do_more(key,item) I thought that I need to implement next() function in two different ways... edit: as follows from the answers below, this should not really be done. A: If you mean, can the function act differently based on the return types the caller is expecting, the answer is no (bar seriously nasty bytecode inspection). In this case, you should provide two different iterators on your object, and write something like: for item in something: # Default iterator: returns non-tuple objects do_something(item) for (item,key) in something.iter_pairs(): # iter_pairs returns different iterator do_something_else(item, key) eg. see the dictionary object, which uses this pattern. for key in mydict iterates over the dictionary keys. for k,v in mydict.iteritems() iterates over (key, value) pairs. [Edit] Just in case anyone wants to see what I mean by "seriously nasty bytecode inspection", here's a quick implementation: import inspect, opcode def num_expected_results(): """Return the number of items the caller is expecting in a tuple. Returns None if a single value is expected, rather than a tuple. """ f = inspect.currentframe(2) code = map(ord, f.f_code.co_code) pos = f.f_lasti if code[pos] == opcode.opmap['GET_ITER']: pos += 1 # Skip this and the FOR_ITER if code[pos] > opcode.EXTENDED_ARG: pos +=5 elif code[pos] > opcode.HAVE_ARGUMENT: pos +=3 else: pos += 1 if code[pos] == opcode.opmap['UNPACK_SEQUENCE']: return code[pos+1] + (code[pos+2] << 8) return None Usable something like: class MagicDict(dict): def __iter__(self): if num_expected_results() == 2: for k,v in self.iteritems(): yield k,v else: for k in self.iterkeys(): yield k d=MagicDict(foo=1, bar=2) print "Keys:" for key in d: print " ", key print "Values" for k,v in d: print " ",k,v Disclaimer: This is incredibly hacky, insanely bad practice, and will cause other programmers to hunt you down and kill you if they ever see it in real code. Only works on cpython (if that). Never use this in production code (or for that matter, probably any code). A: Have you tried that? It works. def myfunction(data): datalen = len(data) result1 = data[:datalen/2] result2 = data[datalen/2:] return result1, result2 a, b = myfunction('stuff') print a print b c = myfunction('other stuff') print c In fact there is no such thing as "return signature". All functions return a single object. It seems that you are returning more than one, but in fact you wrap them into a container tuple object. 
A: Update: Given the example use case, I'd write different generators to handle the cases: class Something(object): def __init__(self): self.d = {'a' : 1, 'b' : 2, 'c' : 3} def items(self): for i in self.d.values(): yield i def items_keys(self): for k,i in self.d.items(): yield i,k something = Something() for item in something.items(): ....: print item ....: 1 3 2 for item,key in something.items_keys(): ....: print key, " : ", item ....: a : 1 b : 2 c : 3 Or You can return a tuple: In [1]: def func(n): ...: return (n, n+1) ...: In [2]: a,b = func(1) In [3]: a Out[3]: 1 In [4]: b Out[4]: 2 In [5]: x = func(1) In [6]: x Out[6]: (1, 2) A: Yes it's doable: def a(b): if b < 5: return ("o", "k") else: return "ko" and the result: >>> b = a(4) >>> b ('o', 'k') >>> b = a(6) >>> b 'ko' I think the thing after is to be careful when you will use the values returned... A: >>> def func(a,b): return (a,b) >>> x = func(1,2) >>> x (1, 2) >>> (y,z) = func(1,2) >>> y 1 >>> z 2 That doesn't really answer your question. The real answer is that the left side of the assignment doesn't affect the returned type of the function and can't be used to distinguish between functions with different return types. As noted in other answers, the function can return different types from different return statements but it doesn't know what's on the other side of the equals sign. In the case of this function, it returns a tuple. If you assign it to x, x has the value of the tuple. (y, z) on the left side of the assignment is "tuple unpacking". The tuple returned by func() is unpacked into y and z. A: It's possible only if you're happy for val to be a 2-item tuple (or if args need not be the same in the two cases). The former is what would happen if the function just ended with something like return 23, 45. Here's an example of the latter idea: def weirdfunc(how_many_returns): assert 1 <= how_many_returns <= 4 return 'fee fie foo fum'.split()[:how_many_returns] var1, var2 = weirdfunc(2) # var1 gets 'fee', var2 gets 'fie' var, = weirdfunc(1) # var gets 'fee' A: Yes, both would work. In the first example, val1 and val2 would have the two values. In the second example, val would have a tuple. You can try this in your python interpreter: >>> def foo(): ... return ( 1, 2 ) ... >>> x = foo() >>> (y,z) = foo() >>> x (1, 2) >>> y 1 >>> z 2 A: This is asking for major confusion. Instead you can follow dict with separate keys, values, items, etc. methods, or you can use a convention of naming unused variables with a single underscore. Examples: for k in mydict.keys(): pass for k, v in mydict.items(): pass for a, b in myobj.foo(): pass for a, _ in myobj.foo(): pass for _, b in myobj.foo(): pass for _, _, _, d in [("even", "multiple", "underscores", "works")]: print(d) for item in something: # or something.keys(), etc. do_item(item) for item, key in something.items(): do_more(key, item) If this doesn't fit your function, you should refactor it as two or more functions, because it's clearly trying to fulfill two or more different goals.
how to program functions with alternative return value signatures in python? (next() for alternative iterators)
e.g. so that these would both work - is it possible? (val,VAL2) = func(args) val = func(args) Where val is not a tuple For example I'd like these to work for my custom object something for item in something: do_item(item) #where again item - is not a tuple for (item,key) in something: do_more(key,item) I thought that I need to implement next() function in two different ways... edit: as follows from the answers below, this should not really be done.
[ "If you mean, can the function act differently based on the return types the caller is expecting, the answer is no (bar seriously nasty bytecode inspection). In this case, you should provide two different iterators on your object, and write something like:\nfor item in something: # Default iterator: returns non-tuple objects\n do_something(item)\n\nfor (item,key) in something.iter_pairs(): # iter_pairs returns different iterator\n do_something_else(item, key)\n\neg. see the dictionary object, which uses this pattern. for key in mydict iterates over the dictionary keys. for k,v in mydict.iteritems() iterates over (key, value) pairs.\n[Edit] Just in case anyone wants to see what I mean by \"seriously nasty bytecode inspection\", here's a quick implementation:\nimport inspect, opcode\n\ndef num_expected_results():\n \"\"\"Return the number of items the caller is expecting in a tuple.\n\n Returns None if a single value is expected, rather than a tuple.\n \"\"\"\n f = inspect.currentframe(2)\n code = map(ord, f.f_code.co_code)\n pos = f.f_lasti\n if code[pos] == opcode.opmap['GET_ITER']: pos += 1 # Skip this and the FOR_ITER\n if code[pos] > opcode.EXTENDED_ARG: pos +=5\n elif code[pos] > opcode.HAVE_ARGUMENT: pos +=3\n else: pos += 1\n if code[pos] == opcode.opmap['UNPACK_SEQUENCE']:\n return code[pos+1] + (code[pos+2] << 8)\n return None\n\nUsable something like:\nclass MagicDict(dict):\n def __iter__(self):\n if num_expected_results() == 2:\n for k,v in self.iteritems():\n yield k,v\n else:\n for k in self.iterkeys(): \n yield k\n\nd=MagicDict(foo=1, bar=2)\n\nprint \"Keys:\"\nfor key in d:\n print \" \", key\nprint \"Values\" \nfor k,v in d:\n print \" \",k,v\n\nDisclaimer: This is incredibly hacky, insanely bad practice, and will cause other programmers to hunt you down and kill you if they ever see it in real code. Only works on cpython (if that). Never use this in production code (or for that matter, probably any code).\n", "Have you tried that? It works.\ndef myfunction(data):\n datalen = len(data)\n result1 = data[:datalen/2]\n result2 = data[datalen/2:]\n return result1, result2\n\n\na, b = myfunction('stuff')\nprint a\nprint b\n\nc = myfunction('other stuff')\nprint c\n\nIn fact there is no such thing as \"return signature\". All functions return a single object. 
It seems that you are returning more than one, but in fact you wrap them into a container tuple object.\n", "Update:\nGiven the example use case, I'd write different generators to handle the cases:\nclass Something(object): \n def __init__(self): \n self.d = {'a' : 1, \n 'b' : 2, \n 'c' : 3} \n\n def items(self): \n for i in self.d.values(): \n yield i \n\n def items_keys(self): \n for k,i in self.d.items(): \n yield i,k \n\nsomething = Something()\n\nfor item in something.items():\n....: print item\n....: \n1\n3\n2\n\nfor item,key in something.items_keys():\n....: print key, \" : \", item\n....: \na : 1\nb : 2\nc : 3\n\nOr\nYou can return a tuple:\nIn [1]: def func(n):\n ...: return (n, n+1)\n ...: \n\nIn [2]: a,b = func(1)\n\nIn [3]: a\nOut[3]: 1\n\nIn [4]: b\nOut[4]: 2\n\nIn [5]: x = func(1)\n\nIn [6]: x\nOut[6]: (1, 2)\n\n", "Yes it's doable:\ndef a(b):\nif b < 5:\n return (\"o\", \"k\")\nelse:\n return \"ko\"\n\nand the result:\n>>> b = a(4)\n>>> b\n('o', 'k')\n>>> b = a(6)\n>>> b\n'ko'\n\nI think the thing after is to be careful when you will use the values returned...\n", ">>> def func(a,b):\n return (a,b)\n\n>>> x = func(1,2)\n>>> x\n(1, 2)\n>>> (y,z) = func(1,2)\n>>> y\n1\n>>> z\n2\n\nThat doesn't really answer your question. The real answer is that the left side of the assignment doesn't affect the returned type of the function and can't be used to distinguish between functions with different return types. As noted in other answers, the function can return different types from different return statements but it doesn't know what's on the other side of the equals sign. \nIn the case of this function, it returns a tuple. If you assign it to x, x has the value of the tuple. (y, z) on the left side of the assignment is \"tuple unpacking\". The tuple returned by func() is unpacked into y and z.\n", "It's possible only if you're happy for val to be a 2-item tuple (or if args need not be the same in the two cases). The former is what would happen if the function just ended with something like return 23, 45. Here's an example of the latter idea:\ndef weirdfunc(how_many_returns):\n assert 1 <= how_many_returns <= 4\n return 'fee fie foo fum'.split()[:how_many_returns]\n\nvar1, var2 = weirdfunc(2) # var1 gets 'fee', var2 gets 'fie'\n\nvar, = weirdfunc(1) # var gets 'fee'\n\n", "Yes, both would work. In the first example, val1 and val2 would have the two values. In the second example, val would have a tuple. You can try this in your python interpreter:\n>>> def foo():\n... return ( 1, 2 )\n...\n>>> x = foo()\n>>> (y,z) = foo()\n>>> x\n(1, 2)\n>>> y\n1\n>>> z\n2\n\n", "This is asking for major confusion. Instead you can follow dict with separate keys, values, items, etc. methods, or you can use a convention of naming unused variables with a single underscore. Examples:\nfor k in mydict.keys(): pass\nfor k, v in mydict.items(): pass\n\nfor a, b in myobj.foo(): pass\nfor a, _ in myobj.foo(): pass\nfor _, b in myobj.foo(): pass\n\nfor _, _, _, d in [(\"even\", \"multiple\", \"underscores\", \"works\")]:\n print(d)\n\nfor item in something: # or something.keys(), etc.\n do_item(item)\n\nfor item, key in something.items():\n do_more(key, item)\n\nIf this doesn't fit your function, you should refactor it as two or more functions, because it's clearly trying to fulfill two or more different goals.\n" ]
[ 7, 5, 3, 3, 3, 2, 2, 1 ]
[]
[]
[ "python" ]
stackoverflow_0000953914_python.txt
Q: How can I tell if a certain key was pressed in Python? import sys print (sys.platform) print (2 ** 100) input('press Enter to exit') Suppose I wanted to use the number 1 as the button that must be pressed to exit. How would I go about doing this? A: Something like this? http://mail.python.org/pipermail/python-list/1999-October/014262.html Not so clean, but doable. A: If you're building a command line app, why not use one of the libraries that help you build one. For example: curses urwid. A: Something like this will do what you want: while(raw_input('Press "1" to exit.') != '1'): pass
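For detecting a single keypress without waiting for Enter (the technique the mailing-list link above describes), a minimal sketch assuming Windows, where the standard-library msvcrt module is available:
import msvcrt

print 'press 1 to exit'
while True:
    ch = msvcrt.getch()  # blocks until one key is pressed; no Enter needed
    if ch == '1':
        break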
How can I tell if a certain key was pressed in Python?
import sys print (sys.platform) print (2 ** 100) input('press Enter to exit') Suppose I wanted to use the number 1 as the button that must be pressed to exit. How would I go about doing this?
[ "Something like this?\nhttp://mail.python.org/pipermail/python-list/1999-October/014262.html\nNot so clean, but doable.\n", "If you're building a command line app, why not use one of the libraries that help you build one.\nFor example:\n\ncurses \nurwid.\n\n", "Something like this will do what you want:\nwhile(raw_input('Press \"1\" to exit.') != '1'):\n pass\n\n" ]
[ 2, 2, 1 ]
[]
[]
[ "python" ]
stackoverflow_0000954933_python.txt
Q: How do I use a Python library in my Java application? What are the basic nuts and bolts of calling (running? interpreting? what can you do?) Python code from a Java program? Are there many ways to do it? A: You can embed Jython within your Java application, rather than spawning off a separate process. Provided your library is compatible with Jython, that would seem the most logical place to start. A: Apart from embedding Jython as mentioned by Brian, you have these options as well. Java 1.6 has built-in support for scripting. You can find more info here. Spring also provides excellent support for scripting. JRuby and Groovy are supported by Spring Scripting. You can find info here. A: And if none of the other alternatives mentioned (Jython, Spring) work, you can always run an external CPython interpreter and communicate with the JVM through: CORBA Sockets Pipes Temporary files You could also take a look at OpenOffice's UNO... I think it can be used outside the suite.
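As a sketch of the external-interpreter option, here is what the Python side of a socket bridge might look like. The port number and the one-request protocol are arbitrary assumptions, and the Java side would connect with an ordinary java.net.Socket:
import socket

def handle(request):
    # Placeholder for the real call into the Python library.
    return request.upper()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('localhost', 9999))  # port chosen arbitrarily
server.listen(1)
conn, addr = server.accept()
data = conn.recv(4096)        # read one request from the Java process
conn.sendall(handle(data))    # send back the library's result
conn.close()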
How do I use a Python library in my Java application?
What are the basic nuts and bolts of calling (running? interpreting? what can you do?) Python code from a Java program? Are there many ways to do it?
[ "You can embed Jython within your Java application, rather than spawning off a separate process. Provided your library is compatible with Jython, that would seem the most logical place to start.\n", "Apart from embedding Jython as mentioned by Brian, you have these options as well.\nJava 1.6 has inbuilt support for scripting.\nYou can find more info here.\nSpring also provides excellent support for scripting. JRuby, Groovy are supported by Spring Scripting. You can find info here.\n", "And if none of the other alternatives mentioned (Jython, Spring) work, you can always run an external CPython interpreter and communicate with the JVM through:\n\nCORBA\nSockets\nPipes\nTemporary files\n\nAlso maybe you would take a look at OpenOffice's UNO... I think it could be used outside the suite.\n" ]
[ 4, 2, 0 ]
[]
[]
[ "java", "jython", "python" ]
stackoverflow_0000954950_java_jython_python.txt
Q: Finding out which functions are available from a class instance in python? How do you dynamically find out which functions have been defined from an instance of a class? For example: class A(object): def methodA(self, intA=1): pass def methodB(self, strB): pass a = A() Ideally I want to find out that the instance 'a' has methodA and methodB, and which arguments they take? A: Have a look at the inspect module. >>> import inspect >>> inspect.getmembers(a) [('__class__', <class '__main__.A'>), ('__delattr__', <method-wrapper '__delattr__' of A object at 0xb77d48ac>), ('__dict__', {}), ('__doc__', None), ('__getattribute__', <method-wrapper '__getattribute__' of A object at 0xb77d48ac>), ('__hash__', <method-wrapper '__hash__' of A object at 0xb77d48ac>), ('__init__', <method-wrapper '__init__' of A object at 0xb77d48ac>), ('__module__', '__main__'), ('__new__', <built-in method __new__ of type object at 0x8146220>), ('__reduce__', <built-in method __reduce__ of A object at 0xb77d48ac>), ('__reduce_ex__', <built-in method __reduce_ex__ of A object at 0xb77d48ac>), ('__repr__', <method-wrapper '__repr__' of A object at 0xb77d48ac>), ('__setattr__', <method-wrapper '__setattr__' of A object at 0xb77d48ac>), ('__str__', <method-wrapper '__str__' of A object at 0xb77d48ac>), ('__weakref__', None), ('methodA', <bound method A.methodA of <__main__.A object at 0xb77d48ac>>), ('methodB', <bound method A.methodB of <__main__.A object at 0xb77d48ac>>)] >>> inspect.getargspec(a.methodA) (['self', 'intA'], None, None, (1,)) >>> inspect.getargspec(getattr(a, 'methodA')) (['self', 'intA'], None, None, (1,)) >>> print inspect.getargspec.__doc__ Get the names and default values of a function's arguments. A tuple of four things is returned: (args, varargs, varkw, defaults). 'args' is a list of the argument names (it may contain nested lists). 'varargs' and 'varkw' are the names of the * and ** arguments or None. 'defaults' is an n-tuple of the default values of the last n arguments. >>> print inspect.getmembers.__doc__ Return all members of an object as (name, value) pairs sorted by name. Optionally, only return members that satisfy a given predicate.
Finding out which functions are available from a class instance in python?
How do you dynamically find out which functions have been defined from an instance of a class? For example: class A(object): def methodA(self, intA=1): pass def methodB(self, strB): pass a = A() Ideally I want to find out that the instance 'a' has methodA and methodB, and which arguments they take?
[ "Have a look at the inspect module.\n>>> import inspect\n>>> inspect.getmembers(a)\n[('__class__', <class '__main__.A'>),\n ('__delattr__', <method-wrapper '__delattr__' of A object at 0xb77d48ac>),\n ('__dict__', {}),\n ('__doc__', None),\n ('__getattribute__',\n <method-wrapper '__getattribute__' of A object at 0xb77d48ac>),\n ('__hash__', <method-wrapper '__hash__' of A object at 0xb77d48ac>),\n ('__init__', <method-wrapper '__init__' of A object at 0xb77d48ac>),\n ('__module__', '__main__'),\n ('__new__', <built-in method __new__ of type object at 0x8146220>),\n ('__reduce__', <built-in method __reduce__ of A object at 0xb77d48ac>),\n ('__reduce_ex__', <built-in method __reduce_ex__ of A object at 0xb77d48ac>),\n ('__repr__', <method-wrapper '__repr__' of A object at 0xb77d48ac>),\n ('__setattr__', <method-wrapper '__setattr__' of A object at 0xb77d48ac>),\n ('__str__', <method-wrapper '__str__' of A object at 0xb77d48ac>),\n ('__weakref__', None),\n ('methodA', <bound method A.methodA of <__main__.A object at 0xb77d48ac>>),\n ('methodB', <bound method A.methodB of <__main__.A object at 0xb77d48ac>>)]\n>>> inspect.getargspec(a.methodA)\n(['self', 'intA'], None, None, (1,))\n>>> inspect.getargspec(getattr(a, 'methodA'))\n(['self', 'intA'], None, None, (1,))\n>>> print inspect.getargspec.__doc__\nGet the names and default values of a function's arguments.\n\n A tuple of four things is returned: (args, varargs, varkw, defaults).\n 'args' is a list of the argument names (it may contain nested lists).\n 'varargs' and 'varkw' are the names of the * and ** arguments or None.\n 'defaults' is an n-tuple of the default values of the last n arguments.\n>>> print inspect.getmembers.__doc__\nReturn all members of an object as (name, value) pairs sorted by name.\n Optionally, only return members that satisfy a given predicate.\n\n" ]
[ 15 ]
[]
[]
[ "introspection", "python" ]
stackoverflow_0000955533_introspection_python.txt
Q: PyS60 application not going full screen I am very new to PyS60. I was testing how to set an application to full screen mode but unfortunately, it doesn't work as expected. I tested the script on a Nokia 6120 Classic. Here is what I did: appuifw.app.screen = 'full' What I get is a half screen of my application with a plain white colour below. What am I doing wrong? Thanks in advance. A: Make sure you define your own functions for the screen redraw and screen rotate callbacks. When you rotate the device, you have to manually rescale everything to fit the new screen size. Otherwise you might get that "half of screen" effect. canvas = img = None def cb_redraw(aRect=(0,0,0,0)): ''' Overwrite default screen redraw event handler ''' if img: canvas.blit(img) def cb_resize(aSize=(0,0,0,0)): ''' Overwrite default screen resize event handler ''' global img img = graphics.Image.new(canvas.size) appuifw.app.screen = 'full' canvas = appuifw.Canvas( resize_callback = cb_resize, redraw_callback = cb_redraw) appuifw.app.body = canvas A: If you haven't already, I would advise using the latest version of PyS60 from https://garage.maemo.org/frs/?group_id=854 and trying again. Do the other two screen modes work as they are supposed to?
PyS60 application not going full screen
I am very new to PyS60. I was testing how to set an application to full screen mode but unfortunately, it doesn't work as expected. I tested the script on Nokia 6120 Classic. Here is what I did: appuifw.app.screen = 'full' What I get is a half screen of my application with a plain white colour below. What am I doing wrong? Thanks in advance.
[ "Make sure you define own functions for screen redraw and screen rotate callbacks. When you rotate the device, you have to manually rescale everything to fit the new screen size. Otherwise you might get that \"half of screen\" effect.\n\n canvas = img = None\n\n def cb_redraw(aRect=(0,0,0,0)):\n ''' Overwrite default screen redraw event handler '''\n if img:\n canvas.blit(img)\n\n def cb_resize(aSize=(0,0,0,0)):\n ''' Overwrite default screen resize event handler '''\n global img\n img = graphics.Image.new(canvas.size)\n\n appuifw.app.screen = 'full'\n canvas = appuifw.Canvas(\n resize_callback = cb_resize,\n redraw_callback = cb_redraw)\n appuifw.app.body = canvas\n\n", "If you haven't already, I would advise using the latest version of PyS60 from https://garage.maemo.org/frs/?group_id=854 and trying again.\nDo the other two screen modes work as they are supposed to?\n" ]
[ 4, 0 ]
[]
[]
[ "pys60", "python", "symbian" ]
stackoverflow_0000954272_pys60_python_symbian.txt
Q: Create plugins for python standalone executables How do you create a good plugin engine for standalone executables created with PyInstaller, py2exe or similar tools? I do not have experience with py2exe, but PyInstaller uses an import hook to import packages from its compressed repository. Of course, I am able to dynamically import another compressed repository created with PyInstaller and execute the code - this could serve as a simple plugin engine. Problems appear when the plugin (that is, what is imported dynamically) uses a library that is not present in the original repository (never imported). This is because the import hook belongs to the original application and searches for packages in the original repository - not the one imported later (the plugin package repository). Is there an easy way to solve this problem? Maybe such an engine already exists? A: When compiling to an exe, you're going to have this issue. The only option I can think of to allow users' plugins to use any Python library is to include all libraries in the exe package. It's probably a good idea to limit supported libraries to a subset, and list it in your documentation. Up to you. I've only used py2exe. In py2exe you can specify libraries that were not found in the search in the setup.py file. Here's a sample: from distutils.core import setup import py2exe setup (name = "script2compile", console=['script2compile.pyw'], version = "1.4", author = "me", author_email="[email protected]", url="myurl.com", windows = [{ "script":"script2compile.pyw", "icon_resources":[(1,"./ICONS/app.ico")] # Icon file to use for display }], # put packages/libraries to include in the "packages" list options = {"py2exe":{"packages": [ "pickle", "csv", "Tkconstants", "Tkinter", "tkFileDialog", "pyexpat", "xml.dom.minidom", "win32pdh", "win32pdhutil", "win32api", "win32con", "subprocess", ]}} ) import win32pdh import win32pdhutil import win32api A: PyInstaller does have a plugin system for handling hidden imports, and ships with several of those already built in. See the webpage (http://www.pyinstaller.org) which says: The main goal of PyInstaller is to be compatible with 3rd-party packages out-of-the-box. This means that, with PyInstaller, all the required tricks to make external packages work are already integrated within PyInstaller itself so that there is no user intervention required. You'll never be required to look for tricks in wikis and apply custom modification to your files or your setup scripts. Check our compatibility list of SupportedPackages.
Create plugins for python standalone executables
How do you create a good plugin engine for standalone executables created with PyInstaller, py2exe or similar tools? I do not have experience with py2exe, but PyInstaller uses an import hook to import packages from its compressed repository. Of course, I am able to dynamically import another compressed repository created with PyInstaller and execute the code - this could serve as a simple plugin engine. Problems appear when the plugin (that is, what is imported dynamically) uses a library that is not present in the original repository (never imported). This is because the import hook belongs to the original application and searches for packages in the original repository - not the one imported later (the plugin package repository). Is there an easy way to solve this problem? Maybe such an engine already exists?
[ "When compiling to exe, your going to have this issue.\nThe only option I can think of to allow users access with thier plugins to use any python library is to include all libraries in the exe package. \nIt's probably a good idea to limit supported libraries to a subset, and list it in your documentation. Up to you.\nI've only used py2exe.\nIn py2exe you can specify libraries that were not found in the search in the setup.py file.\nHere's a sample:\nfrom distutils.core import setup\nimport py2exe\n\nsetup (name = \"script2compile\",\n console=['script2compile.pyw'],\n version = \"1.4\",\n author = \"me\",\n author_email=\"[email protected]\",\n url=\"myurl.com\",\n windows = [{\n \"script\":\"script2compile.pyw\",\n \"icon_resources\":[(1,\"./ICONS/app.ico\")] # Icon file to use for display\n }],\n # put packages/libraries to include in the \"packages\" list\n options = {\"py2exe\":{\"packages\": [ \"pickle\",\n \"csv\",\n \"Tkconstants\",\n \"Tkinter\",\n \"tkFileDialog\",\n \"pyexpat\",\n \"xml.dom.minidom\",\n \"win32pdh\",\n \"win32pdhutil\",\n \"win32api\",\n \"win32con\",\n \"subprocess\", \n ]}} \n\n )\n\nimport win32pdh\nimport win32pdhutil\nimport win32api\n\n", "PyInstaller does have a plugin system for handling hidden imports, and ships with several of those already in. See the webpage (http://www.pyinstaller.org) which says:\n\nThe main goal of PyInstaller is to be compatible with 3rd-party packages out-of-the-box. This means that, with PyInstaller, all the required tricks to make external packages work are already integrated within PyInstaller itself so that there is no user intervention required. You'll never be required to look for tricks in wikis and apply custom modification to your files or your setup scripts. Check our compatibility list of SupportedPackages. \n\n" ]
[ 3, 1 ]
[]
[]
[ "plugins", "py2exe", "pyinstaller", "python" ]
stackoverflow_0000307338_plugins_py2exe_pyinstaller_python.txt
Q: Data Synchronization framework / algorithm for server<->device? I'm looking to implement data synchronization between servers and distributed clients. The data source on the server is MySQL with Django on top. The client can vary. Updates can take place on either client or server, and the connection between server and client is not reliable (e.g. changes can be made on a disconnected cell phone and should get synced when the phone has a connection again). S. Lott suggests using a version control design pattern in this question, which makes sense. I'm wondering if there are any existing packages / implementations of this I can use. Or, should I directly make use of svn/git/etc? Are there other alternatives? There must be synchronization frameworks or detailed descriptions of algorithms out there, but I'm not having a lot of luck finding them. I'd appreciate it if you could point me in the right direction. A: Perhaps using plain old rsync is enough. A: AFAIK there isn't any generic solution to this, mainly due to the diverse requirements for synchronization. In one of our earlier projects we implemented a Spring-batching-based sync mechanism which relies on a last-updated timestamp field on each of the tables that take part in the sync. I have heard about SyncML but don't have much experience with it. If you have a single server and multiple clients, you could think of a JMS-based approach. The data is bundled and placed in queues (or topics) and would be pulled by clients. In your case, since updates are bidirectional, you need to handle conflict detection as well. This brings additional complexity.
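A sketch of the last-updated-timestamp idea from the second answer; the field name and the surrounding protocol are made up for illustration:
def changes_since(queryset, last_sync):
    # Rows touched after the client's last successful sync; assumes each
    # synced table carries an 'updated_at' timestamp column.
    return queryset.filter(updated_at__gt=last_sync)

# Each client stores the server time of its last sync, fetches
# changes_since(...) for every synced table, applies the rows, and records
# the new timestamp. Conflict detection for bidirectional updates still
# has to be layered on top, as the answer notes.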
Data Synchronization framework / algorithm for server<->device?
I'm looking to implement data synchronization between servers and distributed clients. The data source on the server is MySQL with Django on top. The client can vary. Updates can take place on either client or server, and the connection between server and client is not reliable (e.g. changes can be made on a disconnected cell phone and should get synced when the phone has a connection again). S. Lott suggests using a version control design pattern in this question, which makes sense. I'm wondering if there are any existing packages / implementations of this I can use. Or, should I directly make use of svn/git/etc? Are there other alternatives? There must be synchronization frameworks or detailed descriptions of algorithms out there, but I'm not having a lot of luck finding them. I'd appreciate it if you could point me in the right direction.
[ "Perhaps using plain old rsync is enough.\n", "AFAIK there isnt any generic solution to this mainly due to the diverse requirements for synchronization.\nIn one of our earlier projects we implemented a Spring batching based sync mechanism which relies on last updated timestamp field on each of the tables (that take part in sync).\nI have heard about SyncML but dont have much experience with that.\nIf you have a single server and multiple clients, you could think of a JMS based approach.\nThe data is bundled and placed in Queues (or topics) and would be pulled by clients.\nIn your case, since updates are bi-directional, you need to handle conflict detection as well. This brings additional complexities.\n" ]
[ 1, 1 ]
[]
[]
[ "django", "python", "synchronization" ]
stackoverflow_0000682951_django_python_synchronization.txt
Q: Iterating through large lists with potential conditions in Python I have large chunks of data, normally at around 2,000+ entries, but in this report we have the ability to look as far as we want, so it could be up to 10,000 records. The report is split up into two categories, and then within each category we split by currency, so we have several subcategories within the list. My issue comes in efficiently calculating the various subtotals. I am using Django and pass a templatetag the currency and category, if it applies, and then the templatetag renders the total. Note that sometimes I have a subtotal just for the category, with no currency passed. Initially, I was using a separate query for each subtotal by just using .filter() if there was a currency/category, like so: if currency: entries = entries.filter(item_currency=currency) This became a problem as I would have too many queries and too long a generation time (2,000+ ms), so I opted to use list(entries) to execute my query right off the bat, and then loop through it with simple list comprehensions: totals['quantity'] = sum([e.quantity for e in entries]) My problem, if you don't see it yet, lies in how to efficiently add the condition for currency/category to each list comprehension. Sometimes they won't be there, sometimes they will, so I can't simply type: totals['quantity'] = sum([e.quantity for e in entries if e.item_currency == currency]) I could make a huge if-block, but that's not very clean and is a maintenance disaster, so I'm reaching out to the Stack Overflow community for a bit of insight. Thanks in advance :) A: You could define a little inline function: def EntryMatches(e): if use_currency and not (e.currency == currency): return False if use_category and not (e.category == category): return False return True then totals['quantity'] = sum([e.quantity for e in entries if EntryMatches(e)]) EntryMatches() will have access to all variables in enclosing scope, so no need to pass in any more arguments. You get the advantage that all of the logic for which entries to use is in one place, you still get to use the list comprehension to make the sum() more readable, but you can have arbitrary logic in EntryMatches() now.
Iterating through large lists with potential conditions in Python
I have large chunks of data, normally at around 2,000+ entries, but in this report we have the ability to look as far as we want, so it could be up to 10,000 records. The report is split up into two categories, and then within each category we split by currency, so we have several subcategories within the list. My issue comes in efficiently calculating the various subtotals. I am using Django and pass a templatetag the currency and category, if it applies, and then the templatetag renders the total. Note that sometimes I have a subtotal just for the category, with no currency passed. Initially, I was using a separate query for each subtotal by just using .filter() if there was a currency/category, like so: if currency: entries = entries.filter(item_currency=currency) This became a problem as I would have too many queries and too long a generation time (2,000+ ms), so I opted to use list(entries) to execute my query right off the bat, and then loop through it with simple list comprehensions: totals['quantity'] = sum([e.quantity for e in entries]) My problem, if you don't see it yet, lies in how to efficiently add the condition for currency/category to each list comprehension. Sometimes they won't be there, sometimes they will, so I can't simply type: totals['quantity'] = sum([e.quantity for e in entries if e.item_currency == currency]) I could make a huge if-block, but that's not very clean and is a maintenance disaster, so I'm reaching out to the Stack Overflow community for a bit of insight. Thanks in advance :)
[ "You could define a little inline function:\ndef EntryMatches(e):\n if use_currency and not (e.currency == currency):\n return False\n if use_category and not (e.category == category):\n return False\n return True\n\nthen\ntotals['quantity'] = sum([e.quantity for e in entries if EntryMatches(e)])\n\nEntryMatches() will have access to all variables in enclosing scope, so no need to pass in any more arguments. You get the advantage that all of the logic for which entries to use is in one place, you still get to use the list comprehension to make the sum() more readable, but you can have arbitrary logic in EntryMatches() now.\n" ]
[ 6 ]
[]
[]
[ "django", "list", "python" ]
stackoverflow_0000956820_django_list_python.txt
Q: Am I missing a step in building/installing VTK-5.4 with Python2.6 bindings on Ubuntu 9.04? I successfully built and installed VTK-5.4 with Python bindings from source. Yet, when I try to import VTK in Python it gives the following traceback error: File "<stdin>", line 1, in <module> File "/usr/local/lib/python2.6/dist-packages/VTK-5.4.2-py2.6.egg/vtk/__init__.py", line 41, in <module> from common import * File "/usr/local/lib/python2.6/dist-packages/VTK-5.4.2-py2.6.egg/vtk/common.py", line 7, in <module> from libvtkCommonPython import * ImportError: libvtkCommonPythonD.so.5.4: cannot open shared object file: No such file or directory So I am wondering what I am missing? I have tried adding /usr/local/lib/vtk-5.4 to both the PATH and PYTHONPATH environment variables and still get the same result. Any hints or suggestions? NOTE: libvtkCommonPythonD.so.5.4 exists in /usr/local/lib/vtk-5.4 as a symlink to libvtkCommonPythonD.so.5.4.2 A: Test if adding /usr/local/lib to your $LD_LIBRARY_PATH helps: In a shell: export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib If it works, make it permanent by adding /usr/local/lib to /etc/ld.so.conf and running 'ldconfig -n /usr/local/lib'
Am I missing a step in building/installing VTK-5.4 with Python2.6 bindings on Ubuntu 9.04?
I successfully built and installed VTK-5.4 with Python bindings from source. Yet, when I try to import VTK in Python it gives the following traceback error: File "<stdin>", line 1, in <module> File "/usr/local/lib/python2.6/dist-packages/VTK-5.4.2-py2.6.egg/vtk/__init__.py", line 41, in <module> from common import * File "/usr/local/lib/python2.6/dist-packages/VTK-5.4.2-py2.6.egg/vtk/common.py", line 7, in <module> from libvtkCommonPython import * ImportError: libvtkCommonPythonD.so.5.4: cannot open shared object file: No such file or directory So I am wondering what I am missing? I have tried adding /usr/local/lib/vtk-5.4 to both the PATH and PYTHONPATH environment variables and still get the same result. Any hints or suggestions? NOTE: libvtkCommonPythonD.so.5.4 exists in /usr/local/lib/vtk-5.4 as a symlink to libvtkCommonPythonD.so.5.4.2
[ "Test if adding /usr/local/lib to your $LD_LIBRARY_PATH helps:\nIn a shell:\nexport LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib\n\nIf it works, make it permanent by (adding /usr/local/lib to /etc/ld.so.conf) _ (running 'ldconfig -n /usr/local/lib')\n" ]
[ 5 ]
[]
[]
[ "3d", "python", "vtk" ]
stackoverflow_0000956716_3d_python_vtk.txt
Q: Bundling PyQwt with py2exe I have a standard setup script for py2exe with which I bundle PyQt-based applications into Windows .exe files. Today I tried a simple script that uses the PyQwt module, and it doesn't seem to work. py2exe runs alright, but when I execute the .exe it creates, it dumps the following into a log file and doesn't run: Traceback (most recent call last): File "qwt_test.pyw", line 5, in <module> File "zipextimporter.pyo", line 82, in load_module File "PyQt4\Qwt5\__init__.pyo", line 32, in <module> File "zipextimporter.pyo", line 98, in load_module ImportError: MemoryLoadLibrary failed loading PyQt4\Qwt5\Qwt.pyd When I look in PyQt4\Qwt5\ in the build\bdist.win32\winexe\collect-2.5 directory, Qwt.pyd is definitely there. I can't seem to find anything useful online regarding this error. What could cause it? Thanks. A: py2exe is not the only way, and maybe not the best way, to put together exe files for Python apps -- in particular, it hardly if at all supports pyqt. Please, I beseech you, check out PyInstaller, which DOES know about PyQt (and Linux, and Mac, should you care...) -- just make sure you use the SVN head checkout, not the "released" version, which at this time is seriously out of date (an issue that's hopefully going away soon...). A: Some options: Try playing with the py2exe bundle_files options (3, 2, 1) (especially if you put them all in one big library zip; some dlls don't like that). Make sure a copy of msvcp71.dll exists under windows\system32 or in the directory of your executable. Try excluding the dll explicitly (add Qwt.pyd to the dll_excludes option and (after building) copy Qwt.pyd (and _Qwt.pyd if it exists) to your executable path).
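The second answer's suggestions translate to a setup script roughly like this (a sketch using standard py2exe option names, untested against PyQwt specifically):
from distutils.core import setup
import py2exe

setup(
    windows=['qwt_test.pyw'],
    options={'py2exe': {
        'bundle_files': 3,            # keep files unbundled; some .pyds dislike one big zip
        'includes': ['PyQt4.Qwt5'],   # make sure the package gets collected
        'dll_excludes': ['Qwt.pyd'],  # then copy Qwt.pyd next to the .exe by hand
    }},
)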
Bundling PyQwt with py2exe
I have a standard setup script for py2exe with which I bundle PyQt-based applications into Windows .exe files. Today I tried a simple script that uses the PyQwt module, and it doesn't seem to work. py2exe runs alright, but when I execute the .exe it creates, it dumps the following into a log file and doesn't run: Traceback (most recent call last): File "qwt_test.pyw", line 5, in <module> File "zipextimporter.pyo", line 82, in load_module File "PyQt4\Qwt5\__init__.pyo", line 32, in <module> File "zipextimporter.pyo", line 98, in load_module ImportError: MemoryLoadLibrary failed loading PyQt4\Qwt5\Qwt.pyd When I look in PyQt4\Qwt5\ in the build\bdist.win32\winexe\collect-2.5 directory, Qwt.pyd is definitely there. I can't seem to find anything useful online regarding this error. What could cause it? Thanks.
[ "py2exe is not the only way, and maybe not the best way, to put together exe files for Python apps -- in particular, it hardly if at all supports pyqt. Please, I beseech you, check out PyInstaller, which DOES know about PyQt (and Linux, and Mac, should you care...) -- just make sure you use the SVN head checkout, not the \"released\" version, which at this time is seriously out of date (an issue that's hopefully going away soon...).\n", "Some options:\n\nTry playing with the py2xe bundle_files options (3, 2, 1) (especially if you put them all in one big library zip, some dlls don't like that).\nMake sure a copy of msvcp71.dll exists under windows\\system32 or in the directory of your executable.\nTry excluding the dll explicitely (add Qwt.pyd to the dll_excludes option and (after building) copy Qwt.pyd (and _Qwt.pyd if it exists) to your executable path.\n\n" ]
[ 4, 1 ]
[]
[]
[ "py2exe", "pyqt", "python" ]
stackoverflow_0000899658_py2exe_pyqt_python.txt
Q: python web framework focusing on json-oriented web applications I'm looking for a Python equivalent of Ruby's Halcyon - a framework focused on "web service"-type applications rather than HTML-page-oriented ones. Google brings up a lot of example code and experiments, but I couldn't find anything that people were using in production and hammering on. Failing that, what is the best web framework to use for this purpose? I'm looking for something small and lightweight, emphasising robustness and speed rather than features. Also, by speed I simply mean low latency overhead, not the ability to handle thousands of requests per second. A: Based on your comment, it sounds like one of the microframeworks may be what you're looking for. A: Why not use Django? You can return JSON with it, so it's not a problem. At the same time, you get a good, well-tested framework...
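To make the Django suggestion concrete, a minimal sketch of a JSON-returning view, using the standard-library json module available from Python 2.6 (view and payload names are made up):
from django.http import HttpResponse
import json

def status(request):
    payload = {'ok': True, 'items': [1, 2, 3]}
    # Serialize by hand and set the JSON content type on the response.
    return HttpResponse(json.dumps(payload),
                        content_type='application/json')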
python web framework focusing on json-oriented web applications
I'm looking for a Python equivalent of Ruby's Halcyon - a framework focused on "web service"-type applications rather than HTML-page-oriented ones. Google brings up a lot of example code and experiments, but I couldn't find anything that people were using in production and hammering on. Failing that, what is the best web framework to use for this purpose? I'm looking for something small and lightweight, emphasising robustness and speed rather than features. Also, by speed I simply mean low latency overhead, not the ability to handle thousands of requests per second.
[ "Based on you comment, it sounds like one of the microframeworks may be what you're looking for.\n", "Why not use django? You can return a json with it, so it's not a problem. At the same time, you get good, well-tested framework... \n" ]
[ 4, 3 ]
[]
[]
[ "json", "python", "web_applications", "web_services" ]
stackoverflow_0000955751_json_python_web_applications_web_services.txt
Q: Getting column info in cx_oracle when table is empty? I am working on a handler for the Python logging module that essentially logs to an Oracle database. I am using cx_Oracle, and something I don't know how to get is the column information when the table is empty. cursor.execute('select * from FOO') for row in cursor: # this is never executed because cursor has no rows print '%s\n' % row.description # This prints none row = cursor.fetchone() print str(row) row = cursor.fetchvars # prints useful info for each in row: print each The output is: None <cx_Oracle.DATETIME with value [None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None]> <cx_Oracle.STRING with value [None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None]> Now, looking at the vars data, I can see the data types and their sizes (count the Nones?), but that's missing the column names. How can I go about this? A: I think the description attribute may be what you are looking for. This returns a list of tuples that describe the columns of the data returned. It works quite happily if there are no rows returned, for example: >>> import cx_Oracle >>> c = cx_Oracle.connect("username", "password") >>> cr = c.cursor() >>> cr.execute("select * from dual where 1=0") <__builtin__.OracleCursor on <cx_Oracle.Connection to user username@local>> >>> cr.description [('DUMMY', <type 'cx_Oracle.STRING'>, 1, 1, 0, 0, 1)]
Getting column info in cx_oracle when table is empty?
I am working on a handler for the Python logging module that essentially logs to an Oracle database. I am using cx_Oracle, and something I don't know how to get is the column information when the table is empty. cursor.execute('select * from FOO') for row in cursor: # this is never executed because cursor has no rows print '%s\n' % row.description # This prints none row = cursor.fetchone() print str(row) row = cursor.fetchvars # prints useful info for each in row: print each The output is: None <cx_Oracle.DATETIME with value [None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None]> <cx_Oracle.STRING with value [None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None]> Now, looking at the vars data, I can see the data types and their sizes (count the Nones?), but that's missing the column names. How can I go about this?
[ "I think the description attribute may be what you are looking for. This returns a list of tuples that describe the columns of the data returned. It works quite happily if there are no rows returned, for example:\n\n>>> import cx_Oracle\n>>> c = cx_Oracle.connect(\"username\", \"password\")\n>>> cr = c.cursor()\n>>> cr.execute(\"select * from dual where 1=0\")\n<__builtin__.OracleCursor on <cx_Oracle.Connection to user username@local>>\n>>> cr.description\n[('DUMMY', <type 'cx_Oracle.STRING'>, 1, 1, 0, 0, 1)]\n\n" ]
[ 14 ]
[]
[]
[ "cx_oracle", "python" ]
stackoverflow_0000956085_cx_oracle_python.txt
Q: retrieving current drive letters in windows (from python) Possible Duplicate: Is there a way to list all the available drive letters in python? What's the way to retrieve the currently mounted drive list in Windows, including thumb drives and mapped network drives? I currently do drives=[] for c in string.lowercase: if os.path.isdir(c+':'): drives.append(c+':') which looks ugly, but is completely KISS proof :) A: win32api.GetLogicalDriveStrings().split("\x00")
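A slightly fuller sketch of the win32api answer: GetLogicalDriveStrings returns one NUL-separated string with a trailing NUL, so filter out the empty tail after splitting:
import win32api

drives = [d for d in win32api.GetLogicalDriveStrings().split('\x00') if d]
print drives  # e.g. ['C:\\', 'D:\\', 'E:\\']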
retrieving current drive letters in windows (from python)
Possible Duplicate: Is there a way to list all the available drive letters in python? What's the way to retrieve the currently mounted drive list in Windows, including thumb drives and mapped network drives? I currently do drives=[] for c in string.lowercase: if os.path.isdir(c+':'): drives.append(c+':') which looks ugly, but is completely KISS proof :)
[ "win32api.GetLogicalDriveStrings().split(\"\\x00\")\n\n" ]
[ 3 ]
[]
[]
[ "python", "windows" ]
stackoverflow_0000956971_python_windows.txt
Q: Clean Python Regular Expressions Is there a cleaner way to write long regex patterns in Python? I saw this approach somewhere, but Python's re functions don't accept a list of patterns. patterns = [ re.compile(r'<!--([^->]|(-+[^->])|(-?>))*-{2,}>'), re.compile(r'\n+|\s{2}') ] A: You can use verbose mode to write more readable regular expressions. In this mode: Whitespace within the pattern is ignored, except when in a character class or preceded by an unescaped backslash. When a line contains a '#' that is neither in a character class nor preceded by an unescaped backslash, all characters from the leftmost such '#' through the end of the line are ignored. The following two statements are equivalent: a = re.compile(r"""\d + # the integral part \. # the decimal point \d * # some fractional digits""", re.X) b = re.compile(r"\d+\.\d*") (Taken from the documentation of verbose mode) A: Though @Ayman's suggestion about re.VERBOSE is a better idea, if all you want is what you're showing, just do: patterns = re.compile( r'<!--([^->]|(-+[^->])|(-?>))*-{2,}>' r'\n+|\s{2}' ) and Python's automatic concatenation of adjacent string literals (much like C's, btw) will do the rest;-). A: You can use comments in regexes, which make them much more readable. Taking an example from http://gnosis.cx/publish/programming/regular_expressions.html : / # identify URLs within a text file [^="] # do not match URLs in IMG tags like: # <img src="http://mysite.com/mypic.png"> http|ftp|gopher # make sure we find a resource type :\/\/ # ...needs to be followed by colon-slash-slash [^ \n\r]+ # stuff other than space, newline, tab is in URL (?=[\s\.,]) # assert: followed by whitespace/period/comma /
Clean Python Regular Expressions
Is there a cleaner way to write long regex patterns in Python? I saw this approach somewhere, but regex in Python doesn't allow lists.

patterns = [
    re.compile(r'<!--([^->]|(-+[^->])|(-?>))*-{2,}>'),
    re.compile(r'\n+|\s{2}')
]
[ "You can use verbose mode to write more readable regular expressions. In this mode:\n\nWhitespace within the pattern is ignored, except when in a character class or preceded by an unescaped backslash.\nWhen a line contains a '#' neither in a character class or preceded by an unescaped backslash, all characters from the leftmost such '#' through the end of the line are ignored.\n\nThe following two statements are equivalent:\na = re.compile(r\"\"\"\\d + # the integral part\n \\. # the decimal point\n \\d * # some fractional digits\"\"\", re.X)\n\nb = re.compile(r\"\\d+\\.\\d*\")\n\n(Taken from the documentation of verbose mode)\n", "Though @Ayman's suggestion about re.VERBOSE is a better idea, if all you want is what you're showing, just do:\npatterns = re.compile(\n r'<!--([^->]|(-+[^->])|(-?>))*-{2,}>'\n r'\\n+|\\s{2}'\n)\n\nand Python's automatic concatenation of adjacent string literals (much like C's, btw) will do the rest;-).\n", "You can use comments in regex's, which make them much more readable. Taking an example from http://gnosis.cx/publish/programming/regular_expressions.html :\n/ # identify URLs within a text file\n [^=\"] # do not match URLs in IMG tags like:\n # <img src=\"http://mysite.com/mypic.png\">\nhttp|ftp|gopher # make sure we find a resource type\n :\\/\\/ # ...needs to be followed by colon-slash-slash\n [^ \\n\\r]+ # stuff other than space, newline, tab is in URL\n (?=[\\s\\.,]) # assert: followed by whitespace/period/comma \n/\n\n" ]
[ 31, 13, 2 ]
[]
[]
[ "list", "python", "regex" ]
stackoverflow_0000958853_list_python_regex.txt
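As a sketch, here is the question's first pattern rewritten under re.VERBOSE as the top answer suggests; the grouping and behaviour are unchanged, only whitespace and comments are added:

import re

comment_re = re.compile(r"""
    <!--              # opening delimiter
    (
        [^->]         # any character that is not '-' or '>'
      | (-+[^->])     # dashes that do not close the comment
      | (-?>)         # '>' optionally preceded by a single dash
    )*
    -{2,}>            # closing '-->' (two or more dashes)
    """, re.VERBOSE)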
Q: pyExcelerator or xlrd - How to FIND/SEARCH a row for the given few column data? Python communicating with EXCEL... i need to find a way so that I can find/search a row for given column datas. Now, i m scanning entire rows one by one... It would be useful, If there is some functions like FIND/SEARCH/REPLACE .... I dont see these features in pyExcelerator or xlrd modules.. I dont want to use win32com modules! it makes my tool windows based! FIND/SEARCH Excel rows through Python.... Any idea, anybody? A: @John Fouhy: [I'm the maintainer of xlwt, and author of xlrd] The spreadsheet-reading part of pyExcelerator was so severely deprecated that it vanished completely out of xlwt. To read any XLS files created by Excel 2.0 up to 11.0 (Excel 2003) or compatible software, using Python 2.1+, use xlrd That "simple optimi[sz]ation" isn't needed with xlrd: import xlrd book = xlrd.open_workbook("foo.xls") sheet = book.sheet_by_number(0) # alternatively: sheet_by_name("Budget") for row_index in xrange(sheet.nrows): for col_index in xrange(sheet.ncols): A: "Now, i m scanning entire rows one by one" What's wrong with that? "search" -- in a spreadsheet context -- is really complicated. Search values? Search formulas? Search down rows then across columns? Search specific columns only? Search specific rows only? A spreadsheet isn't simple text -- simple text processing design patterns don't apply. Spreadsheet search is hard and you're doing it correctly. There's nothing better because it's hard. A: You can't. Those tools don't offer search capabilities. You must iterate over the data in a loop and search yourself. Sorry. A: With pyExcelerator you can do a simple optimization by finding the maximum row and column indices first (and storing them), so that you iterate over (row, i) for i in range(maxcol+1) instead of iterating over all the dictionary keys. That may be the best you get, unless you want to go through and build up a dictionary mapping value to set of keys. Incidentally, if you're using pyExcelerator to write spreadsheets, be aware that it has some bugs. I've encountered one involving writing integers between 230 and 232 (or thereabouts). The original author is apparently hard to contact these days, so xlwt is a fork that fixes the (known) bugs. For writing spreadsheets, it's a drop-in replacement for pyExcelerator; you could do import xlwt as pyExcelerator and change nothing else. It doesn't read spreadsheets, though.
pyExcelerator or xlrd - How to FIND/SEARCH a row for the given few column data?
Python communicating with Excel... I need to find a way to find/search a row for given column data. Right now I am scanning entire rows one by one. It would be useful if there were functions like FIND/SEARCH/REPLACE, but I don't see these features in the pyExcelerator or xlrd modules. I don't want to use the win32com modules; that would make my tool Windows-based! FIND/SEARCH Excel rows through Python... Any ideas, anybody?
[ "@John Fouhy: [I'm the maintainer of xlwt, and author of xlrd]\nThe spreadsheet-reading part of pyExcelerator was so severely deprecated that it vanished completely out of xlwt. To read any XLS files created by Excel 2.0 up to 11.0 (Excel 2003) or compatible software, using Python 2.1+, use xlrd\nThat \"simple optimi[sz]ation\" isn't needed with xlrd:\nimport xlrd\nbook = xlrd.open_workbook(\"foo.xls\")\nsheet = book.sheet_by_number(0) # alternatively: sheet_by_name(\"Budget\")\nfor row_index in xrange(sheet.nrows): \n for col_index in xrange(sheet.ncols):\n\n", "\"Now, i m scanning entire rows one by one\"\nWhat's wrong with that? \"search\" -- in a spreadsheet context -- is really complicated. Search values? Search formulas? Search down rows then across columns? Search specific columns only? Search specific rows only?\nA spreadsheet isn't simple text -- simple text processing design patterns don't apply.\nSpreadsheet search is hard and you're doing it correctly. There's nothing better because it's hard. \n", "You can't. Those tools don't offer search capabilities. You must iterate over the data in a loop and search yourself. Sorry.\n", "With pyExcelerator you can do a simple optimization by finding the maximum row and column indices first (and storing them), so that you iterate over (row, i) for i in range(maxcol+1) instead of iterating over all the dictionary keys. That may be the best you get, unless you want to go through and build up a dictionary mapping value to set of keys.\nIncidentally, if you're using pyExcelerator to write spreadsheets, be aware that it has some bugs. I've encountered one involving writing integers between 230 and 232 (or thereabouts). The original author is apparently hard to contact these days, so xlwt is a fork that fixes the (known) bugs. For writing spreadsheets, it's a drop-in replacement for pyExcelerator; you could do import xlwt as pyExcelerator and change nothing else. It doesn't read spreadsheets, though.\n" ]
[ 6, 2, 2, 0 ]
[]
[]
[ "excel", "pyexcelerator", "python", "search", "xlrd" ]
stackoverflow_0000778093_excel_pyexcelerator_python_search_xlrd.txt
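A minimal sketch of the linear scan the answers describe, wrapped as a helper; the file name and target value are placeholders:

import xlrd

def find_rows(path, col_index, target):
    # xlrd has no built-in FIND, so a linear scan is as good as it gets.
    book = xlrd.open_workbook(path)
    sheet = book.sheet_by_index(0)
    hits = []
    for row_index in xrange(sheet.nrows):
        if sheet.cell_value(row_index, col_index) == target:
            hits.append(row_index)
    return hits

print find_rows("foo.xls", 0, u"some value")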
Q: PyQt and context menu I need to create a context menu on right clicking at my window. But I really don't know how to achieve that. Are there any widgets for that, or I have to create it from the beginning? Programming language: Python Graphical lib: Qt (PyQt) A: I can't speak for python, but it's fairly easy in C++. first after creating the widget you set the policy: w->setContextMenuPolicy(Qt::CustomContextMenu); then you connect the context menu event to a slot: connect(w, SIGNAL(customContextMenuRequested(const QPoint &)), this, SLOT(ctxMenu(const QPoint &))); Finally, you implement the slot: void A::ctxMenu(const QPoint &pos) { QMenu *menu = new QMenu; menu->addAction(tr("Test Item"), this, SLOT(test_slot())); menu->exec(w->mapToGlobal(pos)); } that's how you do it in c++ , shouldn't be too different in the python API. EDIT: after looking around on google, here's the setup portion of my example in python: self.w = QWhatever(); self.w.setContextMenuPolicy(Qt.CustomContextMenu) self.connect(self.w,SIGNAL('customContextMenuRequested(QPoint)'), self.ctxMenu) A: Another example which shows how to use actions in a toolbar and context menu. class Foo( QtGui.QWidget ): def __init__(self): QtGui.QWidget.__init__(self, None) mainLayout = QtGui.QVBoxLayout() self.setLayout(mainLayout) # Toolbar toolbar = QtGui.QToolBar() mainLayout.addWidget(toolbar) # Action are added/created using the toolbar.addAction # which creates a QAction, and returns a pointer.. # .. instead of myAct = new QAction().. toolbar.AddAction(myAct) # see also menu.addAction and others self.actionAdd = toolbar.addAction("New", self.on_action_add) self.actionEdit = toolbar.addAction("Edit", self.on_action_edit) self.actionDelete = toolbar.addAction("Delete", self.on_action_delete) self.actionDelete.setDisabled(True) # Tree self.tree = QtGui.QTreeView() mainLayout.addWidget(self.tree) self.tree.setContextMenuPolicy( Qt.CustomContextMenu ) self.connect(self.tree, QtCore.SIGNAL('customContextMenuRequested(const QPoint&)'), self.on_context_menu) # Popup Menu is not visible, but we add actions from above self.popMenu = QtGui.QMenu( self ) self.popMenu.addAction( self.actionEdit ) self.popMenu.addAction( self.actionDelete ) self.popMenu.addSeparator() self.popMenu.addAction( self.actionAdd ) def on_context_menu(self, point): self.popMenu.exec_( self.tree.mapToGlobal(point) )
PyQt and context menu
I need to create a context menu on right clicking at my window. But I really don't know how to achieve that. Are there any widgets for that, or I have to create it from the beginning? Programming language: Python Graphical lib: Qt (PyQt)
[ "I can't speak for python, but it's fairly easy in C++.\nfirst after creating the widget you set the policy:\nw->setContextMenuPolicy(Qt::CustomContextMenu);\n\nthen you connect the context menu event to a slot:\nconnect(w, SIGNAL(customContextMenuRequested(const QPoint &)), this, SLOT(ctxMenu(const QPoint &)));\n\nFinally, you implement the slot:\nvoid A::ctxMenu(const QPoint &pos) {\n QMenu *menu = new QMenu;\n menu->addAction(tr(\"Test Item\"), this, SLOT(test_slot()));\n menu->exec(w->mapToGlobal(pos));\n}\n\nthat's how you do it in c++ , shouldn't be too different in the python API.\nEDIT: after looking around on google, here's the setup portion of my example in python:\nself.w = QWhatever();\nself.w.setContextMenuPolicy(Qt.CustomContextMenu)\nself.connect(self.w,SIGNAL('customContextMenuRequested(QPoint)'), self.ctxMenu)\n\n", "Another example which shows how to use actions in a toolbar and context menu.\nclass Foo( QtGui.QWidget ):\n\n def __init__(self):\n QtGui.QWidget.__init__(self, None)\n mainLayout = QtGui.QVBoxLayout()\n self.setLayout(mainLayout)\n\n # Toolbar\n toolbar = QtGui.QToolBar()\n mainLayout.addWidget(toolbar)\n\n # Action are added/created using the toolbar.addAction\n # which creates a QAction, and returns a pointer..\n # .. instead of myAct = new QAction().. toolbar.AddAction(myAct)\n # see also menu.addAction and others\n self.actionAdd = toolbar.addAction(\"New\", self.on_action_add)\n self.actionEdit = toolbar.addAction(\"Edit\", self.on_action_edit)\n self.actionDelete = toolbar.addAction(\"Delete\", self.on_action_delete)\n self.actionDelete.setDisabled(True)\n\n # Tree\n self.tree = QtGui.QTreeView()\n mainLayout.addWidget(self.tree)\n self.tree.setContextMenuPolicy( Qt.CustomContextMenu )\n self.connect(self.tree, QtCore.SIGNAL('customContextMenuRequested(const QPoint&)'), self.on_context_menu)\n\n # Popup Menu is not visible, but we add actions from above\n self.popMenu = QtGui.QMenu( self )\n self.popMenu.addAction( self.actionEdit )\n self.popMenu.addAction( self.actionDelete )\n self.popMenu.addSeparator()\n self.popMenu.addAction( self.actionAdd )\n\n def on_context_menu(self, point):\n\n self.popMenu.exec_( self.tree.mapToGlobal(point) )\n\n" ]
[ 42, 15 ]
[]
[]
[ "menu", "pyqt", "python", "qt" ]
stackoverflow_0000782255_menu_pyqt_python_qt.txt
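A compact, runnable PyQt4 variant of the answers above, using the new-style signal syntax (the old SIGNAL() form in the answers works the same way); the menu item text is a placeholder:

import sys
from PyQt4 import QtGui, QtCore

class Window(QtGui.QWidget):
    def __init__(self):
        QtGui.QWidget.__init__(self)
        self.setContextMenuPolicy(QtCore.Qt.CustomContextMenu)
        self.customContextMenuRequested.connect(self.on_context_menu)

    def on_context_menu(self, point):
        menu = QtGui.QMenu(self)
        menu.addAction("Test Item")
        # point is widget-relative, so map it to screen coordinates
        menu.exec_(self.mapToGlobal(point))

app = QtGui.QApplication(sys.argv)
w = Window()
w.show()
sys.exit(app.exec_())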
Q: What are best practices for developing consistent libraries? I am working on developing a pair of libraries to work with a REST API. Because I need to be able to use the API in very different settings I'm currently planning to have a version in PHP (for web applications) and a second version in Python (for desktop applications, and long running processes). Are there any best practices to follow in the development of the libraries to help maintain my own sanity? A: So, the problem with developing parallel libraries in different languages is that often times different languages will have different idioms for the same task. I know this from personal experience, having ported a library from Python to PHP. Idioms aren't just naming: for example, Python has a good deal of magic you can use with getters and setters to make object properties act magical; Python has monkeypatching; Python has named parameters. With a port, you want to pick a "base" language, and then attempt to mimic all the idioms in the other language (not easy to do); for parallel development, not doing anything too tricky and catering to the least common denominator is preferable. Then bolt on the syntax sugar. A: 'Be your own client' : I've found that the technique of writing tests first is an excellent way of ensuring an API is easy to use. Writing tests first means you will be thinking like a 'consumer' of your API rather than just an implementor. A: Try to write a common unit test suite for both. Maybe by wrapping a class in one language for calling it from the other. If you can't do it, at least make sure the two versions of the tests are equivalent. A: Well, the obvious one would be to keep your naming consistent. Functions and classes should be named similarly (if not identically) in both implementations. This usually happens naturally whenever you implement an API separately in two different languages. The big ticket item though (at least in my book) is to follow language-specific idioms. For example, let's assume that I were implementing a REST API in two languages I'm more familiar with: Ruby and Scala. The Ruby version might have a class MyCompany::Foo which contains method bar_baz(). Conversely, the Scala version of the same API would have a class com.mycompany.rest.Foo with a method barBaz(). It's just naming conventions, but I find it goes a long way to helping your API to feel "at home" in a particular language, even when the design was created elsewhere. Beyond that I have only one piece of advise: document, document, document. That's easily the best way to keep your sanity when dealing with a multi-implementation API spec. A: AFAIKT there are a lot of bridges from to scripting languages. Let's take e.g Jruby, it's Ruby + Java, then there are things to embed Ruby in Python (or the other way). Then there are examples like Etoile where the base is Objective-C but also bridges to Python and Smalltalk, another approach on wide use: Wrapping C libraries, examples are libxml2, libcurl etc etc. Maybe this could be the base. Let's say your write all for Python but do implement a bridge to PHP. So you do not have that much parrallel development. Or maybe it's not the worst idea to base that stuff let's say on .NET, then you suddenly have a whole bunch of languages to your disposal which in principal should be usable from every other language on the .NET platform. A: why not use python for web applications too? 
There are several frameworks available: Django; web2py, which is similar to Django but which many say is simpler to use; and also TurboGears, web.py, and Pylons. Along the lines of bridging, you could use interprocess communication to have the PHP and Python applications (in daemon mode) talk to each other.
What are best practices for developing consistent libraries?
I am working on developing a pair of libraries to work with a REST API. Because I need to be able to use the API in very different settings I'm currently planning to have a version in PHP (for web applications) and a second version in Python (for desktop applications, and long running processes). Are there any best practices to follow in the development of the libraries to help maintain my own sanity?
[ "So, the problem with developing parallel libraries in different languages is that often times different languages will have different idioms for the same task. I know this from personal experience, having ported a library from Python to PHP. Idioms aren't just naming: for example, Python has a good deal of magic you can use with getters and setters to make object properties act magical; Python has monkeypatching; Python has named parameters.\nWith a port, you want to pick a \"base\" language, and then attempt to mimic all the idioms in the other language (not easy to do); for parallel development, not doing anything too tricky and catering to the least common denominator is preferable. Then bolt on the syntax sugar.\n", "'Be your own client' : I've found that the technique of writing tests first is an excellent way of ensuring an API is easy to use. Writing tests first means you will be thinking like a 'consumer' of your API rather than just an implementor. \n", "Try to write a common unit test suite for both. Maybe by wrapping a class in one language for calling it from the other. If you can't do it, at least make sure the two versions of the tests are equivalent.\n", "Well, the obvious one would be to keep your naming consistent. Functions and classes should be named similarly (if not identically) in both implementations. This usually happens naturally whenever you implement an API separately in two different languages. The big ticket item though (at least in my book) is to follow language-specific idioms. For example, let's assume that I were implementing a REST API in two languages I'm more familiar with: Ruby and Scala. The Ruby version might have a class MyCompany::Foo which contains method bar_baz(). Conversely, the Scala version of the same API would have a class com.mycompany.rest.Foo with a method barBaz(). It's just naming conventions, but I find it goes a long way to helping your API to feel \"at home\" in a particular language, even when the design was created elsewhere.\nBeyond that I have only one piece of advise: document, document, document. That's easily the best way to keep your sanity when dealing with a multi-implementation API spec.\n", "AFAIKT there are a lot of bridges from to scripting languages. Let's take e.g Jruby, it's Ruby + Java, then there are things to embed Ruby in Python (or the other way). Then there are examples like Etoile where the base is Objective-C but also bridges to Python and Smalltalk, another approach on wide use: Wrapping C libraries, examples are libxml2, libcurl etc etc. Maybe this could be the base. Let's say your write all for Python but do implement a bridge to PHP. So you do not have that much parrallel development. \nOr maybe it's not the worst idea to base that stuff let's say on .NET, then you suddenly have a whole bunch of languages to your disposal which in principal should be usable from every other language on the .NET platform.\n", "why not use python for web applications too? there are several frameworks available: django, web2py - similar to django but many say it's simpler to use, there is also TurboGears, web.py, Pylons\nalong the lines of bridging - you could use interprocess communication to have PHP and python application (in daemon mode) talk to each other.\n" ]
[ 6, 2, 2, 0, 0, 0 ]
[]
[]
[ "api", "php", "python", "rest" ]
stackoverflow_0000193701_api_php_python_rest.txt
Q: How can I determine if one PGArray is included in another using SQLAlchemy sessions? I have an SqlAlchemy table like so: table = sql.Table('treeItems', META, sql.Column('id', sql.Integer(), primary_key=True), sql.Column('type', sql.String, nullable=False), sql.Column('parentId', sql.Integer, sql.ForeignKey('treeItems.id')), sql.Column('lineage', PGArray(sql.Integer)), sql.Column('depth', sql.Integer), ) Which is mapped to an object like so: orm.mapper(TreeItem, TreeItem.table, polymorphic_on=TreeItem.table.c.type, polymorphic_identity='TreeItem') I'd like to select any child node of a given node so what I'm looking for is SQL that looks like this (for a parent with pk=2): SELECT * FROM "treeItems" WHERE ARRAY[2] <@ "treeItems".lineage AND "treeItems".id != 2 ORDER BY "treeItems".lineage Here is the SqlAlchemy/Python code I use to try to get to the above SQL with little luck: arrayStr = 'ARRAY[%s]' % ','.join([str(i) for i in self.lineage]) lineageFilter = expr.text('%s <@ %s' % (arrayStr, TreeItem.table.c.lineage)) query = SESSION.query(TreeItem).filter(expr.and_(lineageFilter, TreeItem.table.c.id!=self.id)) But here is the SQL I wind up with (notice the lack of quotes around the treeItems table name in the where clause): SELECT "treeItems".id AS "treeItems_id", "treeItems".type AS "treeItems_type", "treeItems"."parentId" AS "treeItems_parentId", "treeItems".lineage AS "treeItems_lineage", "treeItems".depth AS "treeItems_depth" FROM "treeItems" WHERE ARRAY[2] <@ treeItems.lineage AND "treeItems".id != %(id_1)s So, now for the questions: Is there a better way to do this than to use the text() expression / Is there an operator or expression in SqlAlchemy that can do <@ with PGArray? How can I get the quotes to show up around my table name if I must use the text() expression? Thanks everyone! A: SQLAlchemy's clause elements have an .op() method for custom operators. What isn't available is a special clause for array literals. You can specify the array literal with literal_column: print sql.literal_column('ARRAY[2]').op('<@')(table.c.lineage) # ARRAY[2] <@ "treeItems".lineage If you want a better API for array literals, then you can create it with the sqlalchemy.ext.compiler module added in SQLAlchemy 0.5.4. A: In this particular case I noticed that the quoting in the SQL was due to the fact I was using a table name that was mixed case. Converting the table name from 'treeItems' to 'tree_items' resolved the quoting issue and I was able to get my text expression to work: expr.text('%s <@ %s' % (arrayStr, TreeItem.table.c.lineage)) It is a fix and it is nice to know that mixed case table names need to be quoted but Ants' answer remains the proper way to address the issue.
How can I determine if one PGArray is included in another using SQLAlchemy sessions?
I have an SqlAlchemy table like so:

table = sql.Table('treeItems', META,
    sql.Column('id', sql.Integer(), primary_key=True),
    sql.Column('type', sql.String, nullable=False),
    sql.Column('parentId', sql.Integer, sql.ForeignKey('treeItems.id')),
    sql.Column('lineage', PGArray(sql.Integer)),
    sql.Column('depth', sql.Integer),
)

Which is mapped to an object like so:

orm.mapper(TreeItem, TreeItem.table, polymorphic_on=TreeItem.table.c.type, polymorphic_identity='TreeItem')

I'd like to select any child node of a given node, so what I'm looking for is SQL that looks like this (for a parent with pk=2):

SELECT * FROM "treeItems"
WHERE ARRAY[2] <@ "treeItems".lineage AND "treeItems".id != 2
ORDER BY "treeItems".lineage

Here is the SqlAlchemy/Python code I use to try to get to the above SQL, with little luck:

arrayStr = 'ARRAY[%s]' % ','.join([str(i) for i in self.lineage])
lineageFilter = expr.text('%s <@ %s' % (arrayStr, TreeItem.table.c.lineage))
query = SESSION.query(TreeItem).filter(expr.and_(lineageFilter, TreeItem.table.c.id!=self.id))

But here is the SQL I wind up with (notice the lack of quotes around the treeItems table name in the WHERE clause):

SELECT "treeItems".id AS "treeItems_id", "treeItems".type AS "treeItems_type", "treeItems"."parentId" AS "treeItems_parentId", "treeItems".lineage AS "treeItems_lineage", "treeItems".depth AS "treeItems_depth"
FROM "treeItems"
WHERE ARRAY[2] <@ treeItems.lineage AND "treeItems".id != %(id_1)s

So, now for the questions:

Is there a better way to do this than to use the text() expression? / Is there an operator or expression in SqlAlchemy that can do <@ with PGArray?
How can I get the quotes to show up around my table name if I must use the text() expression?

Thanks everyone!
[ "SQLAlchemy's clause elements have an .op() method for custom operators. What isn't available is a special clause for array literals. You can specify the array literal with literal_column:\nprint sql.literal_column('ARRAY[2]').op('<@')(table.c.lineage)\n# ARRAY[2] <@ \"treeItems\".lineage\n\nIf you want a better API for array literals, then you can create it with the sqlalchemy.ext.compiler module added in SQLAlchemy 0.5.4.\n", "In this particular case I noticed that the quoting in the SQL was due to the fact I was using a table name that was mixed case. Converting the table name from 'treeItems' to 'tree_items' resolved the quoting issue and I was able to get my text expression to work:\nexpr.text('%s <@ %s' % (arrayStr, TreeItem.table.c.lineage))\nIt is a fix and it is nice to know that mixed case table names need to be quoted but Ants' answer remains the proper way to address the issue.\n" ]
[ 4, 0 ]
[]
[]
[ "arrays", "postgresql", "python", "sqlalchemy" ]
stackoverflow_0000957762_arrays_postgresql_python_sqlalchemy.txt
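A sketch assembling the first answer's literal_column/op approach into the question's original query; TreeItem and SESSION are assumed to be the objects defined in the question, and parent_id stands in for self.id:

from sqlalchemy import sql

parent_id = 2   # stands in for self.id
# int() matters here: literal_column text is not escaped, so only
# interpolate values you have coerced to a safe type yourself.
array_literal = sql.literal_column('ARRAY[%d]' % int(parent_id))
lineage_clause = array_literal.op('<@')(TreeItem.table.c.lineage)

query = (SESSION.query(TreeItem)
                .filter(lineage_clause)
                .filter(TreeItem.table.c.id != parent_id)
                .order_by(TreeItem.table.c.lineage))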
Q: Find all possible factors in KenKen puzzle 'multiply' domain A KenKen puzzle is a Latin square divided into edge-connected domains: a single cell, two adjacent cells within the same row or column, three cells arranged in a row or in an ell, etc. Each domain has a label which gives a target number and a single arithmetic operation (+-*/) which is to be applied to the numbers in the cells of the domain to yield the target number. (If the domain has just one cell, there is no operator given, just a target --- the square is solved for you. If the operator is - or /, then there are just two cells in the domain.) The aim of the puzzle is to (re)construct the Latin square consistent with the domains' boundaries and labels. (I think that I have seen a puzzle with a non-unique solution just once.) The number in a cell can range from 1 to the width (height) of the puzzle; commonly, puzzles are 4 or 6 cells on a side, but consider puzzles of any size. Domains in published puzzles (4x4 or 6x6) typically have no more than 5 cells, but, again, this does not seem to be a hard limit. (However, if the puzzle had just one domain, there would be as many solutions as there are Latin squares of that dimension...) A first step to writing a KenKen solver would be to have routines that can produce the possible combinations of numbers in any domain, at first neglecting the domain's geometry. (A linear domain, like a line of three cells, cannot have duplicate numbers in the solved puzzle, but we ignore this for the moment.) I've been able to write a Python function which handles addition labels case by case: give it the width of the puzzle, the number of cells in the domain, and the target sum, and it returns a list of tuples of valid numbers adding up to the target. The multiplication case eludes me. I can get a dictionary with keys equal to the products attainable in a domain of a given size in a puzzle of a given size, with the values being lists of tuples containing the factors giving the product, but I can't work out a case-by-case routine, not even a bad one. Factoring a given product into primes seems easy, but then partitioning the list of primes into the desired number of factors stumps me. (I have meditated on Fascicle 3 of Volume 4 of Knuth's TAOCP, but I have not learned how to 'grok' his algorithm descriptions, so I do not know whether his algorithms for set partitioning would be a starting point. Understanding Knuth's descriptions could be another question!) I'm quite happy to precompute the 'multiply' dictionaries for common domain and puzzle sizes and just chalk the loading time up to overhead, but that approach would not seem an efficient way to deal with, say, puzzles 100 cells on a side and domains from 2 to 50 cells in size. A: Simplified goal: you need to enumerate all integer combinations that multiply together to form a certain product, where the number of integers is fixed. To solve this, all you need is a prime factorization of your target number, and then use a combinatorial approach to form all possible sub-products from these factors. (There are also a few other constraints of the puzzle that are easy to include once you have all possible sub-products, like no entry can be great than max_entry, and you have a fixed number of integers to use, n_boxes_in_domain.) For example, if max_entry=6, n_boxes_in_domain=3, and the target_number=20: 20 yields (2, 2, 5); which goes to (2, 2, 5) and (1, 4, 5). The trick to this is to form all possible sub-products, and the code below does this. 
It works by looping through the factors forming all possible single pairs, and then doing this recursively, to give all possible sets of all single or multiple pairings. (It's inefficiently, but even large numbers have a small prime factorization): def xgroup(items): L = len(items) for i in range(L-1): for j in range(1, L): temp = list(items) a = temp.pop(j) b = temp.pop(i) temp.insert(0, a*b) yield temp for x in xgroup(temp): yield x def product_combos(max_entry, n_boxes, items): r = set() if len(items)<=n_boxes: r.add(tuple(items)) for i in xgroup(items): x = i[:] x.sort() if x[-1]<=max_entry and len(x)<=n_boxes: r.add(tuple(x)) r = [list(i) for i in r] r.sort() for i in r: while len(i)<n_boxes: i.insert(0, 1) return r I'll leave it to you to generate the prime factors, but this seems to work for max_entry=6, n_boxes=3, items=(2,2,5) [2, 2, 5] [1, 4, 5] and for a harder case where, say target_number=2106 max_entry=50, n_boxes=6, items=(2,3,3,3,3,13) [2, 3, 3, 3, 3, 13] [1, 2, 3, 3, 3, 39] [1, 2, 3, 3, 9, 13] [1, 1, 2, 3, 9, 39] [1, 1, 2, 3, 13, 27] [1, 1, 2, 9, 9, 13] [1, 1, 1, 2, 27, 39] [1, 3, 3, 3, 3, 26] [1, 3, 3, 3, 6, 13] [1, 1, 3, 3, 6, 39] [1, 1, 3, 3, 9, 26] [1, 1, 3, 3, 13, 18] [1, 1, 3, 6, 9, 13] [1, 1, 1, 3, 18, 39] [1, 1, 1, 3, 26, 27] [1, 1, 1, 6, 9, 39] [1, 1, 1, 6, 13, 27] [1, 1, 1, 9, 9, 26] [1, 1, 1, 9, 13, 18]
Find all possible factors in KenKen puzzle 'multiply' domain
A KenKen puzzle is a Latin square divided into edge-connected domains: a single cell, two adjacent cells within the same row or column, three cells arranged in a row or in an ell, etc. Each domain has a label which gives a target number and a single arithmetic operation (+-*/) which is to be applied to the numbers in the cells of the domain to yield the target number. (If the domain has just one cell, there is no operator given, just a target --- the square is solved for you. If the operator is - or /, then there are just two cells in the domain.) The aim of the puzzle is to (re)construct the Latin square consistent with the domains' boundaries and labels. (I think that I have seen a puzzle with a non-unique solution just once.) The number in a cell can range from 1 to the width (height) of the puzzle; commonly, puzzles are 4 or 6 cells on a side, but consider puzzles of any size. Domains in published puzzles (4x4 or 6x6) typically have no more than 5 cells, but, again, this does not seem to be a hard limit. (However, if the puzzle had just one domain, there would be as many solutions as there are Latin squares of that dimension...) A first step to writing a KenKen solver would be to have routines that can produce the possible combinations of numbers in any domain, at first neglecting the domain's geometry. (A linear domain, like a line of three cells, cannot have duplicate numbers in the solved puzzle, but we ignore this for the moment.) I've been able to write a Python function which handles addition labels case by case: give it the width of the puzzle, the number of cells in the domain, and the target sum, and it returns a list of tuples of valid numbers adding up to the target. The multiplication case eludes me. I can get a dictionary with keys equal to the products attainable in a domain of a given size in a puzzle of a given size, with the values being lists of tuples containing the factors giving the product, but I can't work out a case-by-case routine, not even a bad one. Factoring a given product into primes seems easy, but then partitioning the list of primes into the desired number of factors stumps me. (I have meditated on Fascicle 3 of Volume 4 of Knuth's TAOCP, but I have not learned how to 'grok' his algorithm descriptions, so I do not know whether his algorithms for set partitioning would be a starting point. Understanding Knuth's descriptions could be another question!) I'm quite happy to precompute the 'multiply' dictionaries for common domain and puzzle sizes and just chalk the loading time up to overhead, but that approach would not seem an efficient way to deal with, say, puzzles 100 cells on a side and domains from 2 to 50 cells in size.
[ "Simplified goal: you need to enumerate all integer combinations that multiply together to form a certain product, where the number of integers is fixed.\nTo solve this, all you need is a prime factorization of your target number, and then use a combinatorial approach to form all possible sub-products from these factors. (There are also a few other constraints of the puzzle that are easy to include once you have all possible sub-products, like no entry can be great than max_entry, and you have a fixed number of integers to use, n_boxes_in_domain.)\nFor example, if max_entry=6, n_boxes_in_domain=3, and the target_number=20: 20 yields (2, 2, 5); which goes to (2, 2, 5) and (1, 4, 5).\nThe trick to this is to form all possible sub-products, and the code below does this. It works by looping through the factors forming all possible single pairs, and then doing this recursively, to give all possible sets of all single or multiple pairings. (It's inefficiently, but even large numbers have a small prime factorization):\ndef xgroup(items):\n L = len(items)\n for i in range(L-1):\n for j in range(1, L):\n temp = list(items)\n a = temp.pop(j)\n b = temp.pop(i)\n temp.insert(0, a*b)\n yield temp\n for x in xgroup(temp):\n yield x\n\ndef product_combos(max_entry, n_boxes, items):\n r = set()\n if len(items)<=n_boxes:\n r.add(tuple(items))\n for i in xgroup(items):\n x = i[:]\n x.sort()\n if x[-1]<=max_entry and len(x)<=n_boxes:\n r.add(tuple(x))\n r = [list(i) for i in r]\n r.sort()\n for i in r:\n while len(i)<n_boxes:\n i.insert(0, 1)\n return r\n\nI'll leave it to you to generate the prime factors, but this seems to work for\nmax_entry=6, n_boxes=3, items=(2,2,5)\n[2, 2, 5]\n[1, 4, 5]\n\nand for a harder case where, say target_number=2106 \nmax_entry=50, n_boxes=6, items=(2,3,3,3,3,13)\n[2, 3, 3, 3, 3, 13]\n[1, 2, 3, 3, 3, 39]\n[1, 2, 3, 3, 9, 13]\n[1, 1, 2, 3, 9, 39]\n[1, 1, 2, 3, 13, 27]\n[1, 1, 2, 9, 9, 13]\n[1, 1, 1, 2, 27, 39]\n[1, 3, 3, 3, 3, 26]\n[1, 3, 3, 3, 6, 13]\n[1, 1, 3, 3, 6, 39]\n[1, 1, 3, 3, 9, 26]\n[1, 1, 3, 3, 13, 18]\n[1, 1, 3, 6, 9, 13]\n[1, 1, 1, 3, 18, 39]\n[1, 1, 1, 3, 26, 27]\n[1, 1, 1, 6, 9, 39]\n[1, 1, 1, 6, 13, 27]\n[1, 1, 1, 9, 9, 26]\n[1, 1, 1, 9, 13, 18]\n\n" ]
[ 5 ]
[]
[]
[ "algorithm", "partitioning", "prime_factoring", "python" ]
stackoverflow_0000958678_algorithm_partitioning_prime_factoring_python.txt
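The answer leaves prime factorization to the reader; a simple trial-division sketch fills the gap (product_combos below refers to the function defined in the answer):

def prime_factors(n):
    # Trial division: plenty fast for the products that occur in
    # KenKen domains, even on large puzzles.
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print prime_factors(2106)                          # [2, 3, 3, 3, 3, 13]
print product_combos(50, 6, prime_factors(2106))   # as in the answer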
Q: Parsing numbers in Python 
I want to take inputs like this:

10 12
13 14
15 16
...

How do I read this input as two different integers so that I can multiply them in Python? After every pair (e.g. 10 and 12) there is a newline.
A: I'm not sure I understood your problem very well; it seems you want to parse two ints separated by a space. In Python you do:

s = raw_input('Insert 2 integers separated by a space: ')
a,b = [int(i) for i in s.split(' ')]
print a*b

Explanation:

s = raw_input('Insert 2 integers separated by a space: ')

raw_input takes everything you type (until you press enter) and returns it as a string, so:

>>> raw_input('Insert 2 integers separated by a space: ')
Insert 2 integers separated by a space: 10 12
'10 12'

In s you now have '10 12', the two ints separated by a space; we split the string at the space with

>>> s.split(' ')
['10', '12']

Now you have a list of strings; you want to convert them to int, so:

>>> [int(i) for i in s.split(' ')]
[10, 12]

Then you assign each member of the list to a variable (a and b) and then you do the product a*b.
A: 
f = open('inputfile.txt')
for line in f.readlines():
    # the next line is equivalent to:
    # s1, s2 = line.split(' ')
    # a = int(s1)
    # b = int(s2)
    a, b = map(int, line.split(' '))
    print a*b

A: You could use regular expressions (re module):

import re

test = "10 11\n12 13" # Get this input from the files or the console

matches = re.findall(r"(\d+)\s*(\d+)", test)
products = [ int(a) * int(b) for a, b in matches ]

# Process data
print(products)
Parsing numbers in Python
I want to take inputs like this:

10 12
13 14
15 16
...

How do I read this input as two different integers so that I can multiply them in Python? After every pair (e.g. 10 and 12) there is a newline.
[ "I'm not sure I understood your problem very well, it seems you want to parse two int separated from a space.\nIn python you do:\ns = raw_input('Insert 2 integers separated by a space: ')\na,b = [int(i) for i in s.split(' ')]\nprint a*b\n\nExplanation:\ns = raw_input('Insert 2 integers separated by a space: ')\n\nraw_input takes everything you type (until you press enter) and returns it as a string, so:\n>>> raw_input('Insert 2 integers separated by a space: ')\nInsert 2 integers separated by a space: 10 12\n'10 12'\n\nIn s you have now '10 12', the two int are separated by a space, we split the string at the space with\n>>> s.split(' ')\n['10', '12']\n\nnow you have a list of strings, you want to convert them in int, so:\n>>> [int(i) for i in s.split(' ')]\n[10, 12]\n\nthen you assign each member of the list to a variable (a and b) and then you do the product a*b\n", "f = open('inputfile.txt')\nfor line in f.readlines():\n # the next line is equivalent to:\n # s1, s2 = line.split(' ')\n # a = int(s1)\n # b = int(s2)\n a, b = map(int, line.split(' '))\n print a*b\n\n", "You could use regular expressions (re-module)\nimport re\n\ntest = \"10 11\\n12 13\" # Get this input from the files or the console\n\nmatches = re.findall(r\"(\\d+)\\s*(\\d+)\", test)\nproducts = [ int(a) * int(b) for a, b in matches ]\n\n# Process data\nprint(products)\n\n" ]
[ 7, 2, 0 ]
[]
[]
[ "parsing", "python" ]
stackoverflow_0000959412_parsing_python.txt
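A variant of the second answer's loop that reads from standard input instead of a file and skips blank or malformed lines, as a sketch:

import sys

# Reads pairs from standard input, one "a b" pair per line.
for line in sys.stdin:
    parts = line.split()
    if len(parts) == 2:        # skip blank or malformed lines
        a, b = [int(p) for p in parts]
        print a * b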
Q: How to access the parent class during initialisation in python? How do I find out which class I am initialising a decorator in? It makes sense that I wouldn't be able to find this out as the decorator is not yet bound to the class, but is there a way of getting round this? class A(object): def dec(f): # I am in class 'A' def func(cls): f(cls) return func @dec def test(self): pass I need to know which class I am (indicated by the commented line). A: I don't think this is possible. At the very moment when you define test, the class doesn't exist yet. When Python encounters class A(object): it creates a new namespace in which it runs all code that it finds in the class definition (including the definition of test() and the call to the decorator), and when it's done, it creates a new class object and puts everything into this class that was left in the namespace after the code was executed. So when the decorator is called, it doesn't know anything yet. At this moment, test is just a function. A: I don't get the question. >>> class A(object): def dec(f): def func(cls): print cls return func @dec def test(self): pass >>> a=A() >>> a.test() <__main__.A object at 0x00C56330> >>> The argument (cls) is the class, A. A: As Nadia pointed out you will need to be more specific. Python does not allow this kind of things, which means that what you are trying to do is probably something wrong. In the meantime, here is my contribution: a little story about a sailor and a frog. (use a constructor after the class initialization) class Cruise(object): def arewelostyet(self): print 'Young sailor: I think I am lost, help me :s' instance = Cruise() instance.arewelostyet() def whereami(lostfunc): """ decorator """ def decorated(*args, **kwargs): lostfunc(*args, **kwargs) print 'Frog: Crôak! thou art sailing in class', lostfunc.im_class.__name__ # don't forget to write name and doc decorated.func_name = lostfunc.func_name decorated.func_doc = lostfunc.func_name return decorated print '[i]A frog pops out of nowhere[/i]' # decorate the method: Cruise.arewelostyet = whereami(Cruise.arewelostyet) instance.arewelostyet()
How to access the parent class during initialisation in python?
How do I find out which class I am initialising a decorator in? It makes sense that I wouldn't be able to find this out as the decorator is not yet bound to the class, but is there a way of getting round this?

class A(object):
    def dec(f):
        # I am in class 'A'
        def func(cls):
            f(cls)
        return func

    @dec
    def test(self):
        pass

I need to know which class I am (indicated by the commented line).
[ "I don't think this is possible. At the very moment when you define test, the class doesn't exist yet.\nWhen Python encounters\nclass A(object):\n\nit creates a new namespace in which it runs all code that it finds in the class definition (including the definition of test() and the call to the decorator), and when it's done, it creates a new class object and puts everything into this class that was left in the namespace after the code was executed.\nSo when the decorator is called, it doesn't know anything yet. At this moment, test is just a function.\n", "I don't get the question.\n>>> class A(object):\n def dec(f):\n def func(cls):\n print cls\n return func\n\n @dec\n def test(self):\n pass\n\n>>> a=A()\n>>> a.test()\n<__main__.A object at 0x00C56330>\n>>> \n\nThe argument (cls) is the class, A.\n", "As Nadia pointed out you will need to be more specific. Python does not allow this kind of things, which means that what you are trying to do is probably something wrong.\nIn the meantime, here is my contribution: a little story about a sailor and a frog. (use a constructor after the class initialization)\nclass Cruise(object):\n def arewelostyet(self):\n print 'Young sailor: I think I am lost, help me :s'\n\ninstance = Cruise()\n\ninstance.arewelostyet()\n\ndef whereami(lostfunc):\n \"\"\"\n decorator\n \"\"\"\n def decorated(*args, **kwargs):\n lostfunc(*args, **kwargs)\n print 'Frog: Crôak! thou art sailing in class', lostfunc.im_class.__name__\n\n # don't forget to write name and doc\n decorated.func_name = lostfunc.func_name\n decorated.func_doc = lostfunc.func_name\n\n return decorated\n\n\nprint '[i]A frog pops out of nowhere[/i]'\n\n# decorate the method:\nCruise.arewelostyet = whereami(Cruise.arewelostyet)\n\ninstance.arewelostyet()\n\n" ]
[ 3, 0, 0 ]
[]
[]
[ "decorator", "introspection", "python" ]
stackoverflow_0000956994_decorator_introspection_python.txt
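A small runnable sketch of the point the answers make: the class object does not exist while the decorator runs, but it is fully available at call time:

def dec(f):
    def func(self):
        # The class doesn't exist yet while dec() runs, but it is
        # available here, when the method is actually called:
        print type(self).__name__
        return f(self)
    return func

class A(object):
    @dec
    def test(self):
        pass

A().test()   # prints: A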
Q: My regex in python isn't recursing properly I'm suppose to capture everything inside a tag and the next lines after it, but it's suppose to stop the next time it meets a bracket. What am i doing wrong? import re #regex regex = re.compile(r""" ^ # Must start in a newline first \[\b(.*)\b\] # Get what's enclosed in brackets \n # only capture bracket if a newline is next (\b(?:.|\s)*(?!\[)) # should read: anyword that doesn't precede a bracket """, re.MULTILINE | re.VERBOSE) haystack = """ [tab1] this is captured but this is suppose to be captured too! @[this should be taken though as this is in the content] [tab2] help me write a better RE """ m = regex.findall(haystack) print m what im trying to get is: [('tab1', 'this is captured\nbut this is suppose to be captured too!\n@[this should be taken though as this is in the content]\n', '[tab2]','help me\nwrite a better RE\n')] edit: regex = re.compile(r""" ^ # Must start in a newline first \[(.*?)\] # Get what's enclosed in brackets \n # only capture bracket if a newline is next ([^\[]*) # stop reading at opening bracket """, re.MULTILINE | re.VERBOSE) this seems to work but it's also trimming the brackets inside the content. A: Python regex doesn't support recursion afaik. EDIT: but in your case this would work: regex = re.compile(r""" ^ # Must start in a newline first \[(.*?)\] # Get what's enclosed in brackets \n # only capture bracket if a newline is next ([^\[]*) # stop reading at opening bracket """, re.MULTILINE | re.VERBOSE) EDIT 2: yes, it doesn't work properly. import re regex = re.compile(r""" (?:^|\n)\[ # tag's opening bracket ([^\]\n]*) # 1. text between brackets \]\n # tag's closing bracket (.*?) # 2. text between the tags (?=\n\[[^\]\n]*\]\n|$) # until tag or end of string but don't consume it """, re.DOTALL | re.VERBOSE) haystack = """[tag1] this is captured [not a tag[ but this is suppose to be captured too! [another non-tag [tag2] help me write a better RE[[[] """ print regex.findall(haystack) I do agree with viraptor though. Regex are cool but you can't check your file for errors with them. A hybrid perhaps? :P tag_re = re.compile(r'^\[([^\]\n]*)\]$', re.MULTILINE) tags = list(tag_re.finditer(haystack)) result = {} for (mo1, mo2) in zip(tags[:-1], tags[1:]): result[mo1.group(1)] = haystack[mo1.end(1)+1:mo2.start(1)-1].strip() result[mo2.group(1)] = haystack[mo2.end(1)+1:].strip() print result EDIT 3: That's because ^ character means negative match only inside [^squarebrackets]. Everywhere else it means string start (or line start with re.MULTILINE). There's no good way for negative string matching in regex, only character. A: First of all why a regex if you're trying to parse? As you can see you cannot find the source of the problem yourself, because regex gives no feedback. Also you don't have any recursion in that RE. Make your life simple: def ini_parse(src): in_block = None contents = {} for line in src.split("\n"): if line.startswith('[') and line.endswith(']'): in_block = line[1:len(line)-1] contents[in_block] = "" elif in_block is not None: contents[in_block] += line + "\n" elif line.strip() != "": raise Exception("content out of block") return contents You get error handling with exceptions and the ability to debug execution as a bonus. Also you get a dictionary as a result and can handle duplicate sections while processing. 
My result:

{'tab2': 'help me\nwrite a better RE\n\n',
 'tab1': 'this is captured\nbut this is suppose to be captured too!\n@[this should be taken though as this is in the content]\n\n'}

RE is much overused these days...
A: Does this do what you want?

regex = re.compile(r"""
    ^               # Must start in a newline first
    \[\b(.*)\b\]    # Get what's enclosed in brackets
    \n              # only capture bracket if a newline is next
    ([^[]*)
    """, re.MULTILINE | re.VERBOSE)

This gives a list of tuples (one 2-tuple per match). If you want a flattened tuple you can write:

m = sum(regex.findall(haystack), ())
My regex in python isn't recursing properly
I'm supposed to capture everything inside a tag and the next lines after it, but it's supposed to stop the next time it meets a bracket. What am I doing wrong?

import re

#regex
regex = re.compile(r"""
    ^                    # Must start in a newline first
    \[\b(.*)\b\]         # Get what's enclosed in brackets
    \n                   # only capture bracket if a newline is next
    (\b(?:.|\s)*(?!\[))  # should read: anyword that doesn't precede a bracket
    """, re.MULTILINE | re.VERBOSE)

haystack = """
[tab1]
this is captured
but this is suppose to be captured too!
@[this should be taken though as this is in the content]

[tab2]
help me
write a better RE
"""

m = regex.findall(haystack)
print m

What I'm trying to get is:

[('tab1', 'this is captured\nbut this is suppose to be captured too!\n@[this should be taken though as this is in the content]\n', '[tab2]', 'help me\nwrite a better RE\n')]

edit:

regex = re.compile(r"""
    ^            # Must start in a newline first
    \[(.*?)\]    # Get what's enclosed in brackets
    \n           # only capture bracket if a newline is next
    ([^\[]*)     # stop reading at opening bracket
    """, re.MULTILINE | re.VERBOSE)

This seems to work, but it's also trimming the brackets inside the content.
[ "Python regex doesn't support recursion afaik.\nEDIT: but in your case this would work:\nregex = re.compile(r\"\"\"\n ^ # Must start in a newline first\n \\[(.*?)\\] # Get what's enclosed in brackets \n \\n # only capture bracket if a newline is next\n ([^\\[]*) # stop reading at opening bracket\n \"\"\", re.MULTILINE | re.VERBOSE)\n\nEDIT 2: yes, it doesn't work properly.\nimport re\n\nregex = re.compile(r\"\"\"\n (?:^|\\n)\\[ # tag's opening bracket \n ([^\\]\\n]*) # 1. text between brackets\n \\]\\n # tag's closing bracket\n (.*?) # 2. text between the tags\n (?=\\n\\[[^\\]\\n]*\\]\\n|$) # until tag or end of string but don't consume it\n \"\"\", re.DOTALL | re.VERBOSE)\n\nhaystack = \"\"\"[tag1]\nthis is captured [not a tag[\nbut this is suppose to be captured too!\n[another non-tag\n\n[tag2]\nhelp me\nwrite a better RE[[[]\n\"\"\"\n\nprint regex.findall(haystack)\n\nI do agree with viraptor though. Regex are cool but you can't check your file for errors with them. A hybrid perhaps? :P\ntag_re = re.compile(r'^\\[([^\\]\\n]*)\\]$', re.MULTILINE)\ntags = list(tag_re.finditer(haystack))\n\nresult = {}\nfor (mo1, mo2) in zip(tags[:-1], tags[1:]):\n result[mo1.group(1)] = haystack[mo1.end(1)+1:mo2.start(1)-1].strip()\nresult[mo2.group(1)] = haystack[mo2.end(1)+1:].strip()\n\nprint result\n\nEDIT 3: That's because ^ character means negative match only inside [^squarebrackets]. Everywhere else it means string start (or line start with re.MULTILINE). There's no good way for negative string matching in regex, only character.\n", "First of all why a regex if you're trying to parse? As you can see you cannot find the source of the problem yourself, because regex gives no feedback. Also you don't have any recursion in that RE.\nMake your life simple:\ndef ini_parse(src):\n in_block = None\n contents = {}\n for line in src.split(\"\\n\"):\n if line.startswith('[') and line.endswith(']'):\n in_block = line[1:len(line)-1]\n contents[in_block] = \"\"\n elif in_block is not None:\n contents[in_block] += line + \"\\n\"\n elif line.strip() != \"\":\n raise Exception(\"content out of block\")\n return contents\n\nYou get error handling with exceptions and the ability to debug execution as a bonus. Also you get a dictionary as a result and can handle duplicate sections while processing. My result:\n{'tab2': 'help me\\nwrite a better RE\\n\\n',\n 'tab1': 'this is captured\\nbut this is suppose to be captured too!\\n@[this should be taken though as this is in the content]\\n\\n'}\n\nRE is much overused these days...\n", "Does this do what you want?\nregex = re.compile(r\"\"\"\n ^ # Must start in a newline first\n \\[\\b(.*)\\b\\] # Get what's enclosed in brackets \n \\n # only capture bracket if a newline is next\n ([^[]*)\n \"\"\", re.MULTILINE | re.VERBOSE)\n\nThis gives a list of tuples (one 2-tuple per match). If you want a flattened tuple you can write:\nm = sum(regex.findall(haystack), ())\n\n" ]
[ 3, 3, 2 ]
[]
[]
[ "python", "recursion", "regex" ]
stackoverflow_0000954989_python_recursion_regex.txt
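As a complement to the regex answers, a split-based sketch that builds the section dict directly; it assumes the question's haystack is in scope and that content lines never begin with '[' (the sample's bracketed line starts with '@', so it survives):

import re

def parse_sections(text):
    # Split on [header] lines; re.split with one capturing group
    # returns [preamble, name1, body1, name2, body2, ...].
    parts = re.split(r'(?m)^\[([^\]\n]*)\][ \t]*\n', text)
    return dict(zip(parts[1::2], parts[2::2]))

print parse_sections(haystack)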
Q: How to ensure xml.dom.minidom can parse its own output? I'm trying to serialize some data to xml in a way that can be read back. I'm doing this by manually building a DOM via xml.dom.minidom, and writing it to a file using the included writexml method. Of particular interest is how I build the text nodes. I do this by initializing a Text object and then setting its data attribute. I'm not sure why the Text object doesn't take its content in the constructor, but that's just the way it in simplemented in xml.dom.minidom. To give a concrete example, the code looks something like this: import xml.dom.minidom as dom e = dom.Element('node') t = dom.Text() t.data = "The text content" e.appendChild(t) dom.parseString(e.toxml()) This seemed reasonable to me, particularly since createTextNode itself is implemented exactly like this: def createTextNode(self, data): if not isinstance(data, StringTypes): raise TypeError, "node contents must be a string" t = Text() t.data = data t.ownerDocument = self return t The problem is that setting the data like this allows us to write text that later cannot be parsed back. To give an example, I am having difficulty with the following character: you´ll The quote is ord(180), '\xb4'. My question is, what is the correct procedure to encode this data into an xml document suck that I parse the document with minidom to restore the original tree? A: The issue you're encountering, as explained in Python's online docs, is that of Unicode encoding: Node.toxml([encoding]) Return the XML that the DOM represents as a string. With no argument, the XML header does not specify an encoding, and the result is Unicode string if the default encoding cannot represent all characters in the document. Encoding this string in an encoding other than UTF-8 is likely incorrect, since UTF-8 is the default encoding of XML. With an explicit encoding [1] argument, the result is a byte string in the specified encoding. It is recommended that this argument is always specified. To avoid UnicodeError exceptions in case of unrepresentable text data, the encoding argument should be specified as “utf-8”. So, call .toxml('utf8'), not just .toxml(), and use unicode strings as text contents, and you should be fine for a "round-trip" as you desire. For example: >>> t.data = u"The text\u0180content" >>> dom.parseString(e.toxml('utf8')).toxml('utf8') '<?xml version="1.0" encoding="utf8"?><node>The text\xc6\x80content</node>' >>>
How to ensure xml.dom.minidom can parse its own output?
I'm trying to serialize some data to XML in a way that can be read back. I'm doing this by manually building a DOM via xml.dom.minidom, and writing it to a file using the included writexml method. Of particular interest is how I build the text nodes. I do this by initializing a Text object and then setting its data attribute. I'm not sure why the Text object doesn't take its content in the constructor, but that's just the way it is implemented in xml.dom.minidom. To give a concrete example, the code looks something like this:

import xml.dom.minidom as dom

e = dom.Element('node')
t = dom.Text()
t.data = "The text content"
e.appendChild(t)
dom.parseString(e.toxml())

This seemed reasonable to me, particularly since createTextNode itself is implemented exactly like this:

def createTextNode(self, data):
    if not isinstance(data, StringTypes):
        raise TypeError, "node contents must be a string"
    t = Text()
    t.data = data
    t.ownerDocument = self
    return t

The problem is that setting the data like this allows us to write text that later cannot be parsed back. To give an example, I am having difficulty with the following character: you´ll. The quote is ord(180), '\xb4'. My question is: what is the correct procedure to encode this data into an XML document such that I can parse the document with minidom to restore the original tree?
[ "The issue you're encountering, as explained in Python's online docs, is that of Unicode encoding:\nNode.toxml([encoding])\nReturn the XML that the DOM represents as a string.\n\nWith no argument, the XML header does not specify an encoding, and the result is\nUnicode string if the default encoding cannot represent all characters in the \ndocument. Encoding this string in an encoding other than UTF-8 is likely\nincorrect, since UTF-8 is the default encoding of XML.\n\nWith an explicit encoding [1] argument, the result is a byte string in the \nspecified encoding. It is recommended that this argument is always specified.\nTo avoid UnicodeError exceptions in case of unrepresentable text data, the \nencoding argument should be specified as “utf-8”.\n\nSo, call .toxml('utf8'), not just .toxml(), and use unicode strings as text contents, and you should be fine for a \"round-trip\" as you desire. For example:\n>>> t.data = u\"The text\\u0180content\"\n>>> dom.parseString(e.toxml('utf8')).toxml('utf8')\n'<?xml version=\"1.0\" encoding=\"utf8\"?><node>The text\\xc6\\x80content</node>'\n>>> \n\n" ]
[ 3 ]
[]
[]
[ "dom", "escaping", "python", "xml" ]
stackoverflow_0000959782_dom_escaping_python_xml.txt
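A round-trip sketch of the answer's advice, using the question's problem character; node names here are placeholders:

import xml.dom.minidom as dom

doc = dom.Document()
node = doc.createElement('node')
node.appendChild(doc.createTextNode(u'you\xb4ll'))   # u'\xb4' is ord(180)
doc.appendChild(node)

xml_bytes = doc.toxml('utf-8')            # byte string, declared as utf-8
restored = dom.parseString(xml_bytes)     # parses back cleanly
print restored.firstChild.firstChild.data == u'you\xb4ll'   # True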
Q: chunk_split in python I'm trying to find a pythonic way to do this PHP code: chunk_split(base64_encode($picture)); http://us2.php.net/chunk_split chunk_split split the string into smaller chunks of 76 character long by adding a "\r\n" (RFC 2045). thank you A: chunk_split = lambda s: '\r\n'.join(s[i:min(i+76, len(s))] for i in xrange(0, len(s), 76)) A: This should do it: str.encode("base64").replace("\n", "\r\n")
chunk_split in python
I'm trying to find a Pythonic way to do this PHP code:

chunk_split(base64_encode($picture));

http://us2.php.net/chunk_split

chunk_split splits the string into smaller chunks, 76 characters long, by adding a "\r\n" after each (RFC 2045). Thank you.
[ "chunk_split = lambda s: '\\r\\n'.join(s[i:min(i+76, len(s))] for i in xrange(0, len(s), 76))\n\n", "This should do it:\nstr.encode(\"base64\").replace(\"\\n\", \"\\r\\n\")\n\n" ]
[ 2, 2 ]
[]
[]
[ "php", "python" ]
stackoverflow_0000959780_php_python.txt
Q: Inferring appropriate database type declarations from strings in Python I am building some Postgres tables from Python dictionaries where the {'key': 'value'} pairs correspond to column 'key' and field 'value'. These are generated from .dbf files -- I now pipe the contents of the .dbf files into a script that returns a list of dicts like: {'Warngentyp': '', 'Lon': '-81.67170', 'Zwatch_war': '0', 'State':... Currently I am putting these into a sqlite database with no type declarations, then dumping it to a .sql file, manually editing the schema, and importing to Postgres. I would love to be able to infer the correct type declarations, basically iterate over a list of strings like ['0', '3', '5'] or ['ga', 'ca', 'tn'] or ['-81.009', '135.444', '-80.000'] and generate something like 'int', 'varchar(2)', 'float'. (I would be equally happy with a Python, Postgres, or SQLite tool.) Is there a package that does this, or a straightforward way to implement it? A: Don't use eval. If someone inserts bad code, it can hose your database or server. Instead use these def isFloat(s): try: float(s) return True except (ValueError, TypeError), e: return False str.isdigit() And everything else can be a varchar A: YOU DON'T NEED TO INFER THE TYPE DECLARATIONS!!! You can derive what you want directly from the .dbf files. Each column has a name, a type code (C=Character, N=Number, D=Date (yyyymmdd), L=Logical (T/F), plus more types if the files are from Foxpro), a length (where relevant), and a number of decimal places (for type N). Whatever software that you used to dig the data out of the .dbf files needed to use that information to convert each piece of data to the appropriate Python data type. Dictionaries? Why? With a minor amount of work, that software could be modified to produce a CREATE TABLE statement based on those column definitions, plus an INSERT statement for each row of data. I presume you are using one of the several published Python DBF-reading modules. Any one of them should have the facilities that you need: open a .dbf file, get the column names, get the column type etc info, get each row of data. If you are unhappy with the module that you are using, talk to me; I have an unpublished one that as far as reading DBFs goes, combines the better features of the others, avoids the worst features, is as fast as you'll get with a pure Python implementation, handles all the Visual Foxpro datatypes and the _NullFlags pseudo-column, handles memoes, etc etc. HTH ========= Addendum: When I said you didn't need to infer types, you hadn't made it plain that you had a bunch of fields of type C which contained numbers. FIPS fields: some are with and some without leading zeroes. If you are going to use them, you face the '012' != '12' != 12 problem. I'd suggest stripping off the leading zeroes and keeping them in integer columns, restoring leading zeroes in reports or whatever if you really need to. Why are there 2 each of state fips and county fips? Population: in the sample file, almost all are integer. Four are like 40552.0000, and a reasonable number are blank/empty. You seem to regard population as important, and asked "Is it possible that some small percentage of population fields contain .... ?" Anything is possible in data. Don't wonder and speculate, investigate! I'd strongly advise you to sort your data in population order and eyeball it; you'll find that multiple places in the same state share the same population count. E.g. 
There are 35 places in New York state whose pop'n is stated as 8,008,278; they are spread over 6 counties. 29 of them have a PL_FIPS value of 51000; 5 have 5100 -- looks like a trailing zero problem :-( Tip for deciding between float and int: try anum = float(chars) first; if that succeeds, check if int(anum) == anum. ID: wonderful "unique ID"; 59 cases where it's not an int -- several in Canada (the website said "US cities"; is this an artifact of some unresolved border dispute?), some containing the word 'Number', and some empty. Low-hanging fruit: I would have thought that deducing that population was in fact integer was 0.1 inches above the ground :-) There's a serious flaw in that if all([int(value) ... logic: >>> all([int(value) for value in "0 1 2 3 4 5 6 7 8 9".split()]) False >>> all([int(value) for value in "1 2 3 4 5 6 7 8 9".split()]) True >>> You evidently think that you are testing that all the strings can be converted to int, but you're adding the rider "and are all non-zero". Ditto float a few lines later. IOW if there's just one zero value, you declare that the column is not integer. Even after fixing that, if there's just one empty value, you call it varchar. What I suggest is: count how many are empty (after normalising whitespace (which should include NBSP)), how many qualify as integer, how many non-integer non-empty ones qualify as float, and how many "other". Check the "other" ones; decide whether to reject or fix; repeat until happy :-) I hope some of this helps. A: You can determine integers and floats unsafely by type(eval(elem)), where elem is an element of the list. (But then you need to check elem for possible bad code) A safer way could be to do the following a = ['24.2', '.2', '2'] try: if all(elem.isdigit() for elem in a): print("int") elif all(float(elem) for elem in a): print("float") except: i = len(a[0]) if all(len(elem)==i for elem in a): print("varchar(%s)"%i) else: print "n/a" A: Thanks for the help, this is a little long for an update, here is how I combined the answers. I am starting with a list of dicts like this, generated from a dbf file: dbf_list = [{'Warngentyp': '', 'Lon': '-81.67170', 'Zwatch_war': '0', 'State':... Then a function that returns 1000 values per column to test for the best db type declaration: {'column_name':['list', 'of', 'sample', 'values'], 'col2':['1','2','3','4'... like this: def sample_fields(dicts_, number=1000): #dicts_ would be dbf_list from above sample = dict([[item, []] for item in dicts_[1]]) for dict_ in dicts_[:number]: for col_ in dict_: sample[col_].append(dict_[col_]) return sample Then you combine the Unknown and jacob approach: varchar is a good default and floats and ints are basically enough for everything else, all is clear and fast: def find_typedefs(sample_dict): #arg is output of previous function defs_ = {} for key in sample_dict: defs_[key] = 'varchar(255)' try: if all([int(value) for value in sample_dict[key]]): defs_[key] = 'int' except: try: if all([float(value) for value in sample_dict[key]]): defs_[key] = 'float' except: continue return defs_ Then format the returned dict into a create table statement, iterate over the values in the original big list and feed them into the database. It works great, I am now skipping the intermediate sqlite step, thanks again. Update for John Machin: I am using the shp2pgsql library distributed with PostGIS. 
It creates schema like the below with a source like this one: Column | Type | ------------+-----------------------+- gid | integer | st_fips | character varying(7) | sfips | character varying(5) | county_fip | character varying(12) | cfips | character varying(6) | pl_fips | character varying(7) | id | character varying(7) | elevation | character varying(11) | pop_1990 | integer | population | character varying(12) | name | character varying(32) | st | character varying(12) | state | character varying(16) | warngenlev | character varying(13) | warngentyp | character varying(13) | watch_warn | character varying(14) | zwatch_war | bigint | prog_disc | bigint | zprog_disc | bigint | comboflag | bigint | land_water | character varying(13) | recnum | integer | lon | numeric | lat | numeric | the_geom | geometry | There is stuff there that has to be wrong -- Fips is the federal information processing standard, and it should be an integer between 0 and something like 100,000. Population, elevation, etc. Maybe I have more of a postgres specific question, I wouldn't mind loosing a small amount of data, or pushing it into a table for errors or something, while trying to change the type on say the population field. How strict is the dbf type checking? For example I see that population per shp2pgsql is varchar(12). Is it possible that some small percentage of population fields contain something like '2,445 Est.'? If I take the approach I set out in this question, with the first thousand records, I get a schema like this: Column | Type | ------------+------------------------+- warngentyp | character varying(255) | lon | double precision | zwatch_war | character varying(255) | state | character varying(255) | recnum | character varying(255) | pop_1990 | integer | land_water | character varying(255) | elevation | integer | prog_disc | integer | comboflag | character varying(255) | sfips | integer | zprog_disc | integer | pl_fips | integer | county_fip | integer | population | integer | watch_warn | integer | name | character varying(255) | st | character varying(255) | lat | double precision | st_fips | integer | cfips | integer | id | integer | warngenlev | integer | On the other hand if I check every value in the all(['list', 'of', 'everything'...]), I get a schema more like the first one. I can tolerate a bit of data loss here -- if the entry for some town is wrong and it doesn't significantly affect the population figures, etc. I am only using an old package called dbview to pipe the dbf files into these scripts -- I am not trying to map any of the format's native capability. I assumed that shp2pgsql would have picked the low-hanging fruit in that regard. Any suggestions for either dbview or another package is welcome -- although there are other cases where I may not be working with dbf files and would need to find the best types anyway. I am also going to ask a question about postgresql to see if I can find a solution at that level.
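Following the counting approach suggested in the long answer above, here is a hedged sketch (Python 2, all names mine) that does not let blanks or zeros disqualify a column and uses the float-then-int tip; it deliberately omits guards for oddities like 'nan' or 'inf':

def infer_type(values, default='varchar(255)'):
    n_int = n_float = n_other = 0
    for v in values:
        v = v.strip()
        if not v:
            continue                  # blanks don't disqualify the column
        try:
            anum = float(v)
        except ValueError:
            n_other += 1
            continue
        if anum == int(anum):         # float first, then check integrality
            n_int += 1
        else:
            n_float += 1
    if n_other or not (n_int or n_float):
        return default
    return 'float' if n_float else 'int'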
Inferring appropriate database type declarations from strings in Python
I am building some Postgres tables from Python dictionaries where the {'key': 'value'} pairs correspond to column 'key' and field 'value'. These are generated from .dbf files -- I now pipe the contents of the .dbf files into a script that returns a list of dicts like: {'Warngentyp': '', 'Lon': '-81.67170', 'Zwatch_war': '0', 'State':... Currently I am putting these into a sqlite database with no type declarations, then dumping it to a .sql file, manually editing the schema, and importing to Postgres. I would love to be able to infer the correct type declarations, basically iterate over a list of strings like ['0', '3', '5'] or ['ga', 'ca', 'tn'] or ['-81.009', '135.444', '-80.000'] and generate something like 'int', 'varchar(2)', 'float'. (I would be equally happy with a Python, Postgres, or SQLite tool.) Is there a package that does this, or a straightforward way to implement it?
[ "Don't use eval. If someone inserts bad code, it can hose your database or server.\nInstead use these\ndef isFloat(s):\ntry:\n float(s)\n return True\nexcept (ValueError, TypeError), e:\n return False\n\n\nstr.isdigit()\n\nAnd everything else can be a varchar\n", "YOU DON'T NEED TO INFER THE TYPE DECLARATIONS!!!\nYou can derive what you want directly from the .dbf files. Each column has a name, a type code (C=Character, N=Number, D=Date (yyyymmdd), L=Logical (T/F), plus more types if the files are from Foxpro), a length (where relevant), and a number of decimal places (for type N). \nWhatever software that you used to dig the data out of the .dbf files needed to use that information to convert each piece of data to the appropriate Python data type.\nDictionaries? Why? With a minor amount of work, that software could be modified to produce a CREATE TABLE statement based on those column definitions, plus an INSERT statement for each row of data.\nI presume you are using one of the several published Python DBF-reading modules. Any one of them should have the facilities that you need: open a .dbf file, get the column names, get the column type etc info, get each row of data. If you are unhappy with the module that you are using, talk to me; I have an unpublished one that as far as reading DBFs goes, combines the better features of the others, avoids the worst features, is as fast as you'll get with a pure Python implementation, handles all the Visual Foxpro datatypes and the _NullFlags pseudo-column, handles memoes, etc etc.\nHTH\n=========\nAddendum:\nWhen I said you didn't need to infer types, you hadn't made it plain that you had a bunch of fields of type C which contained numbers.\nFIPS fields: some are with and some without leading zeroes. If you are going to use them, you face the '012' != '12' != 12 problem. I'd suggest stripping off the leading zeroes and keeping them in integer columns, restoring leading zeroes in reports or whatever if you really need to. Why are there 2 each of state fips and county fips?\nPopulation: in the sample file, almost all are integer. Four are like 40552.0000, and a reasonable number are blank/empty. You seem to regard population as important, and asked \"Is it possible that some small percentage of population fields contain .... ?\" Anything is possible in data. Don't wonder and speculate, investigate! I'd strongly advise you to sort your data in population order and eyeball it; you'll find that multiple places in the same state share the same population count. E.g. There are 35 places in New York state whose pop'n is stated as 8,008,278; they are spread over 6 counties. 29 of them have a PL_FIPS value of 51000; 5 have 5100 -- looks like a trailing zero problem :-(\nTip for deciding between float and int: try anum = float(chars) first; if that succeeds, check if int(anum) == anum.\nID: wonderful \"unique ID\"; 59 cases where it's not an int -- several in Canada (the website said \"US cities\"; is this an artifact of some unresolved border dispute?), some containing the word 'Number', and some empty.\nLow-hanging fruit: I would have thought that deducing that population was in fact integer was 0.1 inches above the ground :-)\nThere's a serious flaw in that if all([int(value) ... 
logic:\n>>> all([int(value) for value in \"0 1 2 3 4 5 6 7 8 9\".split()])\nFalse\n>>> all([int(value) for value in \"1 2 3 4 5 6 7 8 9\".split()])\nTrue\n>>>\n\nYou evidently think that you are testing that all the strings can be converted to int, but you're adding the rider \"and are all non-zero\". Ditto float a few lines later.\nIOW if there's just one zero value, you declare that the column is not integer.\nEven after fixing that, if there's just one empty value, you call it varchar.\nWhat I suggest is: count how many are empty (after normalising whitespace (which should include NBSP)), how many qualify as integer, how many non-integer non-empty ones qualify as float, and how many \"other\". Check the \"other\" ones; decide whether to reject or fix; repeat until happy :-)\nI hope some of this helps.\n", "You can determine integers and floats unsafely by type(eval(elem)), where elem is an element of the list. (But then you need to check elem for possible bad code)\nA safer way could be to do the following\na = ['24.2', '.2', '2']\ntry:\n if all(elem.isdigit() for elem in a):\n print(\"int\")\n elif all(float(elem) for elem in a):\n print(\"float\")\nexcept:\n i = len(a[0])\n if all(len(elem)==i for elem in a):\n print(\"varchar(%s)\"%i)\n else:\n print \"n/a\"\n\n", "Thanks for the help, this is a little long for an update, here is how I combined the answers. I am starting with a list of dicts like this, generated from a dbf file:\ndbf_list = [{'Warngentyp': '', 'Lon': '-81.67170', 'Zwatch_war': '0', 'State':...\n\nThen a function that returns 1000 values per column to test for the best db type declaration: {'column_name':['list', 'of', 'sample', 'values'], 'col2':['1','2','3','4'... like this:\ndef sample_fields(dicts_, number=1000): #dicts_ would be dbf_list from above\n sample = dict([[item, []] for item in dicts_[1]])\n for dict_ in dicts_[:number]:\n for col_ in dict_:\n sample[col_].append(dict_[col_])\n return sample\n\nThen you combine the Unknown and jacob approach: varchar is a good default and floats and ints are basically enough for everything else, all is clear and fast:\ndef find_typedefs(sample_dict): #arg is output of previous function\n defs_ = {}\n for key in sample_dict:\n defs_[key] = 'varchar(255)'\n try:\n if all([int(value) for value in sample_dict[key]]):\n defs_[key] = 'int'\n except:\n try:\n if all([float(value) for value in sample_dict[key]]):\n defs_[key] = 'float'\n except:\n continue\n return defs_\n\nThen format the returned dict into a create table statement, iterate over the values in the original big list and feed them into the database. It works great, I am now skipping the intermediate sqlite step, thanks again.\nUpdate for John Machin: I am using the shp2pgsql library distributed with PostGIS. 
It creates schema like the below with a source like this one:\n Column | Type | \n------------+-----------------------+-\n gid | integer |\n st_fips | character varying(7) | \n sfips | character varying(5) | \n county_fip | character varying(12) | \n cfips | character varying(6) | \n pl_fips | character varying(7) | \n id | character varying(7) | \n elevation | character varying(11) | \n pop_1990 | integer | \n population | character varying(12) | \n name | character varying(32) | \n st | character varying(12) | \n state | character varying(16) | \n warngenlev | character varying(13) | \n warngentyp | character varying(13) | \n watch_warn | character varying(14) | \n zwatch_war | bigint | \n prog_disc | bigint | \n zprog_disc | bigint | \n comboflag | bigint | \n land_water | character varying(13) | \n recnum | integer | \n lon | numeric | \n lat | numeric | \n the_geom | geometry | \n\nThere is stuff there that has to be wrong -- Fips is the federal information processing standard, and it should be an integer between 0 and something like 100,000. Population, elevation, etc. Maybe I have more of a postgres specific question, I wouldn't mind loosing a small amount of data, or pushing it into a table for errors or something, while trying to change the type on say the population field. How strict is the dbf type checking? For example I see that population per shp2pgsql is varchar(12). Is it possible that some small percentage of population fields contain something like '2,445 Est.'? If I take the approach I set out in this question, with the first thousand records, I get a schema like this:\n Column | Type |\n------------+------------------------+-\n warngentyp | character varying(255) | \n lon | double precision | \n zwatch_war | character varying(255) | \n state | character varying(255) | \n recnum | character varying(255) | \n pop_1990 | integer | \n land_water | character varying(255) | \n elevation | integer | \n prog_disc | integer | \n comboflag | character varying(255) | \n sfips | integer | \n zprog_disc | integer | \n pl_fips | integer | \n county_fip | integer | \n population | integer | \n watch_warn | integer | \n name | character varying(255) | \n st | character varying(255) | \n lat | double precision | \n st_fips | integer | \n cfips | integer | \n id | integer | \n warngenlev | integer |\n\nOn the other hand if I check every value in the all(['list', 'of', 'everything'...]), I get a schema more like the first one. I can tolerate a bit of data loss here -- if the entry for some town is wrong and it doesn't significantly affect the population figures, etc.\nI am only using an old package called dbview to pipe the dbf files into these scripts -- I am not trying to map any of the format's native capability. I assumed that shp2pgsql would have picked the low-hanging fruit in that regard. Any suggestions for either dbview or another package is welcome -- although there are other cases where I may not be working with dbf files and would need to find the best types anyway. I am also going to ask a question about postgresql to see if I can find a solution at that level.\n" ]
[ 5, 2, 1, 1 ]
[]
[]
[ "postgresql", "python", "sqlite", "types" ]
stackoverflow_0000952541_postgresql_python_sqlite_types.txt
Q: not able to start coding in python I want to code in Python and I know the syntax well, but I have no idea how to compile and run it. I come from a Ruby, Java, C, and C++ background, where after saving the code in a file we go to the command prompt and type a command with the file name to compile and run it. What about Python? Why doesn't python filename.py work, and how do I run a file then? Also, which Python books should I follow for a better understanding? I am using Windows, and I don't want to run it line by line in IDLE; I want to write the whole program and then run it from the Windows command prompt. A: If you're using Windows, you'll need to add the path to your Python executable to the Path environment variable; on Linux, and I presume Mac, this should already be done. Oh, and you don't compile python programs, they are interpreted at run time. A: If you are from a Ruby background, you should be able to handle another interpreted language, which is what Python is too. Good starter resource: Dive into Python A: What operating system are you using? You don't need to compile Python code; it's interpreted. Just invoke the command-line interpreter followed by the name of your .py file.
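For completeness, the usual workflow on Windows, assuming python.exe is on your Path (the file name is just an example):

# hello.py -- save this file, then at the command prompt run:  python hello.py
print "Hello, world!"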
not able to start coding in python
I want to code in Python and I know the syntax well, but I have no idea how to compile and run it. I come from a Ruby, Java, C, and C++ background, where after saving the code in a file we go to the command prompt and type a command with the file name to compile and run it. What about Python? Why doesn't python filename.py work, and how do I run a file then? Also, which Python books should I follow for a better understanding? I am using Windows, and I don't want to run it line by line in IDLE; I want to write the whole program and then run it from the Windows command prompt.
[ "If you're using Windows, you'll need to add the path to your Python executable to the Path environment variable; on Linux, and I presume Mac, this should already be done.\nOh, and you don't compile python programs, they are interpreted at run time.\n", "If you are from Ruby background, you should be able to handle another interpreted language, which is what python is too. \nGood starter resource:\nDive into Python\n", "what operating system are you using?... you dont need to compile python code its interprated. just invoke the command line interpreter followed by the name of your .py file \n" ]
[ 4, 3, 1 ]
[]
[]
[ "python" ]
stackoverflow_0000959168_python.txt
Q: Any graphics library for unix that can draw histograms? A python program needs to draw histograms. It's OK to use a free third-party library. What is the best way to do that? A: You can use matplotlib. A: Gnuplot.py lets you use Gnuplot from python. A: How much power do you need? How much external weight are you willing to take on? ROOT is accessible in python using PyROOT. Heavy and a lot to learn to get the most out of it, but very powerful.
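As a minimal illustration of the matplotlib suggestion (the sample data is made up):

import matplotlib.pyplot as plt

data = [1.2, 1.9, 2.4, 2.4, 3.1, 3.8, 4.0, 4.1]  # made-up sample
plt.hist(data, bins=5)
plt.xlabel('value')
plt.ylabel('frequency')
plt.savefig('histogram.png')  # or plt.show() for an interactive window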
Any graphics library for unix that can draw histograms?
A python program needs to draw histograms. It's OK to use a free third-party library. What is the best way to do that?
[ "You can use matplotlib.\n", "Gnuplot.py lets you use Gnuplot from python. \n", "How much power do you need? How much external weight are you willing to take on? ROOT is accessible in python using PyROOT. Heavy and a lot to learn to get the most out of it, but very powerful.\n" ]
[ 9, 3, 1 ]
[]
[]
[ "graphics", "python", "unix" ]
stackoverflow_0000959702_graphics_python_unix.txt
Q: How do I use 'F' keys in gtk Accelerators? I'm trying (in python) to use gtk.Widget.add_accelerator... what should I pass as accel_key to use the F keys? I have checked the docs to no avail. Thanks A: Consider using gtk.accelerator_parse(). Here is an informative post on dealing with keyboard codes in pygtk. A: Found it: key,mods=gtk.accelerator_parse("F10")
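Putting the two answers together with add_accelerator, a sketch (PyGTK; window and menu_item stand in for your own widgets):

import gtk

accel_group = gtk.AccelGroup()
window.add_accel_group(accel_group)    # `window` is your gtk.Window

key, mods = gtk.accelerator_parse("F10")
menu_item.add_accelerator("activate", accel_group, key, mods,
                          gtk.ACCEL_VISIBLE)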
How do I use 'F' keys in gtk Accelerators?
I'm trying (in python) to use gtk.Widget.add_accelerator... what should I pass as accel_key to use the F keys? I have checked the docs to no avail. Thanks
[ "Consider using gtk.accelerator_parse(). Here is an informative post on dealing with keyboard codes in pygtk.\n", "Found it:\nkey,mods=gtk.accelerator_parse(\"F10\")\n\n" ]
[ 2, 1 ]
[]
[]
[ "accelerator", "gtk", "python" ]
stackoverflow_0000960269_accelerator_gtk_python.txt
Q: Python's file.read() on Ubuntu Python's file.read() function won't read anything. It always returns '' no matter what's inside the file. What can it be? I know it must be something straightforward, but I can't figure it out. UPD: I tried with 'r' and 'w+' modes. UPD: The code was: >>> file = open('helloworld', 'w+') >>> file.read() '' Solution: It just came to me that, although a file is available for reading in 'w+' mode, Python truncates it after opening. 'r' (or 'r+') mode should be used instead. Thanks everyone. A: Caveat: I'm just guessing as to behavior that is not 'working': If you're working in the Python interpreter, and you do something like this: >>> f = open('myfile.txt', 'r') >>> f.read() ...you'll get the whole file printed to the screen. But if you do this again: >>> f.read() '' ...you get an empty string. So, if you haven't already, maybe try restarting your interpreter. From the documentation: "To read a file’s contents, call f.read(size), which reads some quantity of data and returns it as a string. size is an optional numeric argument. When size is omitted or negative, the entire contents of the file will be read and returned; it’s your problem if the file is twice as large as your machine’s memory. Otherwise, at most size bytes are read and returned. If the end of the file has been reached, f.read() will return an empty string ("")."
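A quick interpreter sketch of the difference (Python 2):

>>> f = open('helloworld', 'w+')    # 'w+' truncates the file on open
>>> f.write('some text')
>>> f.seek(0)                       # rewind before reading back
>>> f.read()
'some text'
>>> f.close()
>>> open('helloworld', 'r').read()  # 'r' reads without truncating
'some text'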
Python's file.read() on Ubuntu
Python's file.read() function won't read anything. It always returns '' no matter what's inside the file. What can it be? I know it must be something straightforward, but I can't figure it out. UPD: I tried with 'r' and 'w+' modes. UPD: The code was: >>> file = open('helloworld', 'w+') >>> file.read() '' Solution: It just came to me that, although a file is available for reading in 'w+' mode, Python truncates it after opening. 'r' (or 'r+') mode should be used instead. Thanks everyone.
[ "Caveat: I'm just guessing as to behavior that is not 'working':\nIf you're working in the Python interpreter, \nand you do something like this:\n>>> f = open('myfile.txt', 'r')\n>>> f.read()\n\n...you'll get the whole file printed to the screen.\nBut if you do this again:\n>>> f.read()\n''\n\n...you get an empty string.\nSo, if you haven't already, maybe try restarting your interpreter.\nFrom the documentation:\n\"To read a file’s contents, call f.read(size), which reads some quantity of data and returns it as a string. size is an optional numeric argument. When size is omitted or negative, the entire contents of the file will be read and returned; it’s your problem if the file is twice as large as your machine’s memory. Otherwise, at most size bytes are read and returned. If the end of the file has been reached, f.read() will return an empty string (\"\").\"\n" ]
[ 2 ]
[]
[]
[ "file", "python", "ubuntu" ]
stackoverflow_0000960487_file_python_ubuntu.txt
Q: python3.0: imputil Why was the imputil module removed from python3.0 and what should be used in its place? A: In Python 3.1, there is a module called importlib, which should be a superior replacement for imputil. A: According to PEP 3108, it was rarely used, undocumented and never updated to support absolute imports.
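For example, the usual entry point in the replacement module (available from Python 3.1 on):

import importlib

json_module = importlib.import_module('json')  # programmatic import by name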
python3.0: imputil
Why was the imputil module removed from python3.0 and what should be used in its place?
[ "In Python 3.1, there is a module called importlib, which should be a superior replacement for imputil.\n", "According to PEP 3108, it was rarely used, undocumented and never updated to support absolute imports.\n" ]
[ 10, 9 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0000960646_python_python_3.x.txt
Q: PEP 302 Example: New Import Hooks Where can I find an example implementation of the "New Import Hooks" described in PEP 302? I would like to implement a custom finder and loader in the most forward compatible way possible. In other words, the implementation should work in python 2.x and 3.x. A: You can find thousands of open-source examples e.g. with a google code search, here it is: http://www.google.com/codesearch?hl=en&lr=&q="imp.find_module"+"imp.load_module"&sbtn=Search Edit: as the questioner clarified he's looking for example of implementation, not use, a better URL for the search is: http://www.google.com/codesearch?hl=en&sa=N&q="path_hooks"++lang:python&ct=rr&cs_r=lang:python One readable example (though NOT suitable for production use, as the reddit discussion points out!) is urlimport. As for supporting Python 2 and Python 3 at the same time, that sounds ambitious -- I don't know of any existing import hook which claims to. In your shoes, I'd start with offering full support for Python 2.6, then once that's working (and has a good battery of tests and makes nary a peep with the -3 switch), I'd 2to3 the sources and see if anything breaks (if so, find out why, fix the 2.6 sources, iterate).
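As a hedged, minimal sketch of a PEP 302 meta-path finder/loader that serves modules from in-memory source strings (all names are mine; exec(code, namespace) and the imp module work on both Python 2.6+ and early 3.x):

import imp
import sys

class StringFinder(object):
    def __init__(self, sources):
        self.sources = sources            # {module_name: source_code}

    def find_module(self, fullname, path=None):
        return self if fullname in self.sources else None

    def load_module(self, fullname):
        if fullname in sys.modules:       # PEP 302: reuse an existing module
            return sys.modules[fullname]
        mod = imp.new_module(fullname)
        mod.__file__ = '<string:%s>' % fullname
        mod.__loader__ = self
        sys.modules[fullname] = mod       # register *before* executing
        exec(self.sources[fullname], mod.__dict__)
        return mod

sys.meta_path.append(StringFinder({'virtualmod': 'x = 42'}))
import virtualmod
print(virtualmod.x)                       # -> 42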
PEP 302 Example: New Import Hooks
Where can I find an example implementation of the "New Import Hooks" described in PEP 302? I would like to implement a custom finder and loader in the most forward compatible way possible. In other words, the implementation should work in python 2.x and 3.x.
[ "You can find thousands of open-source examples e.g. with a google code search, here it is:\nhttp://www.google.com/codesearch?hl=en&lr=&q=\"imp.find_module\"+\"imp.load_module\"&sbtn=Search\n\nEdit: as the questioner clarified he's looking for example of implementation, not use, a better URL for the search is:\nhttp://www.google.com/codesearch?hl=en&sa=N&q=\"path_hooks\"++lang:python&ct=rr&cs_r=lang:python\n\nOne readable example (though NOT suitable for production use, as the reddit discussion points out!) is urlimport.\nAs for supporting Python 2 and Python 3 at the same time, that sounds ambitious -- I don't know of any existing import hook which claims to. In your shoes, I'd start with offering full support for Python 2.6, then once that's working (and has a good battery of tests and makes nary a peep with the -3 switch), I'd 2to3 the sources and see if anything breaks (if so, find out why, fix the 2.6 sources, iterate).\n" ]
[ 3 ]
[]
[]
[ "http_status_code_302", "import_hooks", "python", "python_3.x" ]
stackoverflow_0000960832_http_status_code_302_import_hooks_python_python_3.x.txt
Q: how to multiply two different arrays of integers in python? I have taken input into two different lists by splitting a line of integers, e.g. "1 2" and "3 4". Now I want to multiply them pairwise and sum the products, like 1*3 + 2*4, but when I try, I am told that it can only multiply integers and not lists. "can't multiply sequence by non-int of type 'list'" is the error I am getting when I do c=sum(i*j for i, j in zip(a,b)) ... t=raw_input() d =[] for j in range(0,int(t)): c=0 n=raw_input() s = raw_input() s1=raw_input() a=[] b=[] a.append( [int(i) for i in s.split(' ')]) b.append([int(i) for i in s.split(' ')]) d.append(sum(i*j for i, j in zip(a,b))) for i in d: print i that's my code A: You need: >>> a = [1,2] >>> b = [3,4] >>> sum(i*j for i, j in zip(a,b)) 11 A: You can do it in a pythonic way using sum, map and a lambda expression. >>> a = [1,2] >>> b = [3,4] >>> prod = lambda a, b: a*b >>> sum(map(prod, a, b)) 11 the lambda a, b: a*b bit also has a special name in python, operator.mul >>> import operator >>> sum(map(operator.mul, a, b)) 11 A: Is this what you want? t=raw_input() d =[] for j in range(0,int(t)): #c=0 #n=raw_input() s = raw_input() s1 =raw_input() a = [int(i) for i in s.split(' ')] b = [int(i) for i in s1.split(' ')] # <--s1 not s d.append(sum(i*j for i, j in zip(a,b))) for i in d: print i A: It has nothing to do with multiplying integers, but you should probably be using the extend method: a.extend([int(i) for i in s.split(' ')]) b.extend([int(i) for i in s.split(' ')]) append just tacks its argument on to the list as its last element. Since you are passing a list to append, you wind up with a list of lists. extend, however, takes the elements of the argument list and adds them to the end of the "source" list, which is what it seems like you mean to do. (There are a bunch of other things you could do to fix up this code but that's probably a matter for a different question)
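The root cause is that append wraps each parsed line in another list, so zip ends up pairing a list with a list; a short interpreter session reproduces the exact error:

>>> a = []; b = []
>>> a.append([1, 2])   # what the original code effectively does
>>> b.append([3, 4])
>>> a, b
([[1, 2]], [[3, 4]])
>>> sum(i*j for i, j in zip(a, b))
Traceback (most recent call last):
  ...
TypeError: can't multiply sequence by non-int of type 'list'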
how to multiply two different arrays of integers in python?
I have taken input into two different lists by splitting a line of integers, e.g. "1 2" and "3 4". Now I want to multiply them pairwise and sum the products, like 1*3 + 2*4, but when I try, I am told that it can only multiply integers and not lists. "can't multiply sequence by non-int of type 'list'" is the error I am getting when I do c=sum(i*j for i, j in zip(a,b)) ... t=raw_input() d =[] for j in range(0,int(t)): c=0 n=raw_input() s = raw_input() s1=raw_input() a=[] b=[] a.append( [int(i) for i in s.split(' ')]) b.append([int(i) for i in s.split(' ')]) d.append(sum(i*j for i, j in zip(a,b))) for i in d: print i that's my code
[ "You need:\n>>> a = [1,2]\n>>> b = [3,4]\n>>> sum(i*j for i, j in zip(a,b))\n11\n\n", "You can do it in a pythonic way using sum, map and a lambda expression. \n>>> a = [1,2]\n>>> b = [3,4]\n>>> prod = lambda a, b: a*b\n>>> sum(map(prod, a, b))\n11\n\nthe lambda a, b: a*b bit also has a special name in python, operator.mul\n>>> import operator\n>>> sum(map(operator.mul, a, b))\n11\n\n", "Is this what you want?\nt=raw_input()\nd =[]\nfor j in range(0,int(t)):\n #c=0\n #n=raw_input()\n s = raw_input()\n s1 =raw_input()\n a = [int(i) for i in s.split(' ')]\n b = [int(i) for i in s1.split(' ')] # <--s1 not s\n d.append(sum(i*j for i, j in zip(a,b)))\n\nfor i in d:\n print i\n\n", "It has nothing to do with multiplying integers, but you should probably be using the extend method:\n a.extend([int(i) for i in s.split(' ')])\n b.extend([int(i) for i in s.split(' ')])\n\nappend just tacks its argument on to the list as its last element. Since you are passing a list to append, you wind up with a list of lists. extend, however, takes the elements of the argument list and adds them to the end of the \"source\" list, which is what it seems like you mean to do.\n(There are a bunch of other things you could do to fix up this code but that's probably a matter for a different question)\n" ]
[ 8, 3, 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0000961326_python.txt
Q: Identical Error Codes I use python 2.4.1 on Linux, and a python package written inside the company I work in, for establishing a connection between 2 hosts for test purposes. Upon establishing the connection, the side defined as the client side failed when calling socket.connect with the correct parameters (I checked) with the error code 111. After searching the web for what this error means, I learned that it means that the connection was actively refused. But the code in the package for establishing the connection is supposed to deal with it, only it knows 10061 as the error code for this same error: The connection is refused. Could it be that there are identical error codes for the same logical errors? Could it be that 111 is a system error of the Linux OS, as 10061 is python's or even another OS? Even so, isn't the entire concept of error codes to unify the logical errors with the same codes? Should I simply add the 111 error code to the handling condition? A: It appears Python is exposing the error code from the OS - the interpretation of the code is OS-dependent. 111 is ECONNREFUSED on many Linux systems, and on Cygwin. 146 is ECONNREFUSED on Solaris. 10061 is WSAECONNREFUSED in winerror.h - it's the Windows Socket API's version of ECONNREFUSED. No doubt on other systems, it's different again. The correct way to handle this is to use symbolic comparisons based on the OS's definition of ECONNREFUSED; that's the way you do it in C, for example. In other words, have a constant called ECONNREFUSED that has the value of ECONNREFUSED for that platform, in a platform-specific library (which will be necessary to link to the OS's socket primitives in any case), and compare error codes with the ECONNREFUSED constant, rather than magic numbers. I don't know what Python's standard approach to OS error codes is. I suspect it's not terribly well thought out.
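To make the check portable rather than hard-coding 111 or 10061, compare against the errno module's symbolic constant; a sketch for Python 2.4 (host and port are placeholders):

import errno
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s.connect(('127.0.0.1', 9))           # placeholder address
except socket.error, e:
    if e.args[0] == errno.ECONNREFUSED:   # 111 on Linux, 10061 on Windows
        print 'connection refused'
    else:
        raise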
Identical Error Codes
I use python 2.4.1 on Linux, and a python package written inside the company I work in, for establishing a connection between 2 hosts for test purposes. Upon establishing the connection, the side defined as the client side failed when calling socket.connect with the correct parameters (I checked) with the error code 111. After searching the web for what this error means, I learned that it means that the connection was actively refused. But the code in the package for establishing the connection is supposed to deal with it, only it knows 10061 as the error code for this same error: The connection is refused. Could it be that there are identical error codes for the same logical errors? Could it be that 111 is a system error of the Linux OS, as 10061 is python's or even another OS? Even so, isn't the entire concept of error codes to unify the logical errors with the same codes? Should I simply add the 111 error code to the handling condition?
[ "It appears Python is exposing the error code from the OS - the interpretation of the code is OS-dependent.\n111 is ECONNREFUSED on many Linux systems, and on Cygwin.\n146 is ECONNREFUSED on Solaris.\n10061 is WSAECONNREFUSED in winerror.h - it's the Windows Socket API's version of ECONNREFUSED.\nNo doubt on other systems, it's different again.\nThe correct way to handle this is use symbolic comparisons based on the OS's definition of ECONNREFUSED; that's the way you do it in C, for example. In other words, have a constant called ECONNREFUSED that has the value of ECONNREFUSED for that platform, in a platform-specific library (which will be necessary to link to the OS's socket primitives in any case), and compare error codes with the ECONNREFUSED constant, rather than magic numbers.\nI don't know what Python's standard approach to OS error codes is. I suspect it's not terribly well thought out.\n" ]
[ 6 ]
[]
[]
[ "error_handling", "python", "sockets" ]
stackoverflow_0000961465_error_handling_python_sockets.txt
Q: Python3 Http Web Server: virtual hosts I am writing a rather simple http web server in python3. The web server needs to be simple - only basic reading from config files, etc. I am using only standard libraries and for now it works rather ok. There is only one requirement for this project, which I can't implement on my own - virtual hosts. I need to have at least two virtual hosts, defined in config files. The problem is that I can't find a way to implement them in python. Does anyone have any guides, articles, maybe some simple implementation of how this can be done? I would be grateful for any help. A: Virtual hosts work by obeying the Host: header in the HTTP request. Just read the headers of the request, and take action based on the value of the Host: header A: For a simple HTTP web server, you can start with the WSGI reference implementation: wsgiref is a reference implementation of the WSGI specification that can be used to add WSGI support to a web server or framework. It provides utilities for manipulating WSGI environment variables and response headers, base classes for implementing WSGI servers, a demo HTTP server that serves WSGI applications,... Modifying the example server to check the HTTP_HOST header, here is a simple app that responds, depending on the virtual host, with a different text. (Extending the example to use a configuration file is left as an exercise). import wsgiref from wsgiref.simple_server import make_server def my_app(environ,start_response): from io import StringIO stdout = StringIO() host = environ["HTTP_HOST"].split(":")[0] if host == "127.0.0.1": print("This is virtual host 1", file=stdout) elif host == "localhost": print("This is virtual host 2", file=stdout) else: print("Unknown virtual host", file=stdout) print("Hello world!", file=stdout) print(file=stdout) start_response("200 OK", [('Content-Type', 'text/plain; charset=utf-8')]) return [stdout.getvalue().encode("utf-8")] def test1(): httpd = make_server('', 8000, my_app) print("Serving HTTP on port 8000...") # Respond to requests until process is killed httpd.serve_forever()
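To exercise the two virtual hosts, send requests that differ only in the Host header; one way from Python 3, assuming the server above is running locally:

import http.client

conn = http.client.HTTPConnection('127.0.0.1', 8000)
conn.request('GET', '/', headers={'Host': 'localhost:8000'})
print(conn.getresponse().read().decode('utf-8'))  # "This is virtual host 2"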
Python3 Http Web Server: virtual hosts
I am writing a rather simple http web server in python3. The web server needs to be simple - only basic reading from config files, etc. I am using only standard libraries and for now it works rather ok. There is only one requirement for this project, which I can't implement on my own - virtual hosts. I need to have at least two virtual hosts, defined in config files. The problem is that I can't find a way to implement them in python. Does anyone have any guides, articles, maybe some simple implementation of how this can be done? I would be grateful for any help.
[ "Virtual hosts work by obeying the Host: header in the HTTP request.\nJust read the headers of the request, and take action based on the value of the Host: header\n", "For a simple HTTP web server, you can start with the WSGI reference implementation:\n\nwsgiref is a reference implementation of the WSGI specification that can be used to add WSGI support to a web server or framework. It provides utilities for manipulating WSGI environment variables and response headers, base classes for implementing WSGI servers, a demo HTTP server that serves WSGI applications,...\n\nModifying the example server to check the HTTP_HOST header, here is a simple app that responds, depending on the virtual host, with a different text. (Extending the example to use a configuration file is left as an exercise).\nimport wsgiref\nfrom wsgiref.simple_server import make_server\n\ndef my_app(environ,start_response):\n from io import StringIO\n stdout = StringIO()\n host = environ[\"HTTP_HOST\"].split(\":\")[0]\n if host == \"127.0.0.1\":\n print(\"This is virtual host 1\", file=stdout)\n elif host == \"localhost\":\n print(\"This is virtual host 2\", file=stdout)\n else:\n print(\"Unknown virtual host\", file=stdout)\n\n print(\"Hello world!\", file=stdout)\n print(file=stdout)\n start_response(b\"200 OK\", [(b'Content-Type',b'text/plain; charset=utf-8')])\n return [stdout.getvalue().encode(\"utf-8\")]\n\ndef test1():\n httpd = make_server('', 8000, my_app)\n print(\"Serving HTTP on port 8000...\")\n\n # Respond to requests until process is killed\n httpd.serve_forever()\n\n" ]
[ 10, 5 ]
[]
[]
[ "http", "python", "python_3.x", "virtualhost" ]
stackoverflow_0000781466_http_python_python_3.x_virtualhost.txt
Q: Python function calls are bleeding scope, stateful, failing to initialize parameters? Before I have the audacity to file a bug report, I thought I'd check my assumptions among wiser Pythonistas here. I encountered a baffling case today, so I whittled it down to a toy example, shown below: #!/usr/bin/env python # -*- coding: UTF-8 -*- """ A little script to demonstrate that a function won't re-initialize its list parameters between calls, but instead allows them to retain state. """ def bleedscope(a=[], b=[]): """ On each call, unless explicitly passed, both `a` and `b` should be initialized as empty lists. """ c = a if b: c.extend(b) return len(c) x = bleedscope(b=[1]) print x # Should be 1, as expected. x = bleedscope(b=[2]) print x # Expect also to be 1, but it's 2. `a` is retained. x = bleedscope(a=[1]) print x # Now 1 as expected. x = bleedscope(b=[3]) print x # 1 as expected? No, it's 3! Insanity! I thought function arguments were local in scope to the function, and were garbage-collected at the end of a function call, never to retain state between them. I have tested the above script on Python 2.5.2 and Python 2.6.1, though, and my understanding does not match the results. Argument a certainly retains state between most of these calls; the most perplexing one being the final call to bleedscope, where it skips the state of the previous call and goes back to the state at the end of the second (i.e., [1, 2]). [I suggest running this in your favorite debugger to see for yourself. If you don't have one, I suggest Winpdb as a solid FOSS standalone Python debugger.] What's going on here? A: In Python default parameter values only get initialized when the def call is parsed. In the case of an object (such as your lists), it gets reused between calls. Take a look at this article about it, which also provides the necessary workaround: http://effbot.org/zone/default-values.htm A: This is your problem: def bleedscope(a=[], b=[]): it should be def bleedscope(a=None, b=None): if a is None: a = [] if b is None: b = [] The default parameters are only executed once when the function is parsed, thus using the same 2 lists every time. A: There is an explanation in the FAQ A: Funnily enough, your input and your output are quite similar, for totally accidental reasons. Actually what happens with Python is that the default values for a and b in your method declaration are "static" values. They are instantiated once at the method definition. So your default "a" is pushed each time you do not pass an "a" as argument. Put a "print a" at the beginning of your method to see that happen.
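You can watch the shared default objects directly; they live on the function itself (Python 2 interpreter session):

>>> def bleedscope(a=[], b=[]):
...     c = a
...     if b:
...         c.extend(b)
...     return len(c)
...
>>> bleedscope.func_defaults   # the default objects, created exactly once
([], [])
>>> bleedscope(b=[1])
1
>>> bleedscope.func_defaults   # the default `a` has been mutated in place
([1], [])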
Python function calls are bleeding scope, stateful, failing to initialize parameters?
Before I have the audacity to file a bug report, I thought I'd check my assumptions among wiser Pythonistas here. I encountered a baffling case today, so I whittled it down to a toy example, shown below: #!/usr/bin/env python # -*- coding: UTF-8 -*- """ A little script to demonstrate that a function won't re-initialize its list parameters between calls, but instead allows them to retain state. """ def bleedscope(a=[], b=[]): """ On each call, unless explicitly passed, both `a` and `b` should be initialized as empty lists. """ c = a if b: c.extend(b) return len(c) x = bleedscope(b=[1]) print x # Should be 1, as expected. x = bleedscope(b=[2]) print x # Expect also to be 1, but it's 2. `a` is retained. x = bleedscope(a=[1]) print x # Now 1 as expected. x = bleedscope(b=[3]) print x # 1 as expected? No, it's 3! Insanity! I thought function arguments were local in scope to the function, and were garbage-collected at the end of a function call, never to retain state between them. I have tested the above script on Python 2.5.2 and Python 2.6.1, though, and my understanding does not match the results. Argument a certainly retains state between most of these calls; the most perplexing one being the final call to bleedscope, where it skips the state of the previous call and goes back to the state at the end of the second (i.e., [1, 2]). [I suggest running this in your favorite debugger to see for yourself. If you don't have one, I suggest Winpdb as a solid FOSS standalone Python debugger.] What's going on here?
[ "In Python default parameter values only get initialized when the def call is parsed. In the case of an object (such as your lists), it gets reused between calls. Take a look at this article about it, which also provides the necessary workaround:\nhttp://effbot.org/zone/default-values.htm\n", "This is your problem:\ndef bleedscope(a=[], b=[]):\n\nit should be\ndef bleedscope(a=None, b=None):\n if a is None: a = []\n if b is None: b = []\n\nThe default parameters are only executed once when the function is parsed, thus using the same 2 lists every time.\n", "There is an explanation in the FAQ\n", "Funnily enough, your input and your output are quite similar, for totally accidental reasons.\nActually what happens with Python is that the default values for a and b in your method declaration are \"static\" values. They are instanciated once at the method definition. So your default \"a\" is pushed each time you do not pass an \"a\" as argument.\nPut a \"print a\" at the beginning of your method to see that happen.\n" ]
[ 15, 8, 5, 1 ]
[]
[]
[ "python", "scope" ]
stackoverflow_0000959113_python_scope.txt
Q: How to enumerate a list of non-string objects in Python? There is a nice class Enum from enum, but it only works for strings. I'm currently using: for index in range(len(objects)): # do something with index and objects[index] I guess it's not the optimal solution due to the premature use of len. How is it possible to do it more efficiently? A: Here is the pythonic way to write this loop: for index, obj in enumerate(objects): # Use index, obj. enumerate works on any sequence regardless of the types of its elements. It is a builtin function. Edit: After running some timeit tests using Python 2.5, I found enumerate to be slightly slower: >>> timeit.Timer('for i in xrange(len(seq)): x = i + seq[i]', 'seq = range(100)').timeit() 10.322299003601074 >>> timeit.Timer('for i, e in enumerate(seq): x = i + e', 'seq = range(100)').timeit() 11.850601196289062
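A small aside: from Python 2.6 on, enumerate also accepts an optional start argument, which avoids index arithmetic when you want numbering that is not zero-based:

>>> seq = ['a', 'b', 'c']
>>> list(enumerate(seq, 1))
[(1, 'a'), (2, 'b'), (3, 'c')]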
How to enumerate a list of non-string objects in Python?
There is a nice class Enum from enum, but it only works for strings. I'm currently using: for index in range(len(objects)): # do something with index and objects[index] I guess it's not the optimal solution due to the premature use of len. How is it possible to do it more efficiently?
[ "Here is the pythonic way to write this loop:\nfor index, obj in enumerate(objects):\n # Use index, obj.\n\nenumerate works on any sequence regardless of the types of its elements. It is a builtin function.\nEdit:\nAfter running some timeit tests using Python 2.5, I found enumerate to be slightly slower:\n>>> timeit.Timer('for i in xrange(len(seq)): x = i + seq[i]', 'seq = range(100)').timeit()\n10.322299003601074\n>>> timeit.Timer('for i, e in enumerate(seq): x = i + e', 'seq = range(100)').timeit()\n11.850601196289062\n\n" ]
[ 13 ]
[]
[]
[ "enumeration", "loops", "python", "sequence" ]
stackoverflow_0000962082_enumeration_loops_python_sequence.txt
Q: Making a plain ASCII/UTF-8 request/stream HTTP POST request in Python? I'm reading some documentation on a service I'm trying to use, and it reads something like this: All requests must be sent using HTTP Post. The XML engine only accepts plain ASCII (text) UTF-8 requests/streams. Encoded streams are not acceptable. All requests/responses are XML. But I really just don't understand what it's asking for. From what I've been reading on HTTP POST in Python, you still need to encode key=value pairs to make a request, where it sounds like they just want the plain XML itself (as a multipart, maybe? I am very confused). Are they giving me enough information and I'm just fundamentally misunderstanding their documentation, or should I ask for more details? A: using urllib2.Request import urllib2 req = urllib2.Request("http://foo.com/post_here", "<xml data to post>") response = urllib2.urlopen(req) the_page = response.read() A: "plain ASCII UTF-8" is a contradiction in terms, IMHO -- ASCII is a subset of UTF-8, though. Try sending UTF-8 including some "special" (non-ASCII) character and see what happens (or, if you can, do ask them to reword said contradition-in-terms!-).
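Building on the urllib2 answer, a hedged sketch that also sets the Content-Type header, which XML endpoints usually expect (the URL and payload are placeholders):

import urllib2

xml_body = '<?xml version="1.0" encoding="UTF-8"?><request>...</request>'
req = urllib2.Request('http://example.com/xml-engine', xml_body)  # placeholder URL
req.add_header('Content-Type', 'text/xml; charset=utf-8')
response = urllib2.urlopen(req)
print response.read()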
Making a plain ASCII/UTF-8 request/stream HTTP POST request in Python?
I'm reading some documentation on a service I'm trying to use, and it reads something like this: All requests must be sent using HTTP Post. The XML engine only accepts plain ASCII (text) UTF-8 requests/streams. Encoded streams are not acceptable. All requests/responses are XML. But I really just don't understand what it's asking for. From what I've been reading on HTTP POST in Python, you still need to encode key=value pairs to make a request, where it sounds like they just want the plain XML itself (as a multipart, maybe? I am very confused). Are they giving me enough information and I'm just fundamentally misunderstanding their documentation, or should I ask for more details?
[ "using urllib2.Request\nimport urllib2\nreq = urllib2.Request(\"http://foo.com/post_here\", \"<xml data to post>\")\nresponse = urllib2.urlopen(req)\nthe_page = response.read()\n\n", "\"plain ASCII UTF-8\" is a contradiction in terms, IMHO -- ASCII is a subset of UTF-8, though. Try sending UTF-8 including some \"special\" (non-ASCII) character and see what happens (or, if you can, do ask them to reword said contradition-in-terms!-).\n" ]
[ 2, 1 ]
[]
[]
[ "http", "python" ]
stackoverflow_0000962179_http_python.txt
Q: Different behavior of python logging module when using mod_python We have a nasty problem where we see that the python logging module is behaving differently when running with mod_python on our servers. When executing the same code in the shell, or in django with the runserver command or with mod_wsgi, the behavior is correct: import logging logger = logging.getLogger('site-errors') logging.debug('logger=%s' % (logger.__dict__)) logging.debug('logger.parent=%s' % (logger.parent.__dict__)) logger.error('some message that is not logged.') We then get the following logging: 2009-05-28 10:36:43,740,DEBUG,error_middleware.py:31,[logger={'name': 'site-errors', 'parent': <logging.RootLogger instance at 0x85f8aac>, 'handlers': [], 'level': 0, 'disabled': 0, 'manager': <logging.Manager instance at 0x85f8aec>, 'propagate': 1, 'filters': []}] 2009-05-28 10:36:43,740,DEBUG,error_middleware.py:32,[logger.parent={'name': 'root', 'parent': None, 'handlers': [<logging.StreamHandler instance at 0x8ec612c>, <logging.handlers.RotatingFileHandler instance at 0x8ec616c>], 'level': 10, 'disabled': 0, 'propagate': 1, 'filters': []}] As one can see, no handlers or level is set for the child logger 'site-errors'. The logging configuration is done in the settings.py: MONITOR_LOGGING_CONFIG = ROOT + 'error_monitor_logging.conf' import logging import logging.config logging.config.fileConfig(MONITOR_LOGGING_CONFIG) if CONFIG == CONFIG_DEV: DB_LOGLEVEL = logging.INFO else: DB_LOGLEVEL = logging.WARNING The second problem is that we also add a custom handler in the __init__.py that resides in the same folder as error_middleware.py: import logging from django.conf import settings from db_log_handler import DBLogHandler handler = DBLogHandler() handler.setLevel(settings.DB_LOGLEVEL) logging.root.addHandler(handler) The custom handler cannot be seen in the logging! If someone has an idea where the problem lies, please let us know! Don't hesitate to ask for additional information. That will certainly help to solve the problem. A: It may be better if you do not configure logging in settings.py. We configure our logging in our root urls.py. This seems to work out better. I haven't read enough Django source to know why, precisely, it's better, but it's working out well for us. I would add custom handlers here, also. Also, look closely at mod_wsgi. It seems to behave much better than mod_python. A: The problem is not solved by using mod_wsgi. I could solve the problem by placing the complete configuration into one file. Mixing file and code configuration seems to create problems with apache (whether using mod_wsgi or mod_python). To use a custom logging handler with file configuration, I had to do the following: import logging import logging.config logging.custhandlers = sitemonitoring.db_log_handler logging.config.fileConfig(settings.MONITORING_FILE_CONFIG) From the settings.py I cannot import the sitemonitoring.db_log_handler, so I have to place this code in the root urls.py. In the config file, I refer to the DBLogHandler with the following statement [handler_db] class=custhandlers.DBLogHandler() level=ERROR args=(,) PS: Note that the custhandler 'attribute' is created dynamically and can have another name. This is an advantage of using a dynamic language. A: You don't appear to have posted all the relevant information - for example, where is your logging configuration file?
You say that: When executing the same code in the shell, or in django with the runserver command or with mod_wsgi, the behavior is correct You don't make clear whether the logging output you showed is from one of these environments, or whether it's from a mod_python run. It doesn't look wrong - in your code you added handlers to the root, not to logger 'site-errors'. You also set a level on the handler, not the logger - so you wouldn't expect to see a level set for the 'site-errors' logger in the logging output, neh? Levels can be set on both loggers and handlers and they are not the same, though they filter out events in the same way. The issue about custom handlers is easily explained if you look at the logging documentation for configuration, see http://docs.python.org/library/logging.html (search for "the class entry indicates") This explains that any handler class described in the configuration file is eval()'d in the logging packages namespace. So, by binding logging.custhandlers to your custom handlers module and then stating "custhandlers.MyCustomClass" in the config file, the eval() produces the expected result. You could just as well have done logging.sitemonitoring = sitemonitoring and specified the handler class as sitemonitoring.db_log_handler.DBLogHandler which would work just as well (as long as the db_log_handler subpackage has been imported already). BTW the reason why people sometimes have problems configuring logging in settings.py is due to Django's import magic causing circular import problems. I generally configure logging in settings.py and it works fine unless you want to import certain bits of Django (e.g. in django.db - because the app import logic is in django.db, you can run into circular import issues if you try to import django.db.x in settings.py).
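For reference, a custom handler such as DBLogHandler is normally a logging.Handler subclass that overrides emit(); a minimal skeleton with the database write left as a placeholder:

import logging

class DBLogHandler(logging.Handler):
    def emit(self, record):
        message = self.format(record)
        # placeholder: write `message` (plus record.levelname, etc.)
        # to the database here, e.g. via the Django ORM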
Different behavior of python logging module when using mod_python
We have a nasty problem where we see that the python logging module is behaving differently when running with mod_python on our servers. When executing the same code in the shell, or in django with the runserver command or with mod_wsgi, the behavior is correct:

import logging
logger = logging.getLogger('site-errors')
logging.debug('logger=%s' % (logger.__dict__))
logging.debug('logger.parent=%s' % (logger.parent.__dict__))
logger.error('some message that is not logged.')

We then see the following logging:

2009-05-28 10:36:43,740,DEBUG,error_middleware.py:31,[logger={'name': 'site-errors', 'parent': <logging.RootLogger instance at 0x85f8aac>, 'handlers': [], 'level': 0, 'disabled': 0, 'manager': <logging.Manager instance at 0x85f8aec>, 'propagate': 1, 'filters': []}]
2009-05-28 10:36:43,740,DEBUG,error_middleware.py:32,[logger.parent={'name': 'root', 'parent': None, 'handlers': [<logging.StreamHandler instance at 0x8ec612c>, <logging.handlers.RotatingFileHandler instance at 0x8ec616c>], 'level': 10, 'disabled': 0, 'propagate': 1, 'filters': []}]

As one can see, no handlers or level is set for the child logger 'site-errors'. The logging configuration is done in the settings.py:

MONITOR_LOGGING_CONFIG = ROOT + 'error_monitor_logging.conf'

import logging
import logging.config

logging.config.fileConfig(MONITOR_LOGGING_CONFIG)

if CONFIG == CONFIG_DEV:
    DB_LOGLEVEL = logging.INFO
else:
    DB_LOGLEVEL = logging.WARNING

The second problem is that we also add a custom handler in the __init__.py that resides in the same folder as error_middleware.py:

import logging
from django.conf import settings
from db_log_handler import DBLogHandler

handler = DBLogHandler()
handler.setLevel(settings.DB_LOGLEVEL)
logging.root.addHandler(handler)

The custom handler cannot be seen in the logging! If someone has an idea where the problem lies, please let us know! Don't hesitate to ask for additional information. That will certainly help to solve the problem.
[ "It may be better if you do not configure logging in settings.py.\nWe configure your logging in our root urls.py. This seems to work out better. I haven't read enough Django source to know why, precisely, it's better, but it's working out well for us. I would add custom handlers here, also. \nAlso, look closely at mod_wsgi. It seems to behave much better than mod_python.\n", "The problem is not solved by using mod_wsgi.\nI could solve the problem by placing the complete configuration into one file. Mixing file and code configuration seems to create problems with apache (whether using mod_wsgi or mod_python).\nTo use a custom logging handler with file configuration, I had to do the following:\nimport logging\nimport logging.config\nlogging.custhandlers = sitemonitoring.db_log_handler\nlogging.config.fileConfig(settings.MONITORING_FILE_CONFIG)\n\nFrom the settings.py I cannot import the sitemonitoring.db_log_handler, so I have to place this code in the root urls.py.\nIn the config file, I refer to the DBLogHandler with the following statement\n[handler_db]\nclass=custhandlers.DBLogHandler()\nlevel=ERROR\nargs=(,)\n\nPS: Note that the custhandler 'attribute' is created dynamically and can have another name. This is an advantage of using a dynamic language.\n", "You don't appear to have posted all the relevant information - for example, where is your logging configuration file?\nYou say that:\n\nWhen executing the same code in the\n shell, or in django with the runserver\n command or with mod_wsgi, the behavior\n is correct\n\nYou don't make clear whether the logging output you showed is from one of these environments, or whether it's from a mod_python run. It doesn't look wrong - in your code you added handlers to the root, not to logger 'site-errors'. You also set a level on the handler, not the logger - so you wouldn't expect to see a level set for the 'site-errors' logger in the logging output, neh? Levels can be set on both loggers and handlers and they are not the same, though they filter out events in the same way.\nThe issue about custom handlers is easily explained if you look at the logging documentation for configuration, see\nhttp://docs.python.org/library/logging.html (search for \"the class entry indicates\")\nThis explains that any handler class described in the configuration file is eval()'d in the logging packages namespace. So, by binding logging.custhandlers to your custom handlers module and then stating \"custhandlers.MyCustomClass\" in the config file, the eval() produces the expected result. You could just as well have done\nlogging.sitemonitoring = sitemonitoring\nand specified the handler class as\nsitemonitoring.db_log_handler.DBLogHandler\nwhich would work just as well (as long as the db_log_handler subpackage has been imported already).\nBTW the reason why people sometimes have problems configuring logging in settings.py is due to Django's import magic causing circular import problems. I generally configure logging in settings.py and it works fine unless you want to import certain bits of Django (e.g. in django.db - because the app import logic is in django.db, you can run into circular import issues if you try to import django.db.x in settings.py).\n" ]
[ 5, 0, 0 ]
[]
[]
[ "django", "logging", "mod_python", "python" ]
stackoverflow_0000919990_django_logging_mod_python_python.txt
Q: How can I speed up a web-application? (Avoid rebuilding a structure.) After having successfully built a static data structure (see here), I want to avoid having to build it from scratch every time a user requests an operation on it.

My naïve first idea was to dump the structure (using python's pickle) into a file and load this file for each query. Needless to say (as I figured out), this turns out to be too time-consuming, as the file is rather large.

Any ideas how I can easily speed up this thing? Splitting the file into multiple files? Or a program running on the server? (How difficult is this to implement?)

Thanks for your help!

A: You can dump it in a memory cache (such as memcached). This method has the advantage of cache key invalidation. When the underlying data changes you can invalidate your cached data.

EDIT
Here's the python implementation of memcached: python-memcached. Thanks NicDumZ.

A: If you can rebuild your Python runtime with the patches offered in the Unladen Swallow project, you should see speedups of 40% to 150% in pickling, 36% to 56% in unpickling, according to their benchmarks; maybe that might help.

A: My suggestion would be not to rely on having an object structure. Instead have a byte array (or mmap'd file etc) which you can do random access operations on and implement the cross-referencing using pointers inside that structure.

True, it will introduce the concept of pointers to your code, but it will mean that you don't need to unpickle it each time the handler process starts up, and it will also use a lot less memory (as there won't be the overhead of python objects).

As your database is going to be fixed during the lifetime of a handler process (I imagine), you won't need to worry about concurrent modifications or locking etc.

Even if you did what you suggest, you shouldn't have to rebuild it on every user request; just keep an instance in memory in your worker process(es), which means it won't take too long to build as you only build it when a new worker process starts.

A: The number one way to speed up your web application, especially when you have lots of mostly-static modules, classes and objects that need to be initialized: use a way of serving files that supports serving multiple requests from a single interpreter, such as mod_wsgi, mod_python, SCGI, FastCGI, Google App Engine, a Python web server... basically anything except a standard CGI script that starts a new Python process for every request. With this approach, you can make your data structure a global object that only needs to be read from a serialized format for each new process, which is much less frequent.
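Tying the last two answers together, here is a minimal sketch of the "load once per worker process" pattern; the file name and the cPickle choice are illustrative assumptions, not from the original thread.

import cPickle as pickle

_structure = None   # module-level cache, lives as long as the worker process

def get_structure():
    # Unpickle the structure on first use, then reuse it for every request.
    global _structure
    if _structure is None:
        f = open('structure.pickle', 'rb')
        try:
            _structure = pickle.load(f)
        finally:
            f.close()
    return _structure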
How can I speed up a web-application? (Avoid rebuilding a structure.)
After having successfully built a static data structure (see here), I want to avoid having to build it from scratch every time a user requests an operation on it.

My naïve first idea was to dump the structure (using python's pickle) into a file and load this file for each query. Needless to say (as I figured out), this turns out to be too time-consuming, as the file is rather large.

Any ideas how I can easily speed up this thing? Splitting the file into multiple files? Or a program running on the server? (How difficult is this to implement?)

Thanks for your help!
[ "You can dump it in a memory cache (such as memcached).\nThis method has the advantage of cache key invalidation. When underlying data changes you can invalidate your cached data.\nEDIT\nHere's the python implementation of memcached: python-memcached. Thanks NicDumZ.\n", "If you can rebuild your Python runtime with the patches offered in the Unladen Swallow project, you should see speedups of 40% to 150% in pickling, 36% to 56% in unpickling, according to their benchmarks; maybe that might help. \n", "My suggestion would be not to rely on having an object structure. Instead have a byte array (or mmap'd file etc) which you can do random access operations on and implement the cross-referencing using pointers inside that structure.\nTrue, it will introduce the concept of pointers to your code, but it will mean that you don't need to unpickle it each time the handler process starts up, and it will also use a lot less memory (as there won't be the overhead of python objects).\nAs your database is going to be fixed during the lifetime of a handler process (I imagine), you won't need to worry about concurrent modifications or locking etc.\nEven if you did what you suggest, you shouldn't have to rebuild it on every user request, just keep an instance in memory in your worker process(es), which means it won't take too long to build as you only build it when a new worker process starts.\n", "The number one way to speed up your web application, especially when you have lots of mostly-static modules, classes and objects that need to be initialized: use a way of serving files that supports serving multiple requests from a single interpreter, such as mod_wsgi, mod_python, SCGI, FastCGI, Google App Engine, a Python web server... basically anything except a standard CGI script that starts a new Python process for every request. With this approach, you can make your data structure a global object that only needs to be read from a serialized format for each new process—which is much less frequent.\n" ]
[ 4, 3, 2, 2 ]
[]
[]
[ "apache", "pickle", "python", "web_applications" ]
stackoverflow_0000961981_apache_pickle_python_web_applications.txt
Q: Bazaar: Modify file content before commit via hook? I'm switching from SVN to Bzr for my private projects. There is one feature missing for me, which SVN provides: the replacement of a $Id:$ placeholder with the latest version identification.

So far, Bzr provides hooks to do some tricks within the commit process. I've managed to get a list of modified files and manipulate them on the local disk. The problem I encounter is that the snapshot, which is taken from the files that are part of the commit, is made before my modification. The result is that I have a change of my files, but only local.

The workflow I want to build is:

Call Bzr commit
modify the $Id:$ macro
tell bzr that this modified set is the changeset
let Bzr do the rest of its work

Any ideas?

A: Use this extension: http://launchpad.net/bzr-keywords
Bazaar: Modify file content before commit via hook?
I'm switching from SVN to Bzr for my private projects. There is one feature missing for me, which SVN provides: the replacement of a $Id:$ placeholder with the latest version identification.

So far, Bzr provides hooks to do some tricks within the commit process. I've managed to get a list of modified files and manipulate them on the local disk. The problem I encounter is that the snapshot, which is taken from the files that are part of the commit, is made before my modification. The result is that I have a change of my files, but only local.

The workflow I want to build is:

Call Bzr commit
modify the $Id:$ macro
tell bzr that this modified set is the changeset
let Bzr do the rest of its work

Any ideas?
[ "Use this extension: http://launchpad.net/bzr-keywords\n" ]
[ 3 ]
[]
[]
[ "bazaar", "bazaar_plugins", "python" ]
stackoverflow_0000962228_bazaar_bazaar_plugins_python.txt
Q: How to call up attributes in python 3.1 -- easy In Python 2.5, all you needed to do was type "dir(filename)" and that pulled up the attributes. What is the command in 3.1?

A: Just dir(whateverobject). Example:

Python 3.1rc1 (r31rc1:73141, Jun 2 2009, 12:50:02)
[GCC 4.0.1 (Apple Inc. build 5493)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> x=23
>>> dir(x)
['__abs__', '__add__', '__and__', '__bool__', '__ceil__', '__class__', '__delattr__', '__divmod__', '__doc__', '__eq__', '__float__', '__floor__', '__floordiv__', '__format__', '__ge__', '__getattribute__', '__getnewargs__', '__gt__', '__hash__', '__index__', '__init__', '__int__', '__invert__', '__le__', '__lshift__', '__lt__', '__mod__', '__mul__', '__ne__', '__neg__', '__new__', '__or__', '__pos__', '__pow__', '__radd__', '__rand__', '__rdivmod__', '__reduce__', '__reduce_ex__', '__repr__', '__rfloordiv__', '__rlshift__', '__rmod__', '__rmul__', '__ror__', '__round__', '__rpow__', '__rrshift__', '__rshift__', '__rsub__', '__rtruediv__', '__rxor__', '__setattr__', '__sizeof__', '__str__', '__sub__', '__subclasshook__', '__truediv__', '__trunc__', '__xor__', 'bit_length', 'conjugate', 'denominator', 'imag', 'numerator', 'real']

Please show us exactly what isn't working for you -- give us a chance to help!
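If the dunder noise is what makes dir() hard to read, a quick filter helps; this sketch works the same on 2.x and 3.x:

x = 23
public = [name for name in dir(x) if not name.startswith('_')]
print(public)
# ['bit_length', 'conjugate', 'denominator', 'imag', 'numerator', 'real']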
How to call up attributes in python 3.1 -- easy
In Python 2.5, all you needed to do was type "dir(filename)" and that pulled up the attributes. What is the command in 3.1?
[ "Just dir(whateverobject). Example:\nPython 3.1rc1 (r31rc1:73141, Jun 2 2009, 12:50:02) \n[GCC 4.0.1 (Apple Inc. build 5493)] on darwin\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> x=23\n>>> dir(x)\n['__abs__', '__add__', '__and__', '__bool__', '__ceil__', '__class__', '__delattr__', '__divmod__', '__doc__', '__eq__', '__float__', '__floor__', '__floordiv__', '__format__', '__ge__', '__getattribute__', '__getnewargs__', '__gt__', '__hash__', '__index__', '__init__', '__int__', '__invert__', '__le__', '__lshift__', '__lt__', '__mod__', '__mul__', '__ne__', '__neg__', '__new__', '__or__', '__pos__', '__pow__', '__radd__', '__rand__', '__rdivmod__', '__reduce__', '__reduce_ex__', '__repr__', '__rfloordiv__', '__rlshift__', '__rmod__', '__rmul__', '__ror__', '__round__', '__rpow__', '__rrshift__', '__rshift__', '__rsub__', '__rtruediv__', '__rxor__', '__setattr__', '__sizeof__', '__str__', '__sub__', '__subclasshook__', '__truediv__', '__trunc__', '__xor__', 'bit_length', 'conjugate', 'denominator', 'imag', 'numerator', 'real']\n\nPlease show us exactly what isn't working for you -- give us a chance to help!\n" ]
[ 3 ]
[]
[]
[ "python" ]
stackoverflow_0000962877_python.txt
Q: Python Generator - what not to use it for Just looking at Python generators, really impressed with them, but are there any things not to use them for?

I was thinking of past C coding where reading from a file, or user actions, would be areas. For example, could the generator be used to prompt the user for input (base data entry?) and the calling function process that input?

Are there any performance or cleanup issues to be concerned with?

A: Generators don't persist well. Generally, you get an error trying to persist a generator object.

>>> def generatorForEvenKeys( aDictionary ):
        for k in aDictionary:
            if k % 2 == 0: yield aDictionary[k]

>>> x = generatorForEvenKeys( someDictionary )
>>> pickle.dump(x,file('temp.dat','wb'))

Gets you the following error:

TypeError: can't pickle generator objects

A: One problem with generators is that they get "consumed." This means that if you need to iterate over the sequence again, you need to create the generator again.

If lazy evaluation is an issue, then you probably don't want a generator expression. For example, if you want to perform all your calculations up front (e.g. so that you can release a resource), then a list comprehension or for loop is probably best.

If you use psyco, you'll get a significant speed increase for list expressions and for loops, but not for generators.

Also rather obviously, if you need to get the length of your sequence up front, then you don't want a generator.

A: You use a generator when you want to have something be iterable, without holding the entire list in memory (this is why xrange supports much longer sequences than range in Python 2.x and lower)

When you need to load the whole "list of stuff to yield" into memory, there's not much point in using a generator - you may as well just return a list.

For a (slightly contrived) example:

def my_pointless_generator(x):
    thedata = range(x) # or thedata = list(range(x)) in Python 3.x
    for x in thedata:
        yield x

..can be rewritten just as efficiently as..

def my_pointless_generator(x):
    return range(x)
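A quick interpreter sketch of the "consumed" behaviour described in the second answer:

>>> gen = (n * n for n in range(3))
>>> list(gen)
[0, 1, 4]
>>> list(gen)   # the generator is exhausted; a second pass yields nothing
[]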
Python Generator - what not to use it for
Just looking at Python generators, really impressed with them, but are there any things not to use them for?

I was thinking of past C coding where reading from a file, or user actions, would be areas. For example, could the generator be used to prompt the user for input (base data entry?) and the calling function process that input?

Are there any performance or cleanup issues to be concerned with?
[ "Generators don't persist well.\nGenerally, you get an error trying to persist a generator object.\n>>> def generatorForEvenKeys( aDictionary ):\n for k in aDictionary:\n if k % 2 == 0: yield aDictionary[k]\n\n>>> x = generatorForEvenKeys( someDictionary )\n>>> pickle.dump(x,file('temp.dat','wb'))\n\nGets you the following error:\nTypeError: can't pickle generator objects\n\n", "One problem with generators is that they get \"consumed.\" This means that if you need to iterate over the sequence again, you need to create the generator again.\nIf lazy evaluation is an issue, then you probably don't want a generator expression. For example, if you want to perform all your calculations up front (e.g. so that you can release a resource), then a list comprehension or for loop is probably best.\nIf you use psyco, you'll get a significant speed increase for list expressions and for loops, but not for generators.\nAlso rather obviously, if you need to get the length of your sequence up front, then you don't want a generator.\n", "You use a generator when you want to have something be iterateable, without holding the entire list in memory (this is why xrange supports much longer sequences than range in Python 2.x and lower)\nWhen you need to load the whole \"list of stuff to yield\" into memory, there's not much point in using a generator - you may as well just return a list.\nFor a (slightly contrived) example:\ndef my_pointless_generator(x):\n thedata = range(x) # or thedata = list(range(x)) in Python 3.x\n for x in thedata:\n yield x\n\n..can be rewritten just as efficiently as..\ndef my_pointless_generator(x):\n return range(x)\n\n" ]
[ 13, 12, 1 ]
[]
[]
[ "python" ]
stackoverflow_0000961848_python.txt
Q: Django EmailMultiAlternatives and HTML e-mail display in Outlook 2003 on Win2003 I'm using django.core.mail.EmailMultiAlternatives when sending e-mails from my django app in an attempt to make sure that the message downgrades to text if the e-mail client doesn't support HTML.

Here is my send_email method:

def send_email(self, from_address, to_list, subject, msg_text, msg_html):
    subject=subject.replace('\r','').replace('\n',' ')
    self.msg = EmailMultiAlternatives(subject, msg_text, from_address, to_list)
    self.msg.attach_alternative(msg_html, "text/html")
    self.msg.content_subtype = "html"
    self.msg.send()

It works great with Gmail, Hotmail and many other e-mail clients - displaying the HTML content without a problem. But it will not display the HTML content in Outlook 2003 running on Win2003 - just the text version.

If I forcefully put the HTML in the EmailMultiAlternatives call, i.e. use msg_html instead of msg_text like so:

self.msg = EmailMultiAlternatives(subject, msg_html, from_address, to_list)

then it works correctly in all clients; but that means that there is no text fallback for clients that don't support HTML or (more likely) that have disabled support for it.

I think it is worth mentioning that the e-mail is being generated on a django app running on Mac OS X (just in case it has to do with line terminator differences between the OSes). I see that people using other languages have had similar problems with Outlook... I wonder if anyone has any idea of WHY Outlook would behave differently and if there is a simple fix that can be applied in my code?

A: I don't have an Outlook installation available to test this, so I'm wondering about the reason for the fifth line in your function.

self.msg.content_subtype = "html"

I don't know much about multipart email internals, but on my system that line causes both parts of the message to have a content-type of text/html. Leaving it out produces a message with "Content-Type: text/plain" on the first part and "Content-Type: text/html" on the second.

In any case, one of the answers to the question about Java mentions changing the character set to iso-8859-1. I think you should be able to do that with django.core.mail.

The EmailMessage class (from which EmailMultiAlternatives inherits) has an attribute named "encoding" which sets the charset to use. By default it's None, so the default charset of utf-8 (unless overridden in settings) is used instead.

In other words, add something like the following before the send line in the function listed in the question:

self.msg.encoding = "iso-8859-1"

Unfortunately, that will only change the encoding specified on the first part (msg_text in the function above). The function that attaches the alternative content doesn't seem to use the encoding attribute. I'm not sure it's the correct approach but I subclassed EmailMultiAlternatives to override the relevant function and it seemed to work okay.

class EmailMultiAlternativesWithEncoding(EmailMultiAlternatives):
    def _create_attachment(self, filename, content, mimetype=None):
        """
        Converts the filename, content, mimetype triple into a MIME attachment
        object. Use self.encoding when handling text attachments.
        """
        if mimetype is None:
            mimetype, _ = mimetypes.guess_type(filename)
            if mimetype is None:
                mimetype = DEFAULT_ATTACHMENT_MIME_TYPE
        basetype, subtype = mimetype.split('/', 1)
        if basetype == 'text':
            encoding = self.encoding or settings.DEFAULT_CHARSET
            attachment = SafeMIMEText(smart_str(content,
                settings.DEFAULT_CHARSET), subtype, encoding)
            # original text being replaced above (not last argument)
            # attachment = SafeMIMEText(smart_str(content,
            #     settings.DEFAULT_CHARSET), subtype, settings.DEFAULT_CHARSET)
        else:
            # Encode non-text attachments with base64.
            attachment = MIMEBase(basetype, subtype)
            attachment.set_payload(content)
            Encoders.encode_base64(attachment)
        if filename:
            attachment.add_header('Content-Disposition', 'attachment',
                filename=filename)
        return attachment

I'm not sure if the "smart_str(content, settings.DEFAULT_CHARSET)" part should also reference "encoding" rather than "settings.DEFAULT_CHARSET", but that's how the message body text handling is written (django.core.mail.EmailMessage.message).

As I said, I don't have Outlook so I can't actually test the Outlook aspect, but it does seem to change the charset to iso-8859-1 for both parts.
Django EmailMultiAlternatives and HTML e-mail display in Outlook 2003 on Win2003
I'm using django.core.mail.EmailMultiAlternatives when sending e-mails from my django app in an attempt to make sure that the message downgrades to text if the e-mail client doesn't support HTML.

Here is my send_email method:

def send_email(self, from_address, to_list, subject, msg_text, msg_html):
    subject=subject.replace('\r','').replace('\n',' ')
    self.msg = EmailMultiAlternatives(subject, msg_text, from_address, to_list)
    self.msg.attach_alternative(msg_html, "text/html")
    self.msg.content_subtype = "html"
    self.msg.send()

It works great with Gmail, Hotmail and many other e-mail clients - displaying the HTML content without a problem. But it will not display the HTML content in Outlook 2003 running on Win2003 - just the text version.

If I forcefully put the HTML in the EmailMultiAlternatives call, i.e. use msg_html instead of msg_text like so:

self.msg = EmailMultiAlternatives(subject, msg_html, from_address, to_list)

then it works correctly in all clients; but that means that there is no text fallback for clients that don't support HTML or (more likely) that have disabled support for it.

I think it is worth mentioning that the e-mail is being generated on a django app running on Mac OS X (just in case it has to do with line terminator differences between the OSes). I see that people using other languages have had similar problems with Outlook... I wonder if anyone has any idea of WHY Outlook would behave differently and if there is a simple fix that can be applied in my code?
[ "I don't have an Outlook installation available to test this, so I'm wondering about the reason for the fifth line in your function.\nself.msg.content_subtype = \"html\"\nI don't know much about multipart email internals, but on my system that line causes both parts of the message have a content-type of text/html. Leaving it out produces a message with \"Content-Type: text/plain\" on the first part and \"Content-Type: text/html\" on the second.\nIn any case, one of the answers to the question about Java mentions changing the character set to iso-8859-1. I think you should be able to do that with django.core.mail.\nThe EmailMessage class (from which EmailMultiAlternatives inherits) has an attribute named \"encoding\" which sets the charset to use. By default it's None so the default charset of utf-8 (unless overridden in settings) is used instead.\nIn other words, add something like the following before the send line in the function listed in the question:\nself.msg.content_subtype = \"iso-8859-1\"\nUnfortunately, that will only change the encoding specified on the first part (msg_text in the function above). The function that attaches the alternative content doesn't seem to use the encoding attribute. I'm not sure it's the correct approach but I subclassed EmailMultiAlternatives to override the relevant function and it seemed to work okay.\nclass EmailMultiAlternativesWithEncoding(EmailMultiAlternatives):\n def _create_attachment(self, filename, content, mimetype=None):\n \"\"\"\n Converts the filename, content, mimetype triple into a MIME attachment\n object. Use self.encoding when handling text attachments.\n \"\"\"\n if mimetype is None:\n mimetype, _ = mimetypes.guess_type(filename)\n if mimetype is None:\n mimetype = DEFAULT_ATTACHMENT_MIME_TYPE\n basetype, subtype = mimetype.split('/', 1)\n if basetype == 'text':\n encoding = self.encoding or settings.DEFAULT_CHARSET\n attachment = SafeMIMEText(smart_str(content,\n settings.DEFAULT_CHARSET), subtype, encoding)\n # original text being replaced above (not last argument)\n # attachment = SafeMIMEText(smart_str(content,\n # settings.DEFAULT_CHARSET), subtype, settings.DEFAULT_CHARSET)\n else:\n # Encode non-text attachments with base64.\n attachment = MIMEBase(basetype, subtype)\n attachment.set_payload(content)\n Encoders.encode_base64(attachment)\n if filename:\n attachment.add_header('Content-Disposition', 'attachment',\n filename=filename)\n return attachment\nI'm not sure if the \"smart_str(content, settings.DEFAULT_CHARSET)\" part should also reference \"encoding\" rather than \"settings.DEFAULT_CHARSET\" but that's the message body handling text is written (django.core.mail.EmailMessage.message).\nAs I said, I don't have Outlook so I can't actually test the Outlook aspect but it does seem to change the charset to iso-8859-1 for both parts.\n" ]
[ 5 ]
[]
[]
[ "django", "email", "html", "python" ]
stackoverflow_0000959985_django_email_html_python.txt
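A footnote to the Outlook thread above: with the encoding attribute in place, the poster's send_email reduces to the following sketch. It is untested against Outlook itself, and it drops the content_subtype override that the answer questioned:

from django.core.mail import EmailMultiAlternatives

def send_email(self, from_address, to_list, subject, msg_text, msg_html):
    subject = subject.replace('\r', '').replace('\n', ' ')
    self.msg = EmailMultiAlternatives(subject, msg_text, from_address, to_list)
    self.msg.attach_alternative(msg_html, "text/html")
    self.msg.encoding = "iso-8859-1"   # charset used for the text parts
    self.msg.send()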
Q: HTML tags within JSON (in Python) I understand it's not a desirable circumstance, however if I NEEDED to have some kind of HTML within JSON tags, e.g.:

{
  "node":
  {
    "list":"<ul><li class="lists">Hello World</li><ul>"
  }
}

is this possible to do in Python without requiring it to be escaped beforehand? It will be a string initially so I was thinking about writing a regular expression to attempt to match and escape these prior to processing, but I just want to make sure there isn't an easier way.

A: Well, depending on how varied your HTML is, you can use single quotes in HTML fine, so you could do:

{
  "node":
  {
    "list": "<ul><li class='lists'>Hello World</li><ul>"
  }
}

However, with simplejson, which is built into Python 2.6 as the json module, it does any escaping you need automatically:

>>> import simplejson
>>> simplejson.dumps({'node': {'list': '<ul><li class="lists">Hello World</li><ul>'}})
'{"node": {"list": "<ul><li class=\\"lists\\">Hello World</li><ul>"}}'

A: You can have arbitrary strings there, including ones which happen to contain HTML tags (the only issue with your example is the inner " which would confuse any parser).
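A round-trip sketch with the stdlib json module (2.6+; use import simplejson as json on 2.5) showing that the escaping is symmetric:

import json

payload = {"node": {"list": '<ul><li class="lists">Hello World</li></ul>'}}
encoded = json.dumps(payload)           # the inner quotes come out as \"
assert json.loads(encoded) == payload   # and decode back unchanged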
HTML tags within JSON (in Python)
I understand it's not a desirable circumstance, however if I NEEDED to have some kind of HTML within JSON tags, e.g.:

{
  "node":
  {
    "list":"<ul><li class="lists">Hello World</li><ul>"
  }
}

is this possible to do in Python without requiring it to be escaped beforehand? It will be a string initially so I was thinking about writing a regular expression to attempt to match and escape these prior to processing, but I just want to make sure there isn't an easier way.
[ "Well, depending on how varied your HTML is, you can use single quotes in HTML fine, so you could do:\n{\n \"node\":\n {\n \"list\": \"<ul><li class='lists'>Hello World</li><ul>\"\n }\n}\n\nHowever, with simplejson, which is built into Python 2.6 as the json module, it does any escaping you need automatically:\n>>> import simplejson\n>>> simplejson.dumps({'node': {'list': '<ul><li class=\"lists\">Hello World</li><ul>'}})\n'{\"node\": {\"list\": \"<ul><li class=\\\\\"lists\\\\\">Hello World</li><ul>\"}}'\n\n", "You can have arbitrary strings there, including ones which happen to contain HTML tags (the only issue with your example is the inner \" which would confuse any parser).\n" ]
[ 7, 1 ]
[]
[]
[ "escaping", "json", "markup", "python" ]
stackoverflow_0000963448_escaping_json_markup_python.txt
Q: Tab view in CSS with tables I need a tab view in CSS with each tab showing a dynamic table. The complete table is dynamically constructed in a loop, and only after that should the tabs hide and show the table corresponding to each tab. Any suggestions?

The content of the tab is within a list item and in a loop only. The development is in Django/Python on appspot. The following code does not work with jQuery either; is there a problem somewhere?

<pre><code>
<div id="tabs">
  <ul>
  {% for poolname in poolnamelist %}
    <li><a href="#mypool{{ forloop.counter }}">
      <span>{{ poolname|escape }}</span></a></li>
  {% endfor %}
  </ul>
  {% for poolsequence in sequences %}
  <div id="mypool{{ forloop.counter }}">
    <table>
    {% for sequence in poolsequence %}
      <form action="/mypool" method="post">
      <tr><td>{{ sequence.seqdate }}</td>
        <td><input type="submit" value="ChangeDriver"/></td>
      </tr>
      </form>
    {% endfor %}
    </table>
  </div>
  {% endfor %}
</div>
</code></pre>

A: Check out jQuery UI Tabs; this will do what you're looking for. It's not possible to do this using pure CSS.

A: Just off the top of my head, check out what some of the Javascript toolkits have to offer. Things like jQuery with a few plugins or Dojo might have something like that in its Dijit library.
Tab view in CSS with tables
I need a tab view in CSS with each tab showing a dynamic table. The complete table is dynamically constructed in a loop, and only after that should the tabs hide and show the table corresponding to each tab. Any suggestions?

The content of the tab is within a list item and in a loop only. The development is in Django/Python on appspot. The following code does not work with jQuery either; is there a problem somewhere?

<pre><code>
<div id="tabs">
  <ul>
  {% for poolname in poolnamelist %}
    <li><a href="#mypool{{ forloop.counter }}">
      <span>{{ poolname|escape }}</span></a></li>
  {% endfor %}
  </ul>
  {% for poolsequence in sequences %}
  <div id="mypool{{ forloop.counter }}">
    <table>
    {% for sequence in poolsequence %}
      <form action="/mypool" method="post">
      <tr><td>{{ sequence.seqdate }}</td>
        <td><input type="submit" value="ChangeDriver"/></td>
      </tr>
      </form>
    {% endfor %}
    </table>
  </div>
  {% endfor %}
</div>
</code></pre>
[ "Check out jQuery UI Tabs; this will do what you're looking for. It's not possible to do this using pure CSS.\n", "Just of the top of my head, check out what some of the Javascript toolkits have to offer. Things like jQuery with a few plugins or Dojo might have something like that in its Dijit library.\n" ]
[ 1, 0 ]
[]
[]
[ "css", "css_tables", "html", "python", "tabs" ]
stackoverflow_0000963506_css_css_tables_html_python_tabs.txt
Q: How can I instantiate a comment element programmatically using lxml? I'm using lxml to programmatically build HTML and I need to include a custom comment in the output. Whilst there is code in lxml to cope with comments (they can be instantiated when parsing existing HTML code) I cannot find a way to instantiate one programmatically. Can anyone help?

A: You can use the lxml.etree.Comment() factory function. It will return a comment element that you can use like any other element.
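A small sketch of the factory in use; the comment text is only an example:

from lxml import etree

root = etree.Element("div")
root.append(etree.Comment("generated programmatically"))
print(etree.tostring(root))   # <div><!--generated programmatically--></div>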
How can I instantiate a comment element programmatically using lxml?
I'm using lxml to programmatically build HTML and I need to include a custom comment in the output. Whilst there is code in lxml to cope with comments (they can be instantiated when parsing existing HTML code) I cannot find a way to instantiate one programmatically. Can anyone help?
[ "You can use the lxml.etree.Comment() factory function. It will return a comment element that you can use like any other element.\n" ]
[ 6 ]
[]
[]
[ "html", "lxml", "python", "xml" ]
stackoverflow_0000963621_html_lxml_python_xml.txt
Q: AJAX console window with ANSI/VT100 support? I'm planning to write a gateway web application, which would need a "terminal window" with VT100/ANSI escape code support. Are there any AJAX based alternatives for such a task?

I'm thinking something like this: http://tryruby.hobix.com/

My preferred backend for the system is Python/Twisted/Pylons, but since I'm just planning, I will explore every option.

A: Try

AnyTerm
AjaxTerm
WebShell

A: There's also Shell In A Box.

A: AjaxTerm has a terminal, with mostly felicitous terminal emulation, done on the Python backend (it just pushes display updates to the client Javascript).

The AjaxTerm website has been down for some time, but you can still find it packaged in Debian.
AJAX console window with ANSI/VT100 support?
I'm planning to write a gateway web application, which would need a "terminal window" with VT100/ANSI escape code support. Are there any AJAX based alternatives for such a task?

I'm thinking something like this: http://tryruby.hobix.com/

My preferred backend for the system is Python/Twisted/Pylons, but since I'm just planning, I will explore every option.
[ "Try\nAnyTerm\nAjaxTerm\nWebShell\n", "There's also Shell In A Box.\n", "AjaxTerm has a terminal, with mostly felicitous terminal emulation, done on the Python backend (it just pushes display updates to the client Javascript).\nThe AjaxTerm website has been down for some time, but you can still find it packaged in Debian.\n" ]
[ 9, 7, 3 ]
[]
[]
[ "ajax", "python", "vt100" ]
stackoverflow_0000244750_ajax_python_vt100.txt
Q: Django models: how to return a default value in case of a non-existing foreign-key relationship? I am developing a vocabulary training program with Django (German-Swedish). The app's vocabulary data consists of a large number of "vocabulary cards", each of which contains one or more German words or terms that correspond to one or more Swedish terms.

Training is only available for registered users, because the app keeps track of the user's performance by saving a score for each vocabulary card. Vocabulary cards have a level (basic, advanced, expert) and any number of tags assigned to them. When a registered user starts a training, the application needs to calculate the user's average scores for each of the levels and tags, so he can make his selection.

I have solved this problem by introducing a model named CardByUser that has a score field and ForeignKey relationships to the models User and Card. Now I can use Django's aggregation functions to calculate the average scores. The big disadvantage: this works only if there is a CardByUser instance for each and every Card instance that currently exists in the DB, even if the user has only trained 100 cards.

My current solution is to create all those CardByUser instances on Card creation and when a user is registered. This is, of course, rather inefficient both in terms of database memory and of computing time (registering a user takes quite a while). And it seems quite inelegant, which kind of bugs me the most.

Is there a better way to do this? Maybe it is possible to tell Django the following when calculating the average score for a Card:

If a CardByUser for the given Card and User exists, use its score.
If the CardByUser doesn't exist, use a default value --> the score 0.

Can this be done? If so, how?

Edit: Clarification

Thanks to S.Lott for the first answer, but I think that my problem is a bit more complicated. My bad, I'm trying to clarify using some actual code from my models.

class Card(models.Model):
    entry_sv = models.CharField(max_length=200)
    entry_de = models.CharField(max_length=200)
    ... more fields ...

class CardByUser(models.Model):
    user = models.ForeignKey(User)
    card = models.ForeignKey(Card, related_name="user_cards")
    score = models.IntegerField(default=0)
    ... more fields ...

This means many CardByUser objects are related to a single Card. Now in my view code, I need to create a queryset of CardByUser objects that fulfill the following criteria:

the related Card object's tag field contains a certain string (I know that's not optimal either, but not the focus of my question...)
the user is the current user

Then I can aggregate over the scores. My current code looks like this (shortened):

user_cards = CardByUser.objects.filter(user=current_user)
    .filter(card__tags__contains=tag.name)
avg = user_cards.aggregate(Avg('score'))['score__avg']

If a CardByUser for the current user and Card does not exist, it will simply not be included in the aggregation. That's why I create all those CardByUsers with a score of 0. So how could I get rid of those?

Any ideas would be appreciated!

A: This is what methods (and perhaps properties) are for.

class OptionalFKWithDefault( models.Model ):
    another = models.ForeignKey( AnotherModel, blank=True, null=True )
    @property
    def another_score( self ):
        if self.another is None:
            return 0
        else:
            return self.another.score

A: This may not be entirely related to your question, but it looks like CardByUser really should be a many-to-many relationship with an extra field. (see http://docs.djangoproject.com/en/dev/topics/db/models/#extra-fields-on-many-to-many-relationships)

Maybe you could alter your model this way?

class Card(models.Model):
    entry_sv = models.CharField(max_length=200)
    entry_de = models.CharField(max_length=200)
    ... more fields ...
    users = models.ManyToManyField(User, through='CardByUser')

class CardByUser(models.Model):
    user = models.ForeignKey(User)
    card = models.ForeignKey(Card)
    score = models.IntegerField(default=0)

Then you won't have to explicitly create CardByUser objects, as this is all taken care of by Django. You should be able to simplify your aggregation query as well:

user_cards = Card.objects.filter(users=current_user)
    .filter(tags__contains=tag.name)
...
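For the "missing rows count as zero" average specifically, one untested sketch avoids pre-creating rows altogether: sum the scores the user actually has and divide by the number of matching cards, so untrained cards implicitly contribute 0. The tag filter is kept exactly as the poster wrote it:

from django.db.models import Sum

def average_score(current_user, tag):
    total = (CardByUser.objects
             .filter(user=current_user, card__tags__contains=tag.name)
             .aggregate(total=Sum('score'))['total']) or 0
    card_count = Card.objects.filter(tags__contains=tag.name).count()
    if not card_count:
        return 0.0
    return float(total) / card_count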
Django models: how to return a default value in case of a non-existing foreign-key relationship?
I am developing a vocabulary training program with Django (German-Swedish). The app's vocabulary data consists of a large number of "vocabulary cards", each of which contains one or more German words or terms that correspond to one or more Swedish terms.

Training is only available for registered users, because the app keeps track of the user's performance by saving a score for each vocabulary card. Vocabulary cards have a level (basic, advanced, expert) and any number of tags assigned to them. When a registered user starts a training, the application needs to calculate the user's average scores for each of the levels and tags, so he can make his selection.

I have solved this problem by introducing a model named CardByUser that has a score field and ForeignKey relationships to the models User and Card. Now I can use Django's aggregation functions to calculate the average scores. The big disadvantage: this works only if there is a CardByUser instance for each and every Card instance that currently exists in the DB, even if the user has only trained 100 cards.

My current solution is to create all those CardByUser instances on Card creation and when a user is registered. This is, of course, rather inefficient both in terms of database memory and of computing time (registering a user takes quite a while). And it seems quite inelegant, which kind of bugs me the most.

Is there a better way to do this? Maybe it is possible to tell Django the following when calculating the average score for a Card:

If a CardByUser for the given Card and User exists, use its score.
If the CardByUser doesn't exist, use a default value --> the score 0.

Can this be done? If so, how?

Edit: Clarification

Thanks to S.Lott for the first answer, but I think that my problem is a bit more complicated. My bad, I'm trying to clarify using some actual code from my models.

class Card(models.Model):
    entry_sv = models.CharField(max_length=200)
    entry_de = models.CharField(max_length=200)
    ... more fields ...

class CardByUser(models.Model):
    user = models.ForeignKey(User)
    card = models.ForeignKey(Card, related_name="user_cards")
    score = models.IntegerField(default=0)
    ... more fields ...

This means many CardByUser objects are related to a single Card. Now in my view code, I need to create a queryset of CardByUser objects that fulfill the following criteria:

the related Card object's tag field contains a certain string (I know that's not optimal either, but not the focus of my question...)
the user is the current user

Then I can aggregate over the scores. My current code looks like this (shortened):

user_cards = CardByUser.objects.filter(user=current_user)
    .filter(card__tags__contains=tag.name)
avg = user_cards.aggregate(Avg('score'))['score__avg']

If a CardByUser for the current user and Card does not exist, it will simply not be included in the aggregation. That's why I create all those CardByUsers with a score of 0. So how could I get rid of those?

Any ideas would be appreciated!
[ "This is what methods (and perhaps properties) are for.\nclass OptionalFKWithDefault( models.Model ):\n another = models.ForeignKey( AnotherModel, blank=True, null=True )\n @property\n def another_score( self ):\n if self.another is None:\n return 0\n else:\n return self.another.score\n\n", "This may not be entirely related to your question, but it looks like CardByUser really should be a many-to-many relationship with an extra field. (see http://docs.djangoproject.com/en/dev/topics/db/models/#extra-fields-on-many-to-many-relationships)\nMaybe you could alter your model this way?\nclass Card(models.Model):\n entry_sv = models.CharField(max_length=200)\n entry_de = models.CharField(max_length=200)\n ... more fields ...\n users = models.ManyToManyField(User, through='CardByUser')\n\nclass CardByUser(models.Model):\n user = models.ForeignKey(User)\n card = models.ForeignKey(Card)\n score = models.IntegerField(default=0)\n\nThen you won't have to explicitely create CardByUser objects, as this is all taken care of by Django. You should be able to simplify your aggregation query as well:\nuser_cards = Card.objects.filter(users=current_user)\n .filter(tags__contains=tag.name)\n...\n\n" ]
[ 2, 1 ]
[]
[]
[ "aggregation", "django_models", "python" ]
stackoverflow_0000955815_aggregation_django_models_python.txt
Q: Zipping dynamic files in App Engine (Python) Is there any way I can zip dynamically generated content, such as a freshly rendered html template, into a zip file using zipfile? There seem to be some examples around for zipping static content, but none for zipping dynamic ones. Or, is it not possible at all?

One more question: Is it possible to create a zip file with a bunch of sub-folders inside it?

Thanks.

A: The working code (for app engine):

output = StringIO.StringIO()
z = zipfile.ZipFile(output,'w')
my_data = "<html><body><p>Hello, world!</p></body></html>"
z.writestr("hello.html", my_data)
z.close()

self.response.headers["Content-Type"] = "multipart/x-zip"
self.response.headers['Content-Disposition'] = "attachment; filename=test.zip"
self.response.out.write(output.getvalue())

Thanks again to Schnouki and Ryan.

A: You can add whatever you want to a zip file using ZipFile.writestr():

my_data = "<html><body><p>Hello, world!</p></body></html>"
z.writestr("hello.html", my_data)

You can also use sub-folders using / (or os.sep) as a separator:

z.writestr("site/foo/hello/index.html", my_data)

A: In addition to Schnouki's excellent answer, you can also pass ZipFile a file-like object, such as one created by StringIO.StringIO.
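Putting the answers together, a sketch that writes several dynamically generated pages into sub-folders of one in-memory archive; the pages dict stands in for real template rendering:

import zipfile
import StringIO

pages = {'index': '<h1>Home</h1>', 'about': '<h1>About</h1>'}

output = StringIO.StringIO()
z = zipfile.ZipFile(output, 'w')
for name, html in pages.items():
    # forward slashes create sub-folders inside the archive
    z.writestr('site/%s.html' % name, html)
z.close()
zip_bytes = output.getvalue()   # hand this to the HTTP response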
Zipping dynamic files in App Engine (Python)
Is there any way I can zip dynamically generated content, such as a freshly rendered html template, into a zip file using zipfile? There seem to be some examples around for zipping static content, but none for zipping dynamic ones. Or, is it not possible at all?

One more question: Is it possible to create a zip file with a bunch of sub-folders inside it?

Thanks.
[ "The working code: (for app engine:)\noutput = StringIO.StringIO()\nz = zipfile.ZipFile(output,'w')\nmy_data = \"<html><body><p>Hello, world!</p></body></html>\"\nz.writestr(\"hello.html\", my_data)\nz.close()\n\nself.response.headers[\"Content-Type\"] = \"multipart/x-zip\"\nself.response.headers['Content-Disposition'] = \"attachment; filename=test.zip\"\nself.response.out.write(output.getvalue())\n\nThanks again to Schnouki and Ryan.\n", "You can add whatever you want to a zip file using ZipFile.writestr():\nmy_data = \"<html><body><p>Hello, world!</p></body></html>\"\nz.writestr(\"hello.html\", my_data)\n\nYou can also use sub-folders using / (or os.sep) as a separator:\nz.writestr(\"site/foo/hello/index.html\", my_data)\n\n", "In addition to Schnouki's excellent answer, you can also pass ZipFile a file-like object, such as one created by StringIO.StringIO.\n" ]
[ 14, 7, 3 ]
[]
[]
[ "google_app_engine", "python", "zip" ]
stackoverflow_0000963800_google_app_engine_python_zip.txt
Q: Patching classes in Python Suppose I have a Python class that I want to add an extra property to. Is there any difference between

from path import MyClass

MyClass.foo = bar

and using something like:

from path import MyClass

setattr(MyClass, 'foo', bar)

? If not, why do people seem to do the second rather than the first? (E.g. here http://concisionandconcinnity.blogspot.com/2008/10/chaining-monkey-patches-in-python.html )

A: The statements are equivalent, but setattr might be used because it's the most dynamic choice of the two (with setattr you can use a variable for the attribute name.)

See: http://docs.python.org/library/functions.html#setattr
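A sketch of the dynamic case the answer alludes to, where attribute names only exist as strings at runtime; the class and values are made up:

class MyClass(object):
    pass

defaults = {'foo': 1, 'bar': 2}
for name, value in defaults.items():
    setattr(MyClass, name, value)   # name is a plain string

assert MyClass.foo == 1 and MyClass.bar == 2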
Patching classes in Python
Suppose I have a Python class that I want to add an extra property to. Is there any difference between

from path import MyClass

MyClass.foo = bar

and using something like:

from path import MyClass

setattr(MyClass, 'foo', bar)

? If not, why do people seem to do the second rather than the first? (E.g. here http://concisionandconcinnity.blogspot.com/2008/10/chaining-monkey-patches-in-python.html )
[ "The statements are equivalent, but setattr might be used because it's the most dynamic choice of the two (with setattr you can use a variable for the attribute name.)\nSee: http://docs.python.org/library/functions.html#setattr\n" ]
[ 11 ]
[]
[]
[ "class", "monkeypatching", "python" ]
stackoverflow_0000964532_class_monkeypatching_python.txt
Q: Find a HAL object based on /dev node path I'm using python-dbus to interface with HAL, and I need to find a device's UDI based on its path in the /dev hierarchy. So given a path such as /dev/sdb, I want to get a value back like /org/freedesktop/Hal/devices/usb_device_10.

A: Pure python solution:

import dbus
bus = dbus.SystemBus()
obj = bus.get_object("org.freedesktop.Hal", "/org/freedesktop/Hal/Manager")
iface = dbus.Interface(obj, "org.freedesktop.Hal.Manager")
print iface.FindDeviceStringMatch("block.device", "/dev/sda")

A: I would spawn a hal-find-by-property call from Python:

import subprocess
def get_UDI(path):
    cmd = 'hal-find-by-property --key block.device --string %s' % path
    proc = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
    output = proc.communicate()
    # stdout
    return output[0].strip()

print get_UDI('/dev/sdb') # /org/freedesktop/Hal/devices/xxxxxx
Find a HAL object based on /dev node path
I'm using python-dbus to interface with HAL, and I need to find a device's UDI based on its path in the /dev hierarchy. So given a path such as /dev/sdb, I want to get a value back like /org/freedesktop/Hal/devices/usb_device_10.
[ "Pure python solution:\nimport dbus\nbus = dbus.SystemBus()\nobj = bus.get_object(\"org.freedesktop.Hal\", \"/org/freedesktop/Hal/Manager\")\niface = dbus.Interface(obj, \"org.freedesktop.Hal.Manager\")\nprint iface.FindDeviceStringMatch(\"block.device\", \"/dev/sda\")\n\n", "I would spawn a hal-find-by-property call from Python:\nimport subprocess\ndef get_UDI(path):\n cmd = 'hal-find-by-property --key block.device --string %s' % path\n proc = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)\n output = proc.communicate()\n # stdout\n return output[0].strip()\n\nprint get_UDI('/dev/sdb') # /org/freedesktop/Hal/devices/xxxxxx\n\n" ]
[ 3, 1 ]
[]
[]
[ "dbus", "hal", "python" ]
stackoverflow_0000964801_dbus_hal_python.txt
Q: Python: Reading part of a text file Hi all, I'm new to Python and programming. I need to read in chunks of a large text file; the format looks like the following:

<word id="8" form="hibernis" lemma="hibernus1" postag="n-p---nb-" head-"7" relation="ADV"/>

I need the form, lemma and postag information, e.g. for the above I need hibernis, hibernus1 and n-p---nb-.

How do I tell python to read until it reaches form, to read forward until it reaches the quote mark " and then read the information between the quote marks "hibernis"? Really struggling with this.

My attempts so far have been to remove the punctuation, split the sentence and then pull the info I need from a list. Having trouble getting python to iterate over the whole file though; I can only get this working for 1 line. My code is below:

f=open('blank.txt','r')
quotes=f.read()
noquotes=quotes.replace('"','')
f.close()

rf=open('blank.txt','w')
rf.write(noquotes)
rf.close()

f=open('blank.txt','r')
finished = False
postag=[]
while not finished:
    line=f.readline()
    words=line.split()
    postag.append(words[4])
    postag.append(words[6])
    postag.append(words[8])
    finished=True

Would appreciate any feedback/criticisms, thanks.

A: If it's XML, use ElementTree to parse it:

from xml.etree import ElementTree

line = '<word id="8" form="hibernis" lemma="hibernus1" postag="n-p---nb-" head="7" relation="ADV"/>'

element = ElementTree.fromstring(line)

For each XML element you can easily extract the name and all the attributes:

>>> element.tag
'word'
>>> element.attrib
{'head': '7', 'form': 'hibernis', 'postag': 'n-p---nb-', 'lemma': 'hibernus1', 'relation': 'ADV', 'id': '8'}

So if you have a document with a bunch of word XML elements, something like this will extract the information you want from each one:

from xml.etree import ElementTree

XML = '''
<words>
    <word id="8" form="hibernis" lemma="hibernus1" postag="n-p---nb-" head="7" relation="ADV"/>
</words>'''

root = ElementTree.fromstring(XML)

for element in root.findall('word'):
    form = element.attrib['form']
    lemma = element.attrib['lemma']
    postag = element.attrib['postag']

    print form, lemma, postag

Use parse() instead of fromstring() if you only have a filename.

A: I'd suggest using the regular expression module: re

Something along these lines perhaps?

#!/usr/bin/python
import re

if __name__ == '__main__':
    data = open('x').read()
    RE = re.compile('.*form="(.*)" lemma="(.*)" postag="(.*?)"', re.M)
    matches = RE.findall(data)
    for m in matches:
        print m

This does assume that the <word ...> lines are each on a single line and that each part is in that exact order, and that you don't need to deal with full xml parsing.

A: Is your file proper XML? If so, try a SAX parser:

import xml.sax
class Handler (xml.sax.ContentHandler):
    def startElement (self, tag, attrs):
        if tag == 'word':
            print 'form=', attrs['form']
            print 'lemma=',attrs['lemma']
            print 'postag=',attrs['postag']

ch = Handler ()
f = open ('myfile')
xml.sax.parse (f, ch)

(this is rough .. it may not be entirely correct).

A: In addition to the usual RegEx answer, since this appears to be a form of XML, you might try something like BeautifulSoup ( http://www.crummy.com/software/BeautifulSoup/ )

It's very easy to use, and find tags/attributes in things like HTML/XML, even if they're not "well formed". Might be worth a look.

A: Parsing xml by hand is usually the wrong thing. For one thing, your code will break if there's an escaped quote in any of the attributes. Getting the attributes from an xml parser is probably cleaner and less error-prone.

An approach like this can also run into problems parsing the entire file if you have lines that don't match the format. You can deal with this either by creating a parseline method (something like

def parse (line):
    try:
        return parsed values here
    except:
        return None   # skip lines that don't match

You can also simplify this with filter and map functions:

lines = filter( lambda line: parseable(line), f.readlines())
values = map (parse, lines)

A: Just to highlight your problem:

finished = False
counter = 0
while not finished:
    counter += 1
    finished=True
print counter

A: With regular expressions, this is the gist (you can do the file.readline() part):

import re
line = '<word id="8" form="hibernis" lemma="hibernus1" postag="n-p---nb-" head-"7" relation="ADV"/>'
r = re.compile( 'form="([^"]*)".*lemma="([^"]*)".*postag="([^"]*)"' )
match = r.search( line )
print match.groups()

>>> 
('hibernis', 'hibernus1', 'n-p---nb-')
>>> 

A: First, don't spend a lot of time rewriting your file. It's generally a waste of time. The processing to clean up and parse the tags is so fast that you'll be perfectly happy working from the source file all the time.

source= open( "blank.txt", "r" )
for line in source:
    # line has a tag-line structure
    # <word id="8" form="hibernis" lemma="hibernus1" postag="n-p---nb-" head-"7" relation="ADV"/>
    # Assumption -- no spaces in the quoted strings.
    parts = line.split()
    # parts is [ '<word', 'id="8"', 'form="hibernis"', ... ]
    assert parts[0] == "<word"
    nameValueList = [ part.partition('=') for part in parts[1:] ]
    # nameValueList is [ ('id','=','"8"'), ('form','=','"hibernis"'), ... ]
    attrs = dict( (n,eval(v)) for n, _, v in nameValueList )
    # attrs is { 'id':'8', 'form':'hibernis', ... }
    print attrs['form'], attrs['lemma'], attrs['postag']

A: wow, you guys are fast :)

If you want all attributes of a list (and the ordering is known), then you can use something like this:

import re
print re.findall('"(.+?)"',INPUT)

INPUT is a line like:

<word id="8" form="hibernis" lemma="hibernus1" postag="n-p---nb-" head="7" relation="ADV"/>

and the printed list is:

['8', 'hibernis', 'hibernus1', 'n-p---nb-', '7', 'ADV']
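Since the original goal was a large file, a hedged sketch using ElementTree's streaming iterparse; it assumes the data is (or is wrapped as) well-formed XML, and cElementTree is the faster C implementation shipped with Python 2.5+:

from xml.etree import cElementTree as ElementTree

results = []
for event, elem in ElementTree.iterparse(open('words.xml')):
    if elem.tag == 'word':
        results.append((elem.get('form'), elem.get('lemma'), elem.get('postag')))
        elem.clear()   # discard the parsed element so memory use stays flat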
Python: Reading part of a text file
Hi all, I'm new to Python and programming. I need to read in chunks of a large text file; the format looks like the following:

<word id="8" form="hibernis" lemma="hibernus1" postag="n-p---nb-" head-"7" relation="ADV"/>

I need the form, lemma and postag information, e.g. for the above I need hibernis, hibernus1 and n-p---nb-.

How do I tell python to read until it reaches form, to read forward until it reaches the quote mark " and then read the information between the quote marks "hibernis"? Really struggling with this.

My attempts so far have been to remove the punctuation, split the sentence and then pull the info I need from a list. Having trouble getting python to iterate over the whole file though; I can only get this working for 1 line. My code is below:

f=open('blank.txt','r')
quotes=f.read()
noquotes=quotes.replace('"','')
f.close()

rf=open('blank.txt','w')
rf.write(noquotes)
rf.close()

f=open('blank.txt','r')
finished = False
postag=[]
while not finished:
    line=f.readline()
    words=line.split()
    postag.append(words[4])
    postag.append(words[6])
    postag.append(words[8])
    finished=True

Would appreciate any feedback/criticisms, thanks.
[ "If it's XML, use ElementTree to parse it:\nfrom xml.etree import ElementTree\n\nline = '<word id=\"8\" form=\"hibernis\" lemma=\"hibernus1\" postag=\"n-p---nb-\" head=\"7\" relation=\"ADV\"/>'\n\nelement = ElementTree.fromstring(line)\n\nFor each XML element you can easily extract the name and all the attributes:\n>>> element.tag\n'word'\n>>> element.attrib\n{'head': '7', 'form': 'hibernis', 'postag': 'n-p---nb-', 'lemma': 'hibernus1', 'relation': 'ADV', 'id': '8'}\n\nSo if you have a document with a bunch of word XML elements, something like this will extract the information you want from each one:\nfrom xml.etree import ElementTree\n\nXML = '''\n<words>\n <word id=\"8\" form=\"hibernis\" lemma=\"hibernus1\" postag=\"n-p---nb-\" head=\"7\" relation=\"ADV\"/>\n</words>'''\n\nroot = ElementTree.fromstring(XML)\n\nfor element in root.findall('word'):\n form = element.attrib['form']\n lemma = element.attrib['lemma']\n postag = element.attrib['postag']\n\n print form, lemma, postag\n\nUse parse() instead of fromstring() if you only have a filename.\n", "I'd suggest using the regular expression module: re\nSomething along these lines perhaps?\n#!/usr/bin/python\nimport re\n\nif __name__ == '__main__':\n data = open('x').read()\n RE = re.compile('.*form=\"(.*)\" lemma=\"(.*)\" postag=\"(.*?)\"', re.M)\n matches = RE.findall(data)\n for m in matches:\n print m\n\nThis does assume that the <word ...> lines are each on a single line and that each part is in that exact order, and that you don't need to deal with full xml parsing.\n", "Is your file proper XML? If so, try a SAX parser:\nimport xml.sax\nclass Handler (xml.sax.ContentHandler):\n def startElement (self, tag, attrs):\n if tag == 'word':\n print 'form=', attrs['form']\n print 'lemma=',attrs['lemma']\n print 'postag=',attrs['postag']\n\nch = Handler ()\nf = open ('myfile')\nxml.sax.parse (f, ch)\n\n(this is rough .. it may not be entirely correct).\n", "In addition to the usual RegEx answer, since this appears to be a form of XML, you might try something like BeautifulSoup ( http://www.crummy.com/software/BeautifulSoup/ )\nIt's very easy to use, and find tags/attributes in things like HTML/XML, even if they're not \"well formed\". Might be worth a look. \n", "Parsing xml by hand is usually the\n wrong thing. For one thing, your code\n will break if there's an escaped\n quote in any of the attributes.\n Getting the attributes from an xml\n parser is probably cleaner and less\n error-prone.\nAn approach like this can also run into problems parsing the entire file if you have lines that don't match the format. You can deal with this either by creating a parseline method (something like\ndef parse (line):\n try: \n return parsed values here\n except: \n\nYou can also simplify this with filter and map functions:\nlines = filter( lambda line: parseable(line), f.readlines())\nvalues = map (parse, lines)\n\n", "Just to highlight your problem:\nfinished = False\ncounter = 0\nwhile not finished:\n counter += 1\n finished=True\nprint counter\n\n", "With regular expressions, this is the gist (you can do the file.readline() part):\nimport re\nline = '<word id=\"8\" form=\"hibernis\" lemma=\"hibernus1\" postag=\"n-p---nb-\" head-\"7\" relation=\"ADV\"/>'\nr = re.compile( 'form=\"([^\"]*)\".*lemma=\"([^\"]*)\".*postag=\"([^\"]*)\"' )\nmatch = r.search( line )\nprint match.groups()\n\n>>> \n('hibernis', 'hibernus1', 'n-p---nb-')\n>>> \n\n", "First, don't spend a lot of time rewriting your file. It's generally a waste of time. 
The processing to clean up and parse the tags is so fast, that you'll be perfectly happy working from the source file all the time.\nsource= open( \"blank.txt\", \"r\" )\nfor line in source:\n # line has a tag-line structure\n # <word id=\"8\" form=\"hibernis\" lemma=\"hibernus1\" postag=\"n-p---nb-\" head-\"7\" relation=\"ADV\"/>\n # Assumption -- no spaces in the quoted strings.\n parts = line.split()\n # parts is [ '<word', 'id=\"8\"', 'form=\"hibernis\"', ... ]\n assert parts[0] == \"<word\"\n nameValueList = [ part.partition('=') for part in parts[1:] ]\n # nameValueList is [ ('id','=','\"8\"'), ('form','=','\"hibernis\"'), ... ]\n attrs = dict( (n,eval(v)) for n, _, v in nameValueList )\n # attrs is { 'id':'8', 'form':'hibernis', ... }\n print attrs['form'], attrs['lemma'], attrs['posttag']\n\n", "wow, you guys are fast :)\nIf you want all attributes of a list (and the ordering is known), then you can use something like this:\nimport re\nprint re.findall('\"(.+?)\"',INPUT)\n\nINPUT is a line like:\n<word id=\"8\" form=\"hibernis\" lemma=\"hibernus1\" postag=\"n-p---nb-\" head=\"7\" relation=\"ADV\"/>\n\nand the printed list is:\n['8', 'hibernis', 'hibernus1', 'n-p---nb-', '7', 'ADV']\n\n" ]
[ 5, 2, 1, 1, 0, 0, 0, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0000964993_python.txt
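A note on scale, since the question above mentions a large file: ElementTree can also stream the <word/> elements instead of loading the whole document at once. The sketch below is an assumption-laden addendum, not part of the original answers — it assumes the file is (or is wrapped as) well-formed XML, i.e. the head-"7" typo is corrected to head="7", and 'words.xml' is a hypothetical filename.

from xml.etree.ElementTree import iterparse

# 'words.xml' is a hypothetical well-formed XML file of <word/> elements
for event, elem in iterparse('words.xml'):
    if elem.tag == 'word':
        print elem.get('form'), elem.get('lemma'), elem.get('postag')
        elem.clear()  # discard the element once read, keeping memory flat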
Q: Python - simple reading lines from a pipe I'm trying to read lines from a pipe and process them, but I'm doing something silly and I can't figure out what. The producer is going to keep producing lines indefinitely, like this: producer.py import time while True: print 'Data' time.sleep(1) The consumer just needs to check for lines periodically: consumer.py import sys, time while True: line = sys.stdin.readline() if line: print 'Got data:', line else: time.sleep(1) When I run this in the Windows shell as python producer.py | python consumer.py, it just sleeps forever (never seems to get data?) It seems that maybe the problem is that the producer never terminates, since if I send a finite amount of data then it works fine. How can I get the data to be received and show up for the consumer? In the real application, the producer is a C++ program I have no control over. A: Some old versions of Windows simulated pipes through files (so they were prone to such problems), but that hasn't been a problem in 10+ years. Try adding a sys.stdout.flush() to the producer after the print, and also try to make the producer's stdout unbuffered (by using python -u). Of course this doesn't help if you have no control over the producer -- if it buffers too much of its output you're still going to wait a long time. Unfortunately - while there are many approaches to solve that problem on Unix-like operating systems, such as pyexpect, pexpect, exscript, and paramiko, I doubt any of them works on Windows; if that's indeed the case, I'd try Cygwin, which puts enough of a Linux-like veneer on Windows as to often enable the use of Linux-like approaches on a Windows box. A: This is about I/O that is bufferized by default with Python. Pass -u option to the interpreter to disable this behavior: python -u producer.py | python consumer.py It fixes the problem for me.
Python - simple reading lines from a pipe
I'm trying to read lines from a pipe and process them, but I'm doing something silly and I can't figure out what. The producer is going to keep producing lines indefinitely, like this: producer.py import time while True: print 'Data' time.sleep(1) The consumer just needs to check for lines periodically: consumer.py import sys, time while True: line = sys.stdin.readline() if line: print 'Got data:', line else: time.sleep(1) When I run this in the Windows shell as python producer.py | python consumer.py, it just sleeps forever (never seems to get data?) It seems that maybe the problem is that the producer never terminates, since if I send a finite amount of data then it works fine. How can I get the data to be received and show up for the consumer? In the real application, the producer is a C++ program I have no control over.
[ "Some old versions of Windows simulated pipes through files (so they were prone to such problems), but that hasn't been a problem in 10+ years. Try adding a\n sys.stdout.flush()\n\nto the producer after the print, and also try to make the producer's stdout unbuffered (by using python -u).\nOf course this doesn't help if you have no control over the producer -- if it buffers too much of its output you're still going to wait a long time.\nUnfortunately - while there are many approaches to solve that problem on Unix-like operating systems, such as pyexpect, pexpect, exscript, and paramiko, I doubt any of them works on Windows; if that's indeed the case, I'd try Cygwin, which puts enough of a Linux-like veneer on Windows as to often enable the use of Linux-like approaches on a Windows box.\n", "This is about I/O that is bufferized by default with Python. Pass -u option to the interpreter to disable this behavior:\npython -u producer.py | python consumer.py\n\nIt fixes the problem for me.\n" ]
[ 15, 7 ]
[]
[]
[ "pipe", "producer_consumer", "python" ]
stackoverflow_0000965210_pipe_producer_consumer_python.txt
Q: Python App Engine projects with sophisticated user-role-permission structures In followup to an earlier question, I'd be interested to know whether anyone can recommend some open-source Python-based Google App Engine projects with complex user-role-permission models to consult as a reference. A link to the code would be nice. In my own project, I'd like to add a layer of organizations in addition to the usual roles and permissions, e.g., users are members of one ore more organizations, and their roles are relative to the organizations. A lot like an issue tracker where there is a many-to-many relationship between users and projects. A: App-Engine-Patch ports the django permission model over to AppEngine. Scroll down to the Permissions section of this page: http://code.google.com/p/app-engine-patch/wiki/GettingStarted. The source code is available from that site as well.
Python App Engine projects with sophisticated user-role-permission structures
In follow-up to an earlier question, I'd be interested to know whether anyone can recommend some open-source Python-based Google App Engine projects with complex user-role-permission models to consult as a reference. A link to the code would be nice. In my own project, I'd like to add a layer of organizations in addition to the usual roles and permissions, e.g., users are members of one or more organizations, and their roles are relative to the organizations. A lot like an issue tracker where there is a many-to-many relationship between users and projects.
[ "App-Engine-Patch ports the django permission model over to AppEngine. Scroll down to the Permissions section of this page: http://code.google.com/p/app-engine-patch/wiki/GettingStarted. The source code is available from that site as well.\n" ]
[ 3 ]
[]
[]
[ "google_app_engine", "python", "roles" ]
stackoverflow_0000960125_google_app_engine_python_roles.txt
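Since the question above asks specifically about the organization layer, here is one hypothetical shape for it — not taken from app-engine-patch, just a sketch expressed as Django models (which app-engine-patch exposes): a many-to-many membership table that carries the user's role relative to each organization. All names below are illustrative assumptions.

from django.db import models
from django.contrib.auth.models import User

class Organization(models.Model):
    name = models.CharField(max_length=100)

class Membership(models.Model):
    # one row per (user, organization) pair; the role is scoped to the org
    user = models.ForeignKey(User)
    organization = models.ForeignKey(Organization)
    role = models.CharField(max_length=30)  # e.g. 'owner', 'member'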
Q: Nokia N95 and PyS60 with the sensor and xprofile modules I've made a python script which should modify the profile of the phone based on the phone position. Runned under ScriptShell it works great. The problem is that it hangs, both with the "sis" script runned upon "boot up", as well as without it. So my question is what is wrong with the code, and also whether I need to pass special parameters to ensymble? import appuifw, e32, sensor, xprofile from appuifw import * old_profil = xprofile.get_ap() def get_sensor_data(status): #decide profile def exit_key_handler(): # Disconnect from the sensor and exit acc_sensor.disconnect() app_lock.signal() app_lock = e32.Ao_lock() appuifw.app.exit_key_handler = exit_key_handler appuifw.app.title = u"Acc Silent" appuifw.app.menu = [(u'Close', app_lock.signal)] appuifw.app.body = Canvas() # Retrieve the acceleration sensor sensor_type= sensor.sensors()['AccSensor'] # Create an acceleration sensor object acc_sensor= sensor.Sensor(sensor_type['id'],sensor_type['category']) # Connect to the sensor acc_sensor.connect(get_sensor_data) # Wait for sensor data and the exit event app_lock.wait() The script starts at boot, using ensymble and my developer certificate. Thanks in advance A: I often use something like that at the top of my scripts: import os.path, sys PY_PATH = None for p in ['c:\\Data\\Python', 'e:\\Data\\Python','c:\\Python','e:\\Python']: if os.path.exists(p): PY_PATH = p break if PY_PATH and PY_PATH not in sys.path: sys.path.append(PY_PATH) A: xprofile is not a standard library, make sure you add path to it. My guess is that when run as SIS, it doesn't find xprofile and hangs up. When releasing your SIS, either instruct that users install that separately or include inside your SIS. Where would you have it installed, use that path. Here's python default directory as sample: # PyS60 1.9.x and above sys.path.append('c:\\Data\\Python') sys.path.append('e:\\Data\\Python') # Pys60 1.4.x or below sys.path.append('c:\\Python') sys.path.append('e:\\Python') Btw make clean exit, do this: appuifw.app.menu = [(u'Close', exit_key_handler)]
Nokia N95 and PyS60 with the sensor and xprofile modules
I've made a Python script which should modify the profile of the phone based on the phone position. Run under ScriptShell it works great. The problem is that it hangs, both with the "sis" script run upon "boot up" and without it. So my question is what is wrong with the code, and also whether I need to pass special parameters to ensymble? import appuifw, e32, sensor, xprofile from appuifw import * old_profil = xprofile.get_ap() def get_sensor_data(status): #decide profile def exit_key_handler(): # Disconnect from the sensor and exit acc_sensor.disconnect() app_lock.signal() app_lock = e32.Ao_lock() appuifw.app.exit_key_handler = exit_key_handler appuifw.app.title = u"Acc Silent" appuifw.app.menu = [(u'Close', app_lock.signal)] appuifw.app.body = Canvas() # Retrieve the acceleration sensor sensor_type= sensor.sensors()['AccSensor'] # Create an acceleration sensor object acc_sensor= sensor.Sensor(sensor_type['id'],sensor_type['category']) # Connect to the sensor acc_sensor.connect(get_sensor_data) # Wait for sensor data and the exit event app_lock.wait() The script starts at boot, using ensymble and my developer certificate. Thanks in advance
[ "I often use something like that at the top of my scripts:\nimport os.path, sys\nPY_PATH = None\nfor p in ['c:\\\\Data\\\\Python', 'e:\\\\Data\\\\Python','c:\\\\Python','e:\\\\Python']:\n if os.path.exists(p): \n PY_PATH = p\n break\nif PY_PATH and PY_PATH not in sys.path: sys.path.append(PY_PATH)\n\n", "xprofile is not a standard library, make sure you add path to it. My guess is that when run as SIS, it doesn't find xprofile and hangs up. When releasing your SIS, either instruct that users install that separately or include inside your SIS.\nWhere would you have it installed, use that path. Here's python default directory as sample:\n\n # PyS60 1.9.x and above\n sys.path.append('c:\\\\Data\\\\Python')\n sys.path.append('e:\\\\Data\\\\Python')\n # Pys60 1.4.x or below\n sys.path.append('c:\\\\Python')\n sys.path.append('e:\\\\Python')\n\nBtw make clean exit, do this:\n\n appuifw.app.menu = [(u'Close', exit_key_handler)]\n\n" ]
[ 3, 2 ]
[]
[]
[ "nokia", "pys60", "python", "s60", "symbian" ]
stackoverflow_0000927150_nokia_pys60_python_s60_symbian.txt
Q: how to remove text between using python? how to remove text between <script> and </script> using python? A: You can use BeautifulSoup with this (and other) methods: soup = BeautifulSoup(source.lower()) to_extract = soup.findAll('script') for item in to_extract: item.extract() This actually removes the nodes from the HTML. If you wanted to leave the empty <script></script> tags you'll have to work with the item attributes rather than just extracting it from the soup. A: Are you trying to prevent XSS? Just eliminating the <script> tags will not solve all possible attacks! Here's a great list of the many ways (some of them very creative) that you could be vulnerable http://ha.ckers.org/xss.html. After reading this page you should understand why just elimintating the <script> tags using a regular expression is not robust enough. The python library lxml has a function that will robustly clean your HTML to make it safe to display. If you are sure that you just want to eliminate the <script> tags this code in lxml should work: from lxml.html import parse root = parse(filename_or_url).getroot() for element in root.iter("script"): element.drop_tree() Note: I downvoted all the solutions using regular expresions. See here why you shouldn't parse HTML using regular expressions: Using regular expressions to parse HTML: why not? Note 2: Another SO question showing HTML that is impossible to parse with regular expressions: Can you provide some examples of why it is hard to parse XML and HTML with a regex? A: According to answers posted by Pev and wr, why not to upgrade a regular expression, e.g.: pattern = r"(?is)<script[^>]*>(.*?)</script>" text = """<script>foo bar baz bar foo </script>""" re.sub(pattern, '', text) (?is) - added to ignore case and allow new lines in text. This version should also support script tags with attributes. EDIT: I can't add any comments yet, so I'm just editing my answer. I totally agree with the comment below, regexps are totally wrong for such tasks and b. soup ot lxml are a lot better. But question asked gave just a simple example and regexps should be enough for such simple task. Using Beautiful Soup for a simple text removing could just be too much (overload? I don't how to express what I mean, excuse my english). BTW I made a mistake, the code should look like this: pattern = r"(?is)(<script[^>]*>)(.*?)(</script>)" text = """<script>foo bar baz bar foo </script>""" re.sub(pattern, '\1\3', text) A: You can do this with the HTMLParser module (complicated) or use regular expressions: import re content = "asdf <script> bla </script> end" x=re.search("<script>.*?</script>", content, re.DOTALL) span = x.span() # gives (5, 27) stripped_content = content[:span[0]] + content[span[1]:] EDIT: re.DOTALL, thanks to tgray A: If you're removing everything between <script> and </script> why not just remove the entire node? Are you expecting a resig-style src and body? A: If you don't want to import any modules: string = "<script> this is some js. begone! </script>" string = string.split(' ') for i, s in enumerate(string): if s == '<script>' or s == '</script>' : del string[i] print ' '.join(string) A: Element Tree is the best simplest and sweetest package to do this. Yes, there are other ways to do it too; but don't use any 'coz they suck! (via Mark Pilgrim)
how to remove text between <script> and </script> using python?
how to remove text between <script> and </script> using python?
[ "You can use BeautifulSoup with this (and other) methods:\nsoup = BeautifulSoup(source.lower())\nto_extract = soup.findAll('script')\nfor item in to_extract:\n item.extract()\n\nThis actually removes the nodes from the HTML. If you wanted to leave the empty <script></script> tags you'll have to work with the item attributes rather than just extracting it from the soup.\n", "Are you trying to prevent XSS? Just eliminating the <script> tags will not solve all possible attacks! Here's a great list of the many ways (some of them very creative) that you could be vulnerable http://ha.ckers.org/xss.html. After reading this page you should understand why just elimintating the <script> tags using a regular expression is not robust enough. The python library lxml has a function that will robustly clean your HTML to make it safe to display.\nIf you are sure that you just want to eliminate the <script> tags this code in lxml should work:\nfrom lxml.html import parse\n\nroot = parse(filename_or_url).getroot()\nfor element in root.iter(\"script\"):\n element.drop_tree()\n\nNote: I downvoted all the solutions using regular expresions. See here why you shouldn't parse HTML using regular expressions: Using regular expressions to parse HTML: why not?\nNote 2: Another SO question showing HTML that is impossible to parse with regular expressions: Can you provide some examples of why it is hard to parse XML and HTML with a regex?\n", "According to answers posted by Pev and wr, why not to upgrade a regular expression, e.g.:\npattern = r\"(?is)<script[^>]*>(.*?)</script>\"\ntext = \"\"\"<script>foo bar \nbaz bar foo </script>\"\"\"\nre.sub(pattern, '', text)\n\n(?is) - added to ignore case and allow new lines in text. This version should also support script tags with attributes.\nEDIT: I can't add any comments yet, so I'm just editing my answer. I totally agree with the comment below, regexps are totally wrong for such tasks and b. soup ot lxml are a lot better. But question asked gave just a simple example and regexps should be enough for such simple task. Using Beautiful Soup for a simple text removing could just be too much (overload? I don't how to express what I mean, excuse my english).\nBTW I made a mistake, the code should look like this:\npattern = r\"(?is)(<script[^>]*>)(.*?)(</script>)\"\ntext = \"\"\"<script>foo bar \nbaz bar foo </script>\"\"\"\nre.sub(pattern, '\\1\\3', text)\n\n", "You can do this with the HTMLParser module (complicated) or use regular expressions:\nimport re\ncontent = \"asdf <script> bla </script> end\"\nx=re.search(\"<script>.*?</script>\", content, re.DOTALL)\nspan = x.span() # gives (5, 27)\n\nstripped_content = content[:span[0]] + content[span[1]:]\n\nEDIT: re.DOTALL, thanks to tgray\n", "If you're removing everything between <script> and </script> why not just remove the entire node? \nAre you expecting a resig-style src and body?\n", "If you don't want to import any modules:\nstring = \"<script> this is some js. begone! </script>\"\n\nstring = string.split(' ')\n\nfor i, s in enumerate(string):\n if s == '<script>' or s == '</script>' :\n del string[i]\n\nprint ' '.join(string)\n\n", "Element Tree is the best simplest and sweetest package to do this. Yes, there are other ways to do it too; but don't use any 'coz they suck! (via Mark Pilgrim)\n" ]
[ 27, 6, 1, 0, 0, 0, 0 ]
[ "I don't know Python good enough to tell you a solution. But if you want to use that to sanitize the user input you have to be very, very careful. Removing stuff between and just doesn't catch everything. Maybe you can have a look at existing solutions (I assume Django includes something like this).\n", "example_text = \"This is some text <script> blah blah blah </script> this is some more text.\"\n\nimport re\nmyre = re.compile(\"(^.*)<script>(.*)</script>(.*$)\")\nresult = myre.match(example_text)\nresult.groups()\n <52> ('This is some text ', ' blah blah blah ', ' this is some more text.')\n\n# Text between <script> .. </script>\nresult.group(2)\n <56> 'blah blah blah'\n\n# Text outside of <script> .. </script>\nresult.group(1)+result.group(3)\n <57> 'This is some text this is some more text.'\n\n" ]
[ -1, -1 ]
[ "javascript", "python" ]
stackoverflow_0000964459_javascript_python.txt
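A string-in/string-out variant of the lxml answer above may also be useful when the markup arrives as text rather than a file. This is a sketch under the assumption that the input parses as an HTML fragment; the function name is hypothetical.

import lxml.html

def strip_scripts(markup):
    # parse the fragment, drop every <script> subtree, serialize back
    root = lxml.html.fromstring(markup)
    for element in root.iter('script'):
        element.drop_tree()  # removes the node and its text
    return lxml.html.tostring(root)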
Q: Can't seem to get list() working I am trying to convert a set object to a list... for example "p=list('abc')" is not working. Any ideas, or is it inherent in App Engine? A: A set object to list is converted like so: my_list = list(my_set) I don't understand your example though. Converting a string to a list results in a list of characters: >>> list('abc') ['a', 'b', 'c'] A: if the list() command is not working for you, you could work around it like this: my_list = [] for item in my_set: my_list.append(item) hth A: There is no specific change "inherent" in appengine with respect to common aspects like lists. It is as just the same, plain python.
Can't seem to get list() working
I am trying to convert a set object to a list... for example "p=list('abc')" is not working. Any ideas, or is it inherent in App Engine?
[ "A set object to list is converted like so:\nmy_list = list(my_set)\n\nI don't understand your example though. Converting a string to a list results in a list of characters:\n>>> list('abc')\n['a', 'b', 'c']\n\n", "if the list() command is not working for you, you could work around it like this:\nmy_list = []\nfor item in my_set:\n my_list.append(item)\n\nhth\n", "There is no specific change \"inherent\" in appengine with respect to common aspects like lists. It is as just the same, plain python.\n" ]
[ 1, 0, 0 ]
[]
[]
[ "django", "google_app_engine", "list", "python" ]
stackoverflow_0000963932_django_google_app_engine_list_python.txt
Q: Serializing a Python object to/from a S60 phone I'm looking for a way to serialize generic Python objects between a CherryPy-based server and a Python client running on a Symbian phone.. Since pyS60 doesn't implement the pickle module, how would you do it? I know about Cerealizer but it requires you to register classes before use (which I'd like to avoid) and doesn't look very mature.. So, what would you use? Python 2.2's pickle module maybe, extracted from the sources? XML, JSON? Which one of the several libraries? :) A: What's wrong with using the pickle module? A: There is a json module someone wrote for PyS60. I'd simply grab that, serialize things into json and use that as the transfer method between the web/client app. For the json lib and a decent book on PyS60: http://www.mobilepythonbook.org/ A: The last versions of Python (>1.9) have the module pickle and cPickle are available Another alternative to JSON serialization is to use the netstring (look on wikipedia) format to serialize. It's actually more effective than JSON for binary objects. You can find a good netstring module here http://github.com/tuulos/aino/blob/d78c92985ff1d701ddf99c3445b97f452d4f7fe2/wp/node/netstring.py (or aino/wp/node/netstring.py)
Serializing a Python object to/from a S60 phone
I'm looking for a way to serialize generic Python objects between a CherryPy-based server and a Python client running on a Symbian phone.. Since pyS60 doesn't implement the pickle module, how would you do it? I know about Cerealizer but it requires you to register classes before use (which I'd like to avoid) and doesn't look very mature.. So, what would you use? Python 2.2's pickle module maybe, extracted from the sources? XML, JSON? Which one of the several libraries? :)
[ "What's wrong with using the pickle module?\n", "There is a json module someone wrote for PyS60. I'd simply grab that, serialize things into json and use that as the transfer method between the web/client app. \nFor the json lib and a decent book on PyS60:\nhttp://www.mobilepythonbook.org/\n", "The last versions of Python (>1.9) have the module pickle and cPickle are available\nAnother alternative to JSON serialization is to use the netstring (look on wikipedia) format to serialize. It's actually more effective than JSON for binary objects.\nYou can find a good netstring module here http://github.com/tuulos/aino/blob/d78c92985ff1d701ddf99c3445b97f452d4f7fe2/wp/node/netstring.py (or aino/wp/node/netstring.py)\n" ]
[ 2, 1, 1 ]
[]
[]
[ "pickle", "pys60", "python", "serialization" ]
stackoverflow_0000362484_pickle_pys60_python_serialization.txt
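To make the JSON suggestion above concrete: plain dicts and lists round-trip with no class registration at all, which sidesteps the Cerealizer issue. A minimal sketch, assuming a json-compatible module is importable on both ends (the stdlib json in Python 2.6+ on the server, simplejson or the PyS60 json module on the phone); the payload keys are illustrative.

try:
    import json
except ImportError:
    import simplejson as json  # assumed available on the client

payload = json.dumps({'cmd': 'sync', 'items': [1, 2, 3]})  # string to send
data = json.loads(payload)                                 # decode on the other end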
Q: How do I infer the class to which a @staticmethod belongs? I am trying to implement infer_class function that, given a method, figures out the class to which the method belongs. So far I have something like this: import inspect def infer_class(f): if inspect.ismethod(f): return f.im_self if f.im_class == type else f.im_class # elif ... what about staticmethod-s? else: raise TypeError("Can't infer the class of %r" % f) It does not work for @staticmethod-s because I was not able to come up with a way to achieve this. Any suggestions? Here's infer_class in action: >>> class Wolf(object): ... @classmethod ... def huff(cls, a, b, c): ... pass ... def snarl(self): ... pass ... @staticmethod ... def puff(k,l, m): ... pass ... >>> print infer_class(Wolf.huff) <class '__main__.Wolf'> >>> print infer_class(Wolf().huff) <class '__main__.Wolf'> >>> print infer_class(Wolf.snarl) <class '__main__.Wolf'> >>> print infer_class(Wolf().snarl) <class '__main__.Wolf'> >>> print infer_class(Wolf.puff) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<stdin>", line 6, in infer_class TypeError: Can't infer the class of <function puff at ...> A: That's because staticmethods really aren't methods. The staticmethod descriptor returns the original function as is. There is no way to get the class via which the function was accessed. But there is no real reason to use staticmethods for methods anyway, always use classmethods. The only use that I have found for staticmethods is to store function objects as class attributes and not have them turn into methods. A: I have trouble bringing myself to actually recommend this, but it does seem to work for straightforward cases, at least: import inspect def crack_staticmethod(sm): """ Returns (class, attribute name) for `sm` if `sm` is a @staticmethod. """ mod = inspect.getmodule(sm) for classname in dir(mod): cls = getattr(mod, classname, None) if cls is not None: try: ca = inspect.classify_class_attrs(cls) for attribute in ca: o = attribute.object if isinstance(o, staticmethod) and getattr(cls, sm.__name__) == sm: return (cls, sm.__name__) except AttributeError: pass
How do I infer the class to which a @staticmethod belongs?
I am trying to implement infer_class function that, given a method, figures out the class to which the method belongs. So far I have something like this: import inspect def infer_class(f): if inspect.ismethod(f): return f.im_self if f.im_class == type else f.im_class # elif ... what about staticmethod-s? else: raise TypeError("Can't infer the class of %r" % f) It does not work for @staticmethod-s because I was not able to come up with a way to achieve this. Any suggestions? Here's infer_class in action: >>> class Wolf(object): ... @classmethod ... def huff(cls, a, b, c): ... pass ... def snarl(self): ... pass ... @staticmethod ... def puff(k,l, m): ... pass ... >>> print infer_class(Wolf.huff) <class '__main__.Wolf'> >>> print infer_class(Wolf().huff) <class '__main__.Wolf'> >>> print infer_class(Wolf.snarl) <class '__main__.Wolf'> >>> print infer_class(Wolf().snarl) <class '__main__.Wolf'> >>> print infer_class(Wolf.puff) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<stdin>", line 6, in infer_class TypeError: Can't infer the class of <function puff at ...>
[ "That's because staticmethods really aren't methods. The staticmethod descriptor returns the original function as is. There is no way to get the class via which the function was accessed. But there is no real reason to use staticmethods for methods anyway, always use classmethods.\nThe only use that I have found for staticmethods is to store function objects as class attributes and not have them turn into methods.\n", "I have trouble bringing myself to actually recommend this, but it does seem to work for straightforward cases, at least:\nimport inspect\n\ndef crack_staticmethod(sm):\n \"\"\"\n Returns (class, attribute name) for `sm` if `sm` is a\n @staticmethod.\n \"\"\"\n mod = inspect.getmodule(sm)\n for classname in dir(mod):\n cls = getattr(mod, classname, None)\n if cls is not None:\n try:\n ca = inspect.classify_class_attrs(cls)\n for attribute in ca:\n o = attribute.object\n if isinstance(o, staticmethod) and getattr(cls, sm.__name__) == sm:\n return (cls, sm.__name__)\n except AttributeError:\n pass\n\n" ]
[ 3, 3 ]
[]
[]
[ "decorator", "inspect", "python", "static_methods" ]
stackoverflow_0000949259_decorator_inspect_python_static_methods.txt
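A short demonstration of why the inference fails, following the first answer above: the staticmethod wrapper only exists in the class __dict__, and ordinary attribute access hands back the bare function with no class reference attached.

class Wolf(object):
    @staticmethod
    def puff():
        pass

print type(Wolf.__dict__['puff'])  # <type 'staticmethod'> -- the wrapper
print Wolf.puff                    # a plain function object, no im_class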
Q: Django : Adding a property to the User class. Changing it at runtime and UserManager.create_user For various complicated reasons[1] I need to add extra properties to the Django User class. I can't use either Profile nor the "inheritance" way of doing this. (As in Extending the User model with custom fields in Django ) So what I've been doing is including the User class in my local_settings file. And adding the property to it there. This, perhaps surprisingly, seems to work in many cases. But not when I create a new User from UserManager.create_user(). So I need to patch an alternative to the UserManager.create_user() method. Looking at the source for this (in contrib.auth.models.py) I find that the class it uses to create the User is kept in a property called UserManager.model rather than referenced directly. The line is this : user = self.model(None, username, '', '', email.strip().lower(), 'placeholder', False, True, False, now, now) The problem is that this self.model (which I assume contains a reference to the User class) doesn't seem to be my patched version. So, does anyone know where this self.model is set-up in the case of UserManager? And whether I'm correct in assuming that at that point the code hasn't gone through local_settings so my patch to the User class isn't there? And if there's a better place to patch the class? cheers phil [1] To satisfy the cur ious. I need to make the User class use a different and existing table in the database, which has extra fields and constraints. Update : For future reference, it looks like Proxy Models are the way that Django's going to support what I need : http://code.djangoproject.com/ticket/10356 A: The usual way of the having site-specific user fields is to specify a user profile table in your settings.py. You can then retrieve the specific settings via a the u.user_profile() method. It's very well documented in the docs. A: You probably just need to make sure that you do the replacement/addition/monkey patch as early as you can (before the auth application is actually installed). The trouble though is that the model classes do some meta-class stuff that'd probably explain why the UserManager has the wrong class - as it will generate the original class, set the UserManager up with that class, then you'll do your stuff and things won't be the same. So in short be brutal when you replace the class. If you can extend the original class, then replace the original with the extended version: import django.contrib.auth.models from django.contrib.auth.models import User as OriginalUser class User(OriginalUser): pass # add your extra fields etc # then patch the module with your new version django.contrib.auth.models.User = User Or if that doesn't work duplicate the User class in your new class and then patch it in. It's all a bit dirty, but it may do what you want. A: If Django 1.1 beta isn't too bleeding edge for you, try proxy models.
Django : Adding a property to the User class. Changing it at runtime and UserManager.create_user
For various complicated reasons[1] I need to add extra properties to the Django User class. I can't use either Profile or the "inheritance" way of doing this. (As in Extending the User model with custom fields in Django) So what I've been doing is including the User class in my local_settings file. And adding the property to it there. This, perhaps surprisingly, seems to work in many cases. But not when I create a new User from UserManager.create_user(). So I need to patch an alternative to the UserManager.create_user() method. Looking at the source for this (in contrib.auth.models.py) I find that the class it uses to create the User is kept in a property called UserManager.model rather than referenced directly. The line is this: user = self.model(None, username, '', '', email.strip().lower(), 'placeholder', False, True, False, now, now) The problem is that this self.model (which I assume contains a reference to the User class) doesn't seem to be my patched version. So, does anyone know where this self.model is set up in the case of UserManager? And whether I'm correct in assuming that at that point the code hasn't gone through local_settings so my patch to the User class isn't there? And if there's a better place to patch the class? cheers phil [1] To satisfy the curious. I need to make the User class use a different and existing table in the database, which has extra fields and constraints. Update: For future reference, it looks like Proxy Models are the way that Django's going to support what I need: http://code.djangoproject.com/ticket/10356
[ "The usual way of the having site-specific user fields is to specify a user profile table in your settings.py. You can then retrieve the specific settings via a the u.user_profile() method. It's very well documented in the docs.\n", "You probably just need to make sure that you do the replacement/addition/monkey patch as early as you can (before the auth application is actually installed). The trouble though is that the model classes do some meta-class stuff that'd probably explain why the UserManager has the wrong class - as it will generate the original class, set the UserManager up with that class, then you'll do your stuff and things won't be the same.\nSo in short be brutal when you replace the class. If you can extend the original class, then replace the original with the extended version:\n\n\nimport django.contrib.auth.models\nfrom django.contrib.auth.models import User as OriginalUser\n\nclass User(OriginalUser):\n pass # add your extra fields etc\n\n# then patch the module with your new version\ndjango.contrib.auth.models.User = User\n\n\nOr if that doesn't work duplicate the User class in your new class and then patch it in.\nIt's all a bit dirty, but it may do what you want.\n", "If Django 1.1 beta isn't too bleeding edge for you, try proxy models.\n" ]
[ 2, 2, 2 ]
[]
[]
[ "django", "django_authentication", "patch", "python" ]
stackoverflow_0000964569_django_django_authentication_patch_python.txt
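To make the proxy-model update in the question above concrete, a minimal sketch of the Django 1.1+ syntax. Note the caveat: a proxy adds behaviour on top of the existing auth_user table, not new columns, so it covers the extra-properties part of the question but not the extra-fields part; the subclass and method names here are hypothetical.

from django.contrib.auth.models import User

class MyUser(User):
    class Meta:
        proxy = True  # same table as User, different Python class

    def display_name(self):  # hypothetical added behaviour
        return self.username.title()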
Q: Python error: IndexError: list assignment index out of range a=[] a.append(3) a.append(7) for j in range(2,23480): a[j]=a[j-2]+(j+2)*(j+3)/2 When I write this code, it gives an error like this: Traceback (most recent call last): File "C:/Python26/tcount2.py", line 6, in <module> a[j]=a[j-2]+(j+2)*(j+3)/2 IndexError: list assignment index out of range May I know why and how to debug it? A: Change this line of code: a[j]=a[j-2]+(j+2)*(j+3)/2 to this: a.append(a[j-2] + (j+2)*(j+3)/2) A: You're adding new elements, elements that do not exist yet. Hence you need to use append: since the items do not exist yet, you cannot reference them by index. Overview of operations on mutable sequence types. for j in range(2, 23480): a.append(a[j - 2] + (j + 2) * (j + 3) / 2) A: The reason for the error is that you're trying, as the error message says, to access a portion of the list that is currently out of range. For instance, assume you're creating a list of 10 people, and you try to specify who the 11th person on that list is going to be. On your paper-pad, it might be easy to just make room for another person, but runtime objects, like the list in python, isn't that forgiving. Your list starts out empty because of this: a = [] then you add 2 elements to it, with this code: a.append(3) a.append(7) this makes the size of the list just big enough to hold 2 elements, the two you added, which has an index of 0 and 1 (python lists are 0-based). In your code, further down, you then specify the contents of element j which starts at 2, and your code blows up immediately because you're trying to say "for a list of 2 elements, please store the following value as the 3rd element". Again, lists like the one in Python usually aren't that forgiving. Instead, you're going to have to do one of two things: In some cases, you want to store into an existing element, or add a new element, depending on whether the index you specify is available or not In other cases, you always want to add a new element In your case, you want to do nbr. 2, which means you want to rewrite this line of code: a[j]=a[j-2]+(j+2)*(j+3)/2 to this: a.append(a[j-2]+(j+2)*(j+3)/2) This will append a new element to the end of the list, which is OK, instead of trying to assign a new value to element N+1, where N is the current length of the list, which isn't OK. A: At j=2 you're trying to assign to a[2], which doesn't exist yet. You probably want to use append instead. A: If you want to debug it, just change your code to print out the current index as you go: a=[] a.append(3) a.append(7) for j in range(2,23480): print j # <-- this line a[j]=a[j-2]+(j+2)*(j+3)/2 But you'll probably find that it errors out the second you access a[2] or higher; you've only added two values, but you're trying to access the 3rd and onward. Try replacing your list ([]) with a dictionary ({}); that way, you can assign to whatever numbers you like -- or, if you really want a list, initialize it with 23479 items ([0] * 23479). A: Python lists must be pre-initialzed. You need to do a = [0]*23480 Or you can append if you are adding one at a time. I think it would probably be faster to preallocate the array. A: Python does not dynamically increase the size of an array when you assign to an element. You have to use a.append(element) to add an element onto the end, or a.insert(i, element) to insert the element at the position before i.
Python error: IndexError: list assignment index out of range
a=[] a.append(3) a.append(7) for j in range(2,23480): a[j]=a[j-2]+(j+2)*(j+3)/2 When I write this code, it gives an error like this: Traceback (most recent call last): File "C:/Python26/tcount2.py", line 6, in <module> a[j]=a[j-2]+(j+2)*(j+3)/2 IndexError: list assignment index out of range May I know why and how to debug it?
[ "Change this line of code:\na[j]=a[j-2]+(j+2)*(j+3)/2\n\nto this:\na.append(a[j-2] + (j+2)*(j+3)/2)\n\n", "You're adding new elements, elements that do not exist yet. Hence you need to use append: since the items do not exist yet, you cannot reference them by index. Overview of operations on mutable sequence types.\nfor j in range(2, 23480):\n a.append(a[j - 2] + (j + 2) * (j + 3) / 2)\n\n", "The reason for the error is that you're trying, as the error message says, to access a portion of the list that is currently out of range.\nFor instance, assume you're creating a list of 10 people, and you try to specify who the 11th person on that list is going to be. On your paper-pad, it might be easy to just make room for another person, but runtime objects, like the list in python, isn't that forgiving.\nYour list starts out empty because of this:\na = []\n\nthen you add 2 elements to it, with this code:\na.append(3)\na.append(7)\n\nthis makes the size of the list just big enough to hold 2 elements, the two you added, which has an index of 0 and 1 (python lists are 0-based).\nIn your code, further down, you then specify the contents of element j which starts at 2, and your code blows up immediately because you're trying to say \"for a list of 2 elements, please store the following value as the 3rd element\".\nAgain, lists like the one in Python usually aren't that forgiving.\nInstead, you're going to have to do one of two things:\n\nIn some cases, you want to store into an existing element, or add a new element, depending on whether the index you specify is available or not\nIn other cases, you always want to add a new element\n\nIn your case, you want to do nbr. 2, which means you want to rewrite this line of code:\na[j]=a[j-2]+(j+2)*(j+3)/2\n\nto this:\na.append(a[j-2]+(j+2)*(j+3)/2)\n\nThis will append a new element to the end of the list, which is OK, instead of trying to assign a new value to element N+1, where N is the current length of the list, which isn't OK.\n", "At j=2 you're trying to assign to a[2], which doesn't exist yet. You probably want to use append instead.\n", "If you want to debug it, just change your code to print out the current index as you go:\n a=[]\n a.append(3)\n a.append(7)\n\n for j in range(2,23480):\n print j # <-- this line\n a[j]=a[j-2]+(j+2)*(j+3)/2\n\nBut you'll probably find that it errors out the second you access a[2] or higher; you've only added two values, but you're trying to access the 3rd and onward.\nTry replacing your list ([]) with a dictionary ({}); that way, you can assign to whatever numbers you like -- or, if you really want a list, initialize it with 23479 items ([0] * 23479).\n", "Python lists must be pre-initialzed. You need to do a = [0]*23480\nOr you can append if you are adding one at a time. I think it would probably be faster to preallocate the array.\n", "Python does not dynamically increase the size of an array when you assign to an element. You have to use a.append(element) to add an element onto the end, or a.insert(i, element) to insert the element at the position before i.\n" ]
[ 7, 6, 3, 1, 1, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0000966983_python.txt
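Putting the accepted fix together, the corrected script from the entry above in full — append() grows the list, so no index is ever out of range:

a = [3, 7]
for j in range(2, 23480):
    a.append(a[j - 2] + (j + 2) * (j + 3) / 2)
print len(a)  # 23480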
Q: Unicode Problem with SQLAlchemy I know I'm having a problem with a conversion from Unicode but I'm not sure where it's happening. I'm extracting data about a recent Eruopean trip from a directory of HTML files. Some of the location names have non-ASCII characters (such as é, ô, ü). I'm getting the data from a string representation of the the file using regex. If i print the locations as I find them, they print with the characters so the encoding must be ok: Le Pré-Saint-Gervais, France Hôtel-de-Ville, France I'm storing the data in a SQLite table using SQLAlchemy: Base = declarative_base() class Point(Base): __tablename__ = 'points' id = Column(Integer, primary_key=True) pdate = Column(Date) ptime = Column(Time) location = Column(Unicode(32)) weather = Column(String(16)) high = Column(Float) low = Column(Float) lat = Column(String(16)) lon = Column(String(16)) image = Column(String(64)) caption = Column(String(64)) def __init__(self, filename, pdate, ptime, location, weather, high, low, lat, lon, image, caption): self.filename = filename self.pdate = pdate self.ptime = ptime self.location = location self.weather = weather self.high = high self.low = low self.lat = lat self.lon = lon self.image = image self.caption = caption def __repr__(self): return "<Point('%s','%s','%s')>" % (self.filename, self.pdate, self.ptime) engine = create_engine('sqlite:///:memory:', echo=False) Base.metadata.create_all(engine) Session = sessionmaker(bind = engine) session = Session() I loop through the files and insert the data from each one into the database: for filename in filelist: # open the file and extract the information using regex such as: location_re = re.compile("<h2>(.*)</h2>",re.M) # extract other data newpoint = Point(filename, pdate, ptime, location, weather, high, low, lat, lon, image, caption) session.add(newpoint) session.commit() I see the following warning on each insert: /usr/lib/python2.5/site-packages/SQLAlchemy-0.5.4p2-py2.5.egg/sqlalchemy/engine/default.py:230: SAWarning: Unicode type received non-unicode bind param value 'Spitalfields, United Kingdom' param.append(processors[key](compiled_params[key])) And when I try to do anything with the table such as: session.query(Point).all() I get: Traceback (most recent call last): File "./extract_trips.py", line 131, in <module> session.query(Point).all() File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.4p2-py2.5.egg/sqlalchemy/orm/query.py", line 1193, in all return list(self) File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.4p2-py2.5.egg/sqlalchemy/orm/query.py", line 1341, in instances fetch = cursor.fetchall() File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.4p2-py2.5.egg/sqlalchemy/engine/base.py", line 1642, in fetchall self.connection._handle_dbapi_exception(e, None, None, self.cursor, self.context) File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.4p2-py2.5.egg/sqlalchemy/engine/base.py", line 931, in _handle_dbapi_exception raise exc.DBAPIError.instance(statement, parameters, e, connection_invalidated=is_disconnect) sqlalchemy.exc.OperationalError: (OperationalError) Could not decode to UTF-8 column 'points_location' with text 'Le Pré-Saint-Gervais, France' None None I would like to be able to correctly store and then return the location names with the original characters intact. Any help would be much appreciated. 
A: I found this article that helped explain my troubles somewhat: http://www.amk.ca/python/howto/unicode#reading-and-writing-unicode-data I was able to get the desired results by using the 'codecs' module and then changing my program as follows: When opening the file: infile = codecs.open(filename, 'r', encoding='iso-8859-1') When printing the location: print location.encode('ISO-8859-1') I can now query and manipulate the data from the table without the error from before. I just have to specify the encoding when I output the text. (I still don't entirely understand how this is working so I guess it's time to learn more about Python's unicode handling...) A: Try using a column type of Unicode rather than String for the unicode columns: Base = declarative_base() class Point(Base): __tablename__ = 'points' id = Column(Integer, primary_key=True) pdate = Column(Date) ptime = Column(Time) location = Column(Unicode(32)) weather = Column(String(16)) high = Column(Float) low = Column(Float) lat = Column(String(16)) lon = Column(String(16)) image = Column(String(64)) caption = Column(String(64)) Edit: Response to comment: If you're getting warnings about unicode encodings then there are two things you can try: Convert your location to unicode. This would mean having your Point created like this: newpoint = Point(filename, pdate, ptime, unicode(location), weather, high, low, lat, lon, image, caption) The unicode conversion will produce a unicode string when passed either a string or a unicode string, so you don't need to worry about what you pass in. If that doesn't solve the encoding issues, try calling encode on your unicode objects. That would mean using code like: newpoint = Point(filename, pdate, ptime, unicode(location).encode('utf-8'), weather, high, low, lat, lon, image, caption) This step probably won't be necessary but what it essentially does is converts a unicode object from unicode code-points to a specific byte representation (in this case, utf-8). I'd expect SQLAlchemy to do this for you when you pass in unicode objects but it may not. A: From sqlalchemy.org See section 0.4.2 added new flag to String and create_engine(), assert _unicode=(True|False|'warn'|None). Defaults to False or None on create _engine() and String, 'warn' on the Unicode type. When True, results in all unicode conversion operations raising an exception when a non-unicode bytestring is passed as a bind parameter. 'warn' results in a warning. It is strongly advised that all unicode-aware applications make proper use of Python unicode objects (i.e. u'hello' and not 'hello') so that data round trips accurately. I think you are trying to input a non-unicode bytestring. Perhaps this might lead you on the right track? Some form of conversion is needed, compare 'hello' and u'hello'. Cheers
Unicode Problem with SQLAlchemy
I know I'm having a problem with a conversion from Unicode but I'm not sure where it's happening. I'm extracting data about a recent European trip from a directory of HTML files. Some of the location names have non-ASCII characters (such as é, ô, ü). I'm getting the data from a string representation of the file using regex. If I print the locations as I find them, they print with the characters so the encoding must be ok: Le Pré-Saint-Gervais, France Hôtel-de-Ville, France I'm storing the data in a SQLite table using SQLAlchemy: Base = declarative_base() class Point(Base): __tablename__ = 'points' id = Column(Integer, primary_key=True) pdate = Column(Date) ptime = Column(Time) location = Column(Unicode(32)) weather = Column(String(16)) high = Column(Float) low = Column(Float) lat = Column(String(16)) lon = Column(String(16)) image = Column(String(64)) caption = Column(String(64)) def __init__(self, filename, pdate, ptime, location, weather, high, low, lat, lon, image, caption): self.filename = filename self.pdate = pdate self.ptime = ptime self.location = location self.weather = weather self.high = high self.low = low self.lat = lat self.lon = lon self.image = image self.caption = caption def __repr__(self): return "<Point('%s','%s','%s')>" % (self.filename, self.pdate, self.ptime) engine = create_engine('sqlite:///:memory:', echo=False) Base.metadata.create_all(engine) Session = sessionmaker(bind = engine) session = Session() I loop through the files and insert the data from each one into the database: for filename in filelist: # open the file and extract the information using regex such as: location_re = re.compile("<h2>(.*)</h2>",re.M) # extract other data newpoint = Point(filename, pdate, ptime, location, weather, high, low, lat, lon, image, caption) session.add(newpoint) session.commit() I see the following warning on each insert: /usr/lib/python2.5/site-packages/SQLAlchemy-0.5.4p2-py2.5.egg/sqlalchemy/engine/default.py:230: SAWarning: Unicode type received non-unicode bind param value 'Spitalfields, United Kingdom' param.append(processors[key](compiled_params[key])) And when I try to do anything with the table such as: session.query(Point).all() I get: Traceback (most recent call last): File "./extract_trips.py", line 131, in <module> session.query(Point).all() File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.4p2-py2.5.egg/sqlalchemy/orm/query.py", line 1193, in all return list(self) File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.4p2-py2.5.egg/sqlalchemy/orm/query.py", line 1341, in instances fetch = cursor.fetchall() File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.4p2-py2.5.egg/sqlalchemy/engine/base.py", line 1642, in fetchall self.connection._handle_dbapi_exception(e, None, None, self.cursor, self.context) File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.4p2-py2.5.egg/sqlalchemy/engine/base.py", line 931, in _handle_dbapi_exception raise exc.DBAPIError.instance(statement, parameters, e, connection_invalidated=is_disconnect) sqlalchemy.exc.OperationalError: (OperationalError) Could not decode to UTF-8 column 'points_location' with text 'Le Pré-Saint-Gervais, France' None None I would like to be able to correctly store and then return the location names with the original characters intact. Any help would be much appreciated.
[ "I found this article that helped explain my troubles somewhat:\nhttp://www.amk.ca/python/howto/unicode#reading-and-writing-unicode-data\nI was able to get the desired results by using the 'codecs' module and then changing my program as follows:\nWhen opening the file:\ninfile = codecs.open(filename, 'r', encoding='iso-8859-1')\n\nWhen printing the location:\nprint location.encode('ISO-8859-1')\n\nI can now query and manipulate the data from the table without the error from before. I just have to specify the encoding when I output the text.\n(I still don't entirely understand how this is working so I guess it's time to learn more about Python's unicode handling...)\n", "Try using a column type of Unicode rather than String for the unicode columns:\nBase = declarative_base()\nclass Point(Base):\n __tablename__ = 'points'\n\n id = Column(Integer, primary_key=True)\n pdate = Column(Date)\n ptime = Column(Time)\n location = Column(Unicode(32))\n weather = Column(String(16))\n high = Column(Float)\n low = Column(Float)\n lat = Column(String(16))\n lon = Column(String(16))\n image = Column(String(64))\n caption = Column(String(64))\n\nEdit: Response to comment:\nIf you're getting warnings about unicode encodings then there are two things you can try:\n\nConvert your location to unicode. This would mean having your Point created like this:\nnewpoint = Point(filename, pdate, ptime, unicode(location), weather, high, low, lat, lon, image, caption)\nThe unicode conversion will produce a unicode string when passed either a string or a unicode string, so you don't need to worry about what you pass in.\nIf that doesn't solve the encoding issues, try calling encode on your unicode objects. That would mean using code like:\nnewpoint = Point(filename, pdate, ptime, unicode(location).encode('utf-8'), weather, high, low, lat, lon, image, caption)\nThis step probably won't be necessary but what it essentially does is converts a unicode object from unicode code-points to a specific byte representation (in this case, utf-8). I'd expect SQLAlchemy to do this for you when you pass in unicode objects but it may not.\n\n", "From sqlalchemy.org\nSee section 0.4.2\n\nadded new flag to String and\n create_engine(),\n assert _unicode=(True|False|'warn'|None).\n Defaults to False or None on\n create _engine() and String, 'warn' on the Unicode type. When\n True,\n results in all unicode conversion operations raising an\n exception when a\n non-unicode bytestring is passed as a bind parameter. 'warn' results\n in a warning. It is strongly advised that all unicode-aware\n applications\n make proper use of Python unicode objects (i.e. u'hello' and not\n 'hello')\n so that data round trips accurately.\n\nI think you are trying to input a non-unicode bytestring. Perhaps this might lead you on the right track? Some form of conversion is needed, compare 'hello' and u'hello'.\nCheers\n" ]
[ 11, 7, 7 ]
[]
[]
[ "character_encoding", "encoding", "python", "sqlalchemy", "unicode" ]
stackoverflow_0000966352_character_encoding_encoding_python_sqlalchemy_unicode.txt
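The underlying rule in the answers above is to decode at the boundary so that only unicode objects ever reach the Unicode column. A minimal sketch — the byte string below stands in for text read from one of the HTML files, assumed to be Latin-1 encoded:

raw = 'Le Pr\xe9-Saint-Gervais, France'  # bytes as read from disk
location = raw.decode('iso-8859-1')       # now a unicode object
# equivalently, codecs.open(filename, 'r', encoding='iso-8859-1') yields
# unicode from the start; passing such a `location` to Point() raises no
# SAWarning, and session.query(...).all() then decodes cleanly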
Q: Debug variable in Python I want to separate the debug outputs from production ones by defining a variable that can be used throughput the module. It cannot be defined in environment. Any suggestions for globals reused across classes in modules? Additionally is there a way to configure this variable flag for telling App Engine that dont use this code. A: Have a look at the logging module, which is fully supported by Google App Engine. You can specify logging levels such as debug, warning, error, etc. They will show up in the dev server console, and will also be stored in the request log. If you're after executing specific code only when running the dev server, you can do this: if os.environ['SERVER_SOFTWARE'].startswith('Development'): print 'Hello world!' The SERVER_SOFTWARE variable is always set by Google App Engine. As for module specific variables; modules are objects and can have values just as any other object: my_module.debug = True A: All module-level variables are global to all classes in the module. Here's my file: mymodule.py import this import that DEBUG = True class Foo( object ): def __init__( self ): if DEBUG: print self.__class__, "__init__" # etc. class Bar( object ): def do_work( self ): if DEBUG: print self.__class__, "do_work" # etc. A single, module-level DEBUG variable will be found by all instances of these two classes. Other modules (e.g,. this.py and that.py) can have their own DEBUG variables. These would be this.DEBUG or that.DEBUG, and are unrelated.
Debug variable in Python
I want to separate the debug outputs from production ones by defining a variable that can be used throughout the module. It cannot be defined in the environment. Any suggestions for globals reused across classes in modules? Additionally, is there a way to configure this variable as a flag telling App Engine not to use this code?
[ "Have a look at the logging module, which is fully supported by Google App Engine. You can specify logging levels such as debug, warning, error, etc. They will show up in the dev server console, and will also be stored in the request log.\nIf you're after executing specific code only when running the dev server, you can do this:\nif os.environ['SERVER_SOFTWARE'].startswith('Development'):\n print 'Hello world!'\n\nThe SERVER_SOFTWARE variable is always set by Google App Engine.\nAs for module specific variables; modules are objects and can have values just as any other object:\nmy_module.debug = True\n\n", "All module-level variables are global to all classes in the module.\nHere's my file: mymodule.py\nimport this\nimport that\n\nDEBUG = True\n\nclass Foo( object ):\n def __init__( self ):\n if DEBUG: print self.__class__, \"__init__\"\n # etc.\n\nclass Bar( object ):\n def do_work( self ):\n if DEBUG: print self.__class__, \"do_work\"\n # etc.\n\nA single, module-level DEBUG variable will be found by all instances of these two classes. Other modules (e.g,. this.py and that.py) can have their own DEBUG variables. These would be this.DEBUG or that.DEBUG, and are unrelated.\n" ]
[ 12, 1 ]
[]
[]
[ "google_app_engine", "python" ]
stackoverflow_0000966571_google_app_engine_python.txt
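A small sketch combining the two answers, gating the logging level on SERVER_SOFTWARE so debug output only shows up on the dev server (the messages are placeholders):

    import os
    import logging

    # verbose on the dev server, warnings and above in production
    if os.environ.get('SERVER_SOFTWARE', '').startswith('Development'):
        logging.getLogger().setLevel(logging.DEBUG)
    else:
        logging.getLogger().setLevel(logging.WARNING)

    logging.debug('only visible on the dev server')
    logging.warning('visible everywhere')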
Q: How do cursors work in Python's DB-API? I have been using python with RDBMS' (MySQL and PostgreSQL), and I have noticed that I really do not understand how to use a cursor. Usually, one has his script connect to the DB via a client DB-API (like psycopg2 or MySQLdb): connection = psycopg2.connect(host='otherhost', etc) And then one creates a cursor: cursor = connection.cursor() And then one can issue queries and commands: cursor.execute("SELECT * FROM etc") Now where is the result of the query, I wonder? Is it on the server? Or a little on my client and a little on my server? And then, if we need to access some results, we fetch 'em: rows = cursor.fetchone() or rows = cursor.fetchmany() Now let's say I do not retrieve all the rows, and decide to execute another query. What will happen to the previous results? Is there an overhead? Also, should I create a cursor for every form of command and continuously reuse it for those same commands somehow? I heard psycopg2 can somehow optimize commands that are executed many times but with different values. How, and is it worth it? Thx A: ya, i know it's months old :P DB-API's cursor appears to be closely modeled after SQL cursors. AFA resource(rows) management is concerned, DB-API does not specify whether the client must retrieve all the rows or DECLARE an actual SQL cursor. As long as the fetchXXX interfaces do what they're supposed to, DB-API is happy. AFA psycopg2 cursors are concerned(as you may well know), "unnamed DB-API cursors" will fetch the entire result set--AFAIK buffered in memory by libpq. "named DB-API cursors"(a psycopg2 concept that may not be portable), will request the rows on demand(fetchXXX methods). As cited by "unbeknown", executemany can be used to optimize multiple runs of the same command. However, it doesn't accommodate the need for prepared statements; when repeat executions of a statement with different parameter sets are not directly sequential, executemany() will perform just as well as execute(). DB-API does "provide" driver authors with the ability to cache executed statements, but its implementation(what's the scope/lifetime of the statement?) is undefined, so it's impossible to set expectations across DB-API implementations. If you are loading lots of data into PostgreSQL, I would strongly recommend trying to find a way to use COPY. A: Assuming you're using PostgreSQL, the cursors probably are just implemented using the database's native cursor API. You may want to look at the source code for pg8000, a pure Python PostgreSQL DB-API module, to see how it handles cursors. You might also like to look at the PostgreSQL documentation for cursors. A: When you look here at the mysqldb documentation you can see that they implemented different strategies for cursors. So the general answer is: it depends. Edit: Here is the mysqldb API documentation. There is some info on how each cursor type behaves. The standard cursor is storing the result set in the client. So I assume there is an overhead if you don't retrieve all result rows, because even the rows you don't fetch have to be transferred to the client (potentially over the network). My guess is that it is not that different from postgresql. When you want to optimize SQL statements that you call repeatedly with many values, you should look at cursor.executemany(). It prepares a SQL statement so that it doesn't need to be parsed every time you call it: cur.executemany('INSERT INTO mytable (col1, col2) VALUES (%s, %s)', [('val1', 1), ('val2', 2)])
How do cursors work in Python's DB-API?
I have been using python with RDBMS' (MySQL and PostgreSQL), and I have noticed that I really do not understand how to use a cursor. Usually, one has his script connect to the DB via a client DB-API (like psycopg2 or MySQLdb): connection = psycopg2.connect(host='otherhost', etc) And then one creates a cursor: cursor = connection.cursor() And then one can issue queries and commands: cursor.execute("SELECT * FROM etc") Now where is the result of the query, I wonder? Is it on the server? Or a little on my client and a little on my server? And then, if we need to access some results, we fetch 'em: rows = cursor.fetchone() or rows = cursor.fetchmany() Now let's say I do not retrieve all the rows, and decide to execute another query. What will happen to the previous results? Is there an overhead? Also, should I create a cursor for every form of command and continuously reuse it for those same commands somehow? I heard psycopg2 can somehow optimize commands that are executed many times but with different values. How, and is it worth it? Thx
[ "ya, i know it's months old :P\nDB-API's cursor appears to be closely modeled after SQL cursors. AFA resource(rows) management is concerned, DB-API does not specify whether the client must retrieve all the rows or DECLARE an actual SQL cursor. As long as the fetchXXX interfaces do what they're supposed to, DB-API is happy.\nAFA psycopg2 cursors are concerned(as you may well know), \"unnamed DB-API cursors\" will fetch the entire result set--AFAIK buffered in memory by libpq. \"named DB-API cursors\"(a psycopg2 concept that may not be portable), will request the rows on demand(fetchXXX methods).\nAs cited by \"unbeknown\", executemany can be used to optimize multiple runs of the same command. However, it doesn't accommodate for the need of prepared statements; when repeat executions of a statement with different parameter sets is not directly sequential, executemany() will perform just as well as execute(). DB-API does \"provide\" driver authors with the ability to cache executed statements, but its implementation(what's the scope/lifetime of the statement?) is undefined, so it's impossible to set expectations across DB-API implementations.\nIf you are loading lots of data into PostgreSQL, I would strongly recommend trying to find a way to use COPY.\n", "Assuming you're using PostgreSQL, the cursors probably are just implemented using the database's native cursor API. You may want to look at the source code for pg8000, a pure Python PostgreSQL DB-API module, to see how it handles cursors. You might also like to look at the PostgreSQL documentation for cursors.\n", "When you look here at the mysqldb documentation you can see that they implemented different strategies for cursors. So the general answer is: it depends.\nEdit: Here is the mysqldb API documentation. There is some info how each cursor type is behaving. The standard cursor is storing the result set in the client. So I assume there is a overhead if you don't retrieve all result rows, because even the rows you don't fetch have to be transfered to the client (potentially over the network). My guess is that it is not that different from postgresql.\nWhen you want to optimize SQL statements that you call repeatedly with many values, you should look at cursor.executemany(). It prepares a SQL statement so that it doesn't need to be parsed every time you call it:\ncur.executemany('INSERT INTO mytable (col1, col2) VALUES (%s, %s)',\n [('val1', 1), ('val2', 2)])\n\n" ]
[ 8, 2, 1 ]
[]
[]
[ "cursors", "performance", "psycopg2", "python", "rdbms" ]
stackoverflow_0000454337_cursors_performance_psycopg2_python_rdbms.txt
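A short sketch of the named-cursor behaviour the first answer describes, assuming a reachable PostgreSQL database (the connection parameters are placeholders). Giving the cursor a name makes psycopg2 declare a server-side cursor, so rows are pulled in batches instead of being buffered client-side:

    import psycopg2

    conn = psycopg2.connect(host='otherhost', dbname='mydb')  # placeholder params

    cur = conn.cursor('big_read')      # named, therefore server-side
    cur.execute("SELECT * FROM etc")
    while True:
        rows = cur.fetchmany(500)      # fetches a batch from the server
        if not rows:
            break
        for row in rows:
            pass                       # process each row here
    cur.close()
    conn.close()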
Q: Level control of Select inputs using Django Forms API I'm wanting to add a label= attribute to an option element of a Select form input using the Django Forms API without overwriting the Select widget's render_options method. Is this possible, if so, how? Note: I'm wanting to add a label directly to the option (this is valid in the XHTML Strict standard) not an optgroup. A: I just wrote a class to do that: from django.forms.widgets import Select from django.utils.encoding import force_unicode from itertools import chain from django.utils.html import escape, conditional_escape class ExtendedSelect(Select): """ A subclass of Select that adds the possibility to define additional properties on options. It works as Select, except that the ``choices`` parameter takes a list of 3 elements tuples containing ``(value, label, attrs)``, where ``attrs`` is a dict containing the additional attributes of the option. """ def render_options(self, choices, selected_choices): def render_option(option_value, option_label, attrs): option_value = force_unicode(option_value) selected_html = (option_value in selected_choices) and u' selected="selected"' or '' attrs_html = [] for k, v in attrs.items(): attrs_html.append('%s="%s"' % (k, escape(v))) if attrs_html: attrs_html = " " + " ".join(attrs_html) else: attrs_html = "" return u'<option value="%s"%s%s>%s</option>' % ( escape(option_value), selected_html, attrs_html, conditional_escape(force_unicode(option_label))) # Normalize to strings. selected_choices = set([force_unicode(v) for v in selected_choices]) output = [] for option_value, option_label, option_attrs in chain(self.choices, choices): if isinstance(option_label, (list, tuple)): output.append(u'<optgroup label="%s">' % escape(force_unicode(option_value))) for option in option_label: output.append(render_option(*option)) output.append(u'</optgroup>') else: output.append(render_option(option_value, option_label, option_attrs)) return u'\n'.join(output) Example: select = ExtendedSelect(choices=( (1, "option 1", {"label": "label 1"}), (2, "option 2", {"label": "label 2"}), )) A: I'm afraid this isn't possible without subclassing the Select widget to provide your own rendering, as you've guessed. The code for Select doesn't include any attributes for each <option> item. It covers the option value, the "selected" status, and the label... that's all, I'm afraid: def render_option(option_value, option_label): option_value = force_unicode(option_value) selected_html = (option_value in selected_choices) and u' selected="selected"' or '' return u'<option value="%s"%s>%s</option>' % ( escape(option_value), selected_html, conditional_escape(force_unicode(option_label)))
Level control of Select inputs using Django Forms API
I'm wanting to add a label= attribute to an option element of a Select form input using the Django Forms API without overwriting the Select widget's render_options method. Is this possible, if so, how? Note: I'm wanting to add a label directly to the option (this is valid in the XHTML Strict standard) not an optgroup.
[ "I just wrote a class to do that:\nfrom django.forms.widgets import Select\nfrom django.utils.encoding import force_unicode\nfrom itertools import chain\nfrom django.utils.html import escape, conditional_escape\n\n\nclass ExtendedSelect(Select):\n \"\"\"\n A subclass of Select that adds the possibility to define additional \n properties on options.\n\n It works as Select, except that the ``choices`` parameter takes a list of\n 3 elements tuples containing ``(value, label, attrs)``, where ``attrs``\n is a dict containing the additional attributes of the option.\n \"\"\"\n\n def render_options(self, choices, selected_choices):\n def render_option(option_value, option_label, attrs):\n option_value = force_unicode(option_value)\n selected_html = (option_value in selected_choices) and u' selected=\"selected\"' or ''\n attrs_html = []\n for k, v in attrs.items():\n attrs_html.append('%s=\"%s\"' % (k, escape(v)))\n if attrs_html:\n attrs_html = \" \" + \" \".join(attrs_html)\n else:\n attrs_html = \"\"\n return u'<option value=\"%s\"%s%s>%s</option>' % (\n escape(option_value), selected_html, attrs_html,\n conditional_escape(force_unicode(option_label)))\n # Normalize to strings.\n selected_choices = set([force_unicode(v) for v in selected_choices])\n output = []\n for option_value, option_label, option_attrs in chain(self.choices, choices):\n if isinstance(option_label, (list, tuple)):\n output.append(u'<optgroup label=\"%s\">' % escape(force_unicode(option_value)))\n for option in option_label:\n output.append(render_option(*option))\n output.append(u'</optgroup>')\n else:\n output.append(render_option(option_value, option_label,\n option_attrs))\n return u'\\n'.join(output)\n\nExample:\nselect = ExtendedSelect(choices=(\n (1, \"option 1\", {\"label\": \"label 1\"}),\n (2, \"option 2\", {\"label\": \"label 2\"}),\n ))\n\n", "I'm afraid this isn't possible without subclassing the Select widget to provide your own rendering, as you've guessed. The code for Select doesn't include any attributes for each <option> item. It covers the option value, the \"selected\" status, and the label... that's all, I'm afraid:\ndef render_option(option_value, option_label):\n option_value = force_unicode(option_value)\n selected_html = (option_value in selected_choices) and u' selected=\"selected\"' or ''\n return u'<option value=\"%s\"%s>%s</option>' % (\n escape(option_value), selected_html,\n conditional_escape(force_unicode(option_label)))\n\n" ]
[ 2, 1 ]
[]
[]
[ "django", "django_forms", "python" ]
stackoverflow_0000965082_django_django_forms_python.txt
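A quick way to eyeball the generated markup, using the standard Widget.render(name, value) entry point (the field name and selected value are arbitrary):

    select = ExtendedSelect(choices=(
        (1, "option 1", {"label": "label 1"}),
        (2, "option 2", {"label": "label 2"}),
    ))
    # prints the <select> element with the extra label= attributes,
    # with the option whose value is 2 marked selected
    print select.render("creator", 2)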
Q: Image distortion after sending through a WSGI app in Python A lot of the time when I send image data over WSGI (using wsgiref), the image comes out distorted. As an example, examine the following: (source: evanfosmark.com) A: As you haven't posted the code, here is a simple example which works correctly with Python 2.5 on Windows: from wsgiref.simple_server import make_server def serveImage(environ, start_response): status = '200 OK' headers = [('Content-type', 'image/png')] start_response(status, headers) return open("about.png", "rb").read() httpd = make_server('', 8000, serveImage) httpd.serve_forever() Maybe instead of "rb" you are using "r". A: It had to do with \n not being converted properly. I'd like to thank Alex Martelli for pointing me in the right direction. A: Maybe the result is getting truncated? Try wget or curl to fetch the file directly and cmp it to the original image; that should help debug it. Beyond that, post your full code and environment details even if it's simple.
Image distortion after sending through a WSGI app in Python
A lot of the time when I send image data over WSGI (using wsgiref), the image comes out distorted. As an example, examine the following: (source: evanfosmark.com)
[ "As you haven't posted the code, here is a simple code which correctly works\nwith python 2.5 on windows\nfrom wsgiref.simple_server import make_server\n\ndef serveImage(environ, start_response):\n status = '200 OK'\n headers = [('Content-type', 'image/png')]\n start_response(status, headers)\n\n return open(\"about.png\", \"rb\").read()\n\nhttpd = make_server('', 8000, serveImage)\nhttpd.serve_forever()\n\nmay be instead of \"rb\" you are using \"r\"\n", "It had to do with \\n not being converted properly. I'd like to thank Alex Martelli for pointing me in the right direction.\n", "Maybe the result is getting truncated? Try wget or curl to fetch the file directly and cmp it to the original image; that should help debug it. Beyond that, post your full code and environment details even if it's simple.\n" ]
[ 3, 1, 0 ]
[]
[]
[ "python", "wsgi" ]
stackoverflow_0000967826_python_wsgi.txt
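The underlying trap is text mode: on Windows, open(..., 'r') translates line endings, which corrupts any binary byte that happens to be 0x0A. A variant of the server above using wsgiref's FileWrapper, which streams the file instead of reading it all at once (the file name is a placeholder):

    from wsgiref.simple_server import make_server
    from wsgiref.util import FileWrapper

    def serve_image(environ, start_response):
        start_response('200 OK', [('Content-type', 'image/png')])
        # 'rb' is essential: text mode would mangle the image bytes on Windows
        return FileWrapper(open('about.png', 'rb'))

    httpd = make_server('', 8000, serve_image)
    httpd.serve_forever()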
Q: installing python libraries Ok, so I've downloaded the following library: http://www.lag.net/paramiko/ and I can't seem to figure out how to install it on my local machine: Mac OS X 10.4.11 A: To use the package that you got from the web-site: "python setup.py install" My advice is to use easy_install instead of downloading packages straight from the project web-site. To do this, you must first install setuptools. Then just use the command "easy_install paramiko". As you use lots of different packages, this ends up saving you lots of hassle.
installing python libraries
Ok, so I've downloaded the following library: http://www.lag.net/paramiko/ and I can't seem to figure out how to install it on my local machine: Mac OS X 10.4.11
[ "To use the package that you got from the web-site: \"python setup.py install\n\"\nMy advice is to use easy_install instead of downloading packages straight from the project web-site.\nTo do this, you must first install setuptools.\nThen just use the command \"easy_install paramiko\".\nAs you use lots of different packages, this ends up saving you lots of hassle.\n" ]
[ 6 ]
[]
[]
[ "package_management", "python" ]
stackoverflow_0000968116_package_management_python.txt
Q: Python Advice for a beginner. Regex, Dictionaries etc? I'm writing my second python script to try and parse the contents of a config file and would like some noob advice. I'm not sure if it's best to use regex to parse my script since it spans multiple lines. I've also been reading about dictionaries and wondered if this would be good practice. I'm not necessarily looking for the code, just a push in the right direction. Example: My config file looks like this. Job { Name = "host.domain.com-foo" Client = host.domain.com-fd JobDefs = "DefaultJob" FileSet = "local" Write Bootstrap = "/etc/foo/host.domain.com-foo.bsr" Pool = storage-disk1 } Should I use regex, line splitting or maybe a module? If I had multiple jobs in my config file would I use a dictionary to correlate a job to a pool? A: If you can change the configuration file format, you can directly write your file as a Python file. config.py job = { 'Name' : "host.domain.com-foo", 'Client' : "host.domain.com-fd", 'JobDefs' : "DefaultJob", 'FileSet' : "local", 'Write Bootstrap' : "/etc/foo/host.domain.com-foo.bsr", 'Pool' : 'storage-disk1' } yourscript.py from config import job print job['Name'] A: There are numerous existing alternatives for this task, json, pickle and yaml to name 3. Unless you really want to implement this yourself, you should use one of these. Even if you do roll your own, following the format of one of the above is still a good idea. Also, it's a much better idea to use a parser/generator or similar tool to do the parsing; regexes are going to be harder to maintain and more inefficient for this type of task. A: If your config file can be turned into a python file, just make it a dictionary and import the module. Job = { "Name" : "host.domain.com-foo", "Client" : "host.domain.com-fd", "JobDefs" : "DefaultJob", "FileSet" : "local", "Write BootStrap" : "/etc/foo/host.domain.com-foo.bsr", "Pool" : "storage-disk1" } You can access the options by simply calling Job["Name"]..etc. The ConfigParser is easy to use as well. You can create a text file that looks like this: [Job] Name=host.domain.com-foo Client=host.domain.com-fd JobDefs=DefaultJob FileSet=local Write BootStrap=/etc/foo/host.domain.com-foo.bsr Pool=storage-disk1 Just keep it simple like one of the above. A: ConfigParser module from the standard library is probably the most Pythonic and straightforward way to parse a configuration file that your python script is using. If you are restricted to using the particular format you have outlined, then using pyparsing is pretty good. A: I don't think a regex is adequate for parsing something like this. You could look at a true parser, such as pyparsing. Or if the file format is within your control, you might consider XML. There are standard Python libraries for parsing that.
Python Advice for a beginner. Regex, Dictionaries etc?
I'm writing my second python script to try and parse the contents of a config file and would like some noob advice. I'm not sure if it's best to use regex to parse my script since it spans multiple lines. I've also been reading about dictionaries and wondered if this would be good practice. I'm not necessarily looking for the code, just a push in the right direction. Example: My config file looks like this. Job { Name = "host.domain.com-foo" Client = host.domain.com-fd JobDefs = "DefaultJob" FileSet = "local" Write Bootstrap = "/etc/foo/host.domain.com-foo.bsr" Pool = storage-disk1 } Should I use regex, line splitting or maybe a module? If I had multiple jobs in my config file would I use a dictionary to correlate a job to a pool?
[ "If you can change the configuration file format, you can directly write your file as a Python file.\nconfig.py\njob = {\n 'Name' : \"host.domain.com-foo\",\n 'Client' : \"host.domain.com-fd\",\n 'JobDefs' : \"DefaultJob\",\n 'FileSet' : \"local\",\n 'Write Bootstrap' : \"/etc/foo/host.domain.com-foo.bsr\",\n 'Pool' : 'storage-disk1'\n}\n\nyourscript.py\nfrom config import job\n\nprint job['Name']\n\n", "There are numorous existing alternatives for this task, json, pickle and yaml to name 3. Unless you really want to implement this yourself, you should use one of these. Even if you do roll your own, following the format of one of the above is still a good idea. \nAlso, it's a much better idea to use a parser/generator or similar tool to do the parsing, regex's are going to be harder to maintain and more inefficient for this type of task.\n", "If your config file can be turned into a python file, just make it a dictionary and import the module.\nJob = { \"Name\" : \"host.domain.com-foo\",\n \"Client\" : \"host.domain.com-fd\",\n \"JobDefs\" : \"DefaultJob\",\n \"FileSet\" : \"local\",\n \"Write BootStrap\" : \"/etc/foo/host.domain.com-foo.bsr\",\n \"Pool\" : \"storage-disk1\" }\n\nYou can access the options by simply calling Job[\"Name\"]..etc.\nThe ConfigParser is easy to use as well. You can create a text file that looks like this:\n[Job]\nName=host.domain.com-foo\nClient=host.domain.com-fd\nJobDefs=DefaultJob\nFileSet=local\nWrite BootStrap=/etc/foo/host.domain.com-foo.bsr\nPool=storage-disk1\n\nJust keep it simple like one of the above.\n", "ConfigParser module from the standard library is probably the most Pythonic and staight-forward way to parse a configuration file that your python script is using.\nIf you are restricted to using the particular format you have outlined, then using pyparsing is pretty good.\n", "I don't think a regex is adequate for parsing something like this. You could look at a true parser, such as pyparsing. Or if the file format is within your control, you might consider XML. There are standard Python libraries for parsing that.\n" ]
[ 8, 5, 5, 4, 2 ]
[]
[]
[ "configuration_files", "dictionary", "python", "regex" ]
stackoverflow_0000968018_configuration_files_dictionary_python_regex.txt
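Since several answers point at ConfigParser without showing it, here is a minimal sketch against the INI-style file from the third answer (Python 2 module name; the file name is an assumption):

    import ConfigParser  # spelled configparser in Python 3

    config = ConfigParser.ConfigParser()
    config.read('jobs.cfg')

    for section in config.sections():    # one [Job] style section per job
        name = config.get(section, 'Name')
        pool = config.get(section, 'Pool')
        print '%s -> %s' % (name, pool)   # correlate each job to its pool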
Q: How do we precompile base templates in Cheetah so that #include, #extends and #import work properly in web.py How do you serve Cheetah in production? Can you share your setup for precompiling and serving Cheetah in production? Since we don't compile templates in web.py, we are getting upstream timeout errors. If you could share a good best practice it would help. * Jeremy wrote: For a production site, I use Cheetah with pre-compiled templates - it's very fast (the templates import especially quickly when python compiled and optimised). A bit of magic with the imp module takes a template name and a base directory (configured in a site-specific config) and loads up that template, taking care of #extends and import directives as appropriate. I don't use the built-in support for Cheetah, however. The new template library is also only imported to display the debugerror page * A: Maybe compile automagically on an as-needed basis: import sys import os from os import path import logging from Cheetah.Template import Template from Cheetah.Compiler import Compiler log = logging.getLogger(__name__) _import_save = __import__ def cheetah_import(name, *args, **kw): """Import function which searches for Cheetah templates. When a template ``*.tmpl`` is found in ``sys.path`` matching the module name (and the corresponding generated Python module is outdated or does not exist) it will be compiled prior to the actual import. """ name_parts = name.split('.') for p in sys.path: basename = path.join(p, *name_parts) tmpl_path = basename+'.tmpl' py_path = basename+'.py' if path.exists(tmpl_path): log.debug("%s found in %r", name, tmpl_path) if not path.exists(py_path) or newer(tmpl_path, py_path): log.info("cheetah compile %r -> %r", tmpl_path, py_path) output = Compiler( file=tmpl_path, moduleName=name, mainClassName=name_parts[-1], ) open(py_path, 'wb').write(str(output)) break return _import_save(name, *args, **kw) def newer(new, old): """Whether the file at path ``new`` is newer than the one at ``old``.""" return os.stat(new).st_mtime > os.stat(old).st_mtime import __builtin__ __builtin__.__import__ = cheetah_import A: This works try:web.render('mafbase.tmpl', None, True, 'mafbase') except:pass This is what I did with your code from cheetahimport import * sys.path.append('./templates') cheetah_import('mafbase') Includes don't work in the given method. This is the error I got localhost pop]$ vi code.py [mark@localhost pop]$ ./code.py 9911 http://0.0.0.0:9911/ Traceback (most recent call last): File "/home/mark/work/common/web/application.py", line 241, in process return self.handle() File "/home/mark/work/common/web/application.py", line 232, in handle return self._delegate(fn, self.fvars, args) File "/home/mark/work/common/web/application.py", line 411, in _delegate return handle_class(cls) File "/home/mark/work/common/web/application.py", line 386, in handle_class return tocall(*args) File "user.py", line 264, in proxyfunc return func(self, *args, **kw) File "/home/mark/work/pop/code.py", line 1801, in GET return web.render('subclass.html') File "/home/mark/work/common/web/cheetah.py", line 104, in render return str(compiled_tmpl) File "/usr/lib/python2.5/site-packages/Cheetah-2.0.1-py2.5-linux-i686.egg/Cheetah/Template.py", line 982, in __str__ def __str__(self): return getattr(self, mainMethName)() File "templates/mafbase.py", line 713, in respond self._handleCheetahInclude("widgetbox.html", trans=trans, includeFrom="file", raw=False) File "/usr/lib/python2.5/site-packages/Cheetah-2.0.1-py2.5-linux-i686.egg/Cheetah/Template.py", line 1512, in _handleCheetahInclude nestedTemplateClass = compiler.compile(source=source,file=file) File "/usr/lib/python2.5/site-packages/Cheetah-2.0.1-py2.5-linux-i686.egg/Cheetah/Template.py", line 693, in compile fileHash = str(hash(file))+str(os.path.getmtime(file)) File "/usr/lib/python2.5/posixpath.py", line 143, in getmtime return os.stat(filename).st_mtime OSError: [Errno 2] No such file or directory: '/home/mark/work/pop/widgetbox.html'
How do we precompile base templates in Cheetah so that #include, #extends and #import work properly in web.py
How do you serve Cheetah in production? Can you share your setup for precompiling and serving Cheetah in production? Since we don't compile templates in web.py, we are getting upstream timeout errors. If you could share a good best practice it would help. * Jeremy wrote: For a production site, I use Cheetah with pre-compiled templates - it's very fast (the templates import especially quickly when python compiled and optimised). A bit of magic with the imp module takes a template name and a base directory (configured in a site-specific config) and loads up that template, taking care of #extends and import directives as appropriate. I don't use the built-in support for Cheetah, however. The new template library is also only imported to display the debugerror page *
[ "Maybe compile automagically on as needed basis:\nimport sys\nimport os\nfrom os import path\nimport logging\nfrom Cheetah.Template import Template\nfrom Cheetah.Compiler import Compiler\n\nlog = logging.getLogger(__name__)\n\n_import_save = __import__\ndef cheetah_import(name, *args, **kw):\n \"\"\"Import function which search for Cheetah templates.\n\n When template ``*.tmpl`` is found in ``sys.path`` matching module\n name (and corresponding generated Python module is outdated or\n not existent) it will be compiled prior to actual import.\n \"\"\"\n name_parts = name.split('.')\n for p in sys.path:\n basename = path.join(p, *name_parts)\n tmpl_path = basename+'.tmpl'\n py_path = basename+'.py'\n if path.exists(tmpl_path):\n log.debug(\"%s found in %r\", name, tmpl_path)\n if not path.exists(py_path) or newer(tmpl_path, py_path):\n log.info(\"cheetah compile %r -> %r\", tmpl_path, py_path)\n output = Compiler(\n file=tmpl_path,\n moduleName=name,\n mainClassName=name_parts[-1],\n )\n open(py_path, 'wb').write(str(output))\n break\n return _import_save(name, *args, **kw)\n\ndef newer(new, old):\n \"\"\"Whether file with path ``new`` is newer then at ``old``.\"\"\"\n return os.stat(new).st_mtime > os.stat(old).st_mtime\n\nimport __builtin__\n__builtin__.__import__ = cheetah_import\n\n", "This works\ntry:web.render('mafbase.tmpl', None, True, 'mafbase')\nexcept:pass\n\nThis is what i did with you code\nfrom cheetahimport import *\nsys.path.append('./templates')\ncheetah_import('mafbase')\n\nincludes dont work in the given method.\nThis is the error i got\n localhost pop]$ vi code.py\n [mark@localhost pop]$ ./code.py 9911\n http://0.0.0.0:9911/\n Traceback (most recent call last):\n File \"/home/mark/work/common/web/application.py\", line 241, in process\n return self.handle()\n File \"/home/mark/work/common/web/application.py\", line 232, in handle\n return self._delegate(fn, self.fvars, args)\n File \"/home/mark/work/common/web/application.py\", line 411, in _delegate\n return handle_class(cls)\n File \"/home/mark/work/common/web/application.py\", line 386, in handle_class\n return tocall(*args)\n File \"user.py\", line 264, in proxyfunc\n return func(self, *args, **kw)\n File \"/home/mark/work/pop/code.py\", line 1801, in GET\n return web.render('subclass.html')\n File \"/home/mark/work/common/web/cheetah.py\", line 104, in render\n return str(compiled_tmpl)\n File \"/usr/lib/python2.5/site-packages/Cheetah-2.0.1-py2.5-linux-i686.egg/Cheetah/Template.py\", line 982, in __str__\n def __str__(self): return getattr(self, mainMethName)()\n File \"templates/mafbase.py\", line 713, in respond\n self._handleCheetahInclude(\"widgetbox.html\", trans=trans, includeFrom=\"file\", raw=False)\n File \"/usr/lib/python2.5/site-packages/Cheetah-2.0.1-py2.5-linux-i686.egg/Cheetah/Template.py\", line 1512, in _handleCheetahInclude\n nestedTemplateClass = compiler.compile(source=source,file=file)\n File \"/usr/lib/python2.5/site-packages/Cheetah-2.0.1-py2.5-linux-i686.egg/Cheetah/Template.py\", line 693, in compile\n fileHash = str(hash(file))+str(os.path.getmtime(file))\n File \"/usr/lib/python2.5/posixpath.py\", line 143, in getmtime\n return os.stat(filename).st_mtime\n OSError: [Errno 2] No such file or directory: '/home/mark/work/pop/widgetbox.html'\n\n" ]
[ 1, 0 ]
[]
[]
[ "inheritance", "python", "web.py" ]
stackoverflow_0000919539_inheritance_python_web.py.txt
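A batch variant of the compile step from the first answer, reusing the same Compiler call to precompile every template ahead of deployment (the templates directory name is an assumption):

    import glob
    import os.path
    from Cheetah.Compiler import Compiler

    # precompile templates/*.tmpl into importable templates/*.py modules
    for tmpl_path in glob.glob('templates/*.tmpl'):
        name = os.path.splitext(os.path.basename(tmpl_path))[0]
        py_path = os.path.splitext(tmpl_path)[0] + '.py'
        output = Compiler(file=tmpl_path, moduleName=name, mainClassName=name)
        open(py_path, 'wb').write(str(output))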
Q: Python regex for alphanumerics not working from django import forms class ActonForm(forms.Form): creator = forms.RegexField('^[a-zA-Z0-9\-' ]$',max_length=30, min_length=3) data = {'creator': 'hello' } f = ActonForm(data) print f.is_valid() Why doesn't this work? Have I made a wrong regular expression? I wanted a name field with provision for single quotes and a hyphen A: It kind of shows in the syntax highlighting. The apostrophe in the regex isn't escaped, it should be like this: forms.RegexField('^[a-zA-Z0-9\\-\' ]$',max_length=30, min_length=3) Edit: When escaping things in the regular expression, you need double backslashes. I doubled the backslash before the hyphen (not that it has to be escaped in this particular case.) Secondly, your regular expression only allows for a single character. You need to use a quantifier. + means one or more, * means 0 or more, {2,} means two or more, {3,6} means three to six. You probably want this: forms.RegexField('^[a-zA-Z0-9\\-\' ]+$',max_length=30, min_length=3) Do take care that the above regular expression will allow spaces at the start and end of the field as well. To avoid that you need a more complex regex.
Python regex for alphanumerics not working
from django import forms class ActonForm(forms.Form): creator = forms.RegexField('^[a-zA-Z0-9\-' ]$',max_length=30, min_length=3) data = {'creator': 'hello' } f = ActonForm(data) print f.is_valid() Why doesn't this work? Have I made a wrong regular expression? I wanted a name field with provision for single quotes and a hyphen
[ "It kind of shows in the syntax highlighting. The apostrophe in the regex isn't escaped, it should be like this:\nforms.RegexField('^[a-zA-Z0-9\\\\-\\' ]$',max_length=30, min_length=3)\n\nEdit: When escaping things in the regular expression, you need double backslashes. I doubled the backslash before the hyphen (not that it has to be escaped in this particular case.)\nSecondly, your regular expression only allows for a single character. You need to use a quantifier. + means one or more, * means 0 or more, {2,} means two or more, {3,6} means three to six. You probably want this:\nforms.RegexField('^[a-zA-Z0-9\\\\-\\' ]+$',max_length=30, min_length=3)\n\nDo take care that the above regular expression will allow spaces in the start and end of the field as well. To avoid that you need a more complex regex.\n" ]
[ 1 ]
[]
[]
[ "google_app_engine", "python", "regex" ]
stackoverflow_0000968553_google_app_engine_python_regex.txt
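The corrected pattern is easy to sanity-check with the plain re module before wiring it into the form (the test names are arbitrary):

    import re

    pattern = re.compile('^[a-zA-Z0-9\\- \']+$')

    for name in ["hello", "O'Brien", "Anne-Marie", "bad!name"]:
        print name, bool(pattern.match(name))
    # hello True / O'Brien True / Anne-Marie True / bad!name False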
Q: fetching row numbers in a database-independent way - django Let's say that I have a 'Scores' table with fields 'User','ScoreA', 'ScoreB', 'ScoreC'. In a leaderboard view I fetch and order a queryset by any one of these score fields that the visitor selects. The template paginates the queryset. The table is updated by a job at regular intervals (a django command triggered by cron). I want to add a 'rank' field to the queryset so that I will have 'rank', 'User', 'ScoreA', 'ScoreB', 'ScoreC'. Moreover I want to remain database-independent (Postgres is an option and for the time being it does not support row_number). A solution may be that I can modify the job, so that it also computes and writes three different ranks in three new fields ('rankA', 'rankB', 'rankC'). I hope there is a (much) better solution? A: Why can't you compute the rank in the template? {% for row in results_to_display %} <tr><td>{{forloop.counter}}</td><td>{{row.ScoreA}}</td>... {% endfor %} Or, you can compute the rank in the view function. def fetch_ranked_scores( request ): query = Score.objects.filter( ... ).order_by( 'ScoreA' ) scores = [ (r, s.ScoreA) for r, s in enumerate(query) ] return render_to_response( template, { 'results_to_display':scores } ) Or, you can compute the ranking in the model. class Score( models.Model ): ScoreA = models.IntegerField( ... ) def ranked_by_a( self ): return enumerate( Score.objects.filter(...).order_by( 'ScoreA' ) ) I think there are many, many ways to do this.
fetching row numbers in a database-independent way - django
Let's say that I have a 'Scores' table with fields 'User','ScoreA', 'ScoreB', 'ScoreC'. In a leaderboard view I fetch and order a queryset by any one of these score fields that the visitor selects. The template paginates the queryset. The table is updated by a job at regular intervals (a django command triggered by cron). I want to add a 'rank' field to the queryset so that I will have 'rank', 'User', 'ScoreA', 'ScoreB', 'ScoreC'. Moreover I want to remain database-independent (Postgres is an option and for the time being it does not support row_number). A solution may be that I can modify the job, so that it also computes and writes three different ranks in three new fields ('rankA', 'rankB', 'rankC'). I hope there is a (much) better solution?
[ "Why can't you compute the rank in the template?\n{% for row in results_to_display %}\n <tr><td>{{forloop.counter}}</td><td>{{row.scorea}}</td>...\n{% endfor %}\n\nOr, you can compute the rank in the view function.\ndef fetch_ranked_scores( request ):\n query = Score.objects.filter( ... ).orderby( scorea )\n scores = [ r, s.scorea for r, s in enumerate(query) ]\n return render_to_response ( template, { 'results_to_display':scores } )\n\nOr, you can compute the ranking in the model.\n class Score( models.Model ):\n ScoreA = models.IntegerField( ... )\n def ranked_by_a( self ):\n return enumerate( self.objects.filter(...).orderby( scorea ) )\n\nI think there are many, many ways to do this.\n" ]
[ 4 ]
[]
[]
[ "django", "django_orm", "python" ]
stackoverflow_0000969074_django_django_orm_python.txt
Q: Installed apps in Django - what about versions? After looking at the reusable apps chapter of Practical Django Projects and listening to the DjangoCon (Pycon?) lecture, there seems to be an emphasis on making your apps pluggable by installing them into the Python path, namely site-packages. What I don't understand is what happens when the version of one of those installed apps changes. If I update one of the apps that's installed to site-packages, then won't that break all my current projects that use it? I never noticed anything in settings.py that lets you specify the version of the app you're importing. I think in Ruby/Rails, they're able to freeze gems for this sort of situation. But what are we supposed to do in Python/Django? A: Having multiple versions of the same package gets messy (setuptools can do it, though). I've found it cleaner to put each project in its own virtualenv. We use virtualenvwrapper to manage the virtualenvs easily, and the --no-site-packages option to make every project really self-contained and portable across machines. This is the recommended setup for mod_wsgi servers. A: You definitely don't want to put your Django apps into site-packages if you have more than one Django site. The best way, as Ken Arnold answered, is to use Ian Bicking's virtualenv (Virtual Python Environment Builder). This is especially true if you have to run multiple versions of Django. However, if you can run a single version of Python and Django then it might be a little easier to just install the apps into your project directory. This way if an external app gets updated you can upgrade each of your projects one at a time as you see fit. This is the structure Pinax used for external Django apps at one time, but I think it's using virtualenv + pip (instead of setuptools/distutils) now. A: What we do. We put only "3rd-party" stuff in site-packages. Django, XLRD, PIL, etc. We keep our overall project structured as a collection of packages and Django projects. Each project is a portion of the overall site. We have two separate behaviors for port 80 and port 443 (SSL). OverallProject/ aPackage/ anotherPackage/ djangoProject80/ settings.py logging.ini app_a_1/ models.py # app a, version 1 schema app_a_2/ models.py # app a, version 2 schema app_b_2/ models.py app_c_1/ models.py djangoProject443/ test/ tool/ We use a version number as part of the app name. This is the major version number, and is tied to the schema, since "uses-the-same-schema" is one definition of major release compatibility. You have to migrate the data and prove that things work in the new version. Then you can delete the old version and remove the schema from the database. Migrating the data is challenging because you can't run both apps side-by-side. Most applications have just one current version installed.
Installed apps in Django - what about versions?
After looking at the reusable apps chapter of Practical Django Projects and listening to the DjangoCon (Pycon?) lecture, there seems to be an emphasis on making your apps pluggable by installing them into the Python path, namely site-packages. What I don't understand is what happens when the version of one of those installed apps changes. If I update one of the apps that's installed to site-packages, then won't that break all my current projects that use it? I never noticed anything in settings.py that lets you specify the version of the app you're importing. I think in Ruby/Rails, they're able to freeze gems for this sort of situation. But what are we supposed to do in Python/Django?
[ "Having multiple versions of the same package gets messy (setuptools can do it, though).\nI've found it cleaner to put each project in its own virtualenv. We use virtualevwrapper to manage the virtualenvs easily, and the --no-site-packages option to make every project really self-contained and portable across machines.\nThis is the recommended setup for mod_wsgi servers.\n", "You definitely don't want to put your Django apps into site-packages if you have more than one Django site.\nThe best way, as Ken Arnold answered, is to use Ian Bicking's virtualenv (Virtual Python Environment Builder). This is especially true if you have to run multiple versions of Django.\nHowever, if you can run a single version of Python and Django then it might be a little easier to just install the apps into your project directory. This way if an external app gets updated you can upgrade each of your projects one at a time as you see fit. This is the structure Pinax used for external Django apps at one time, but I think it's using virtualenv + pip (instead of setuptools/distutils) now.\n", "What we do.\nWe put only \"3rd-party\" stuff in site-packages. Django, XLRD, PIL, etc.\nWe keep our overall project structured as a collection of packages and Django projects. Each project is a portion of the overall site. We have two separate behaviors for port 80 and port 443 (SSL).\nOverallProject/\n\n aPackage/\n anotherPackage/\n\n djangoProject80/\n settings.py\n logging.ini\n app_a_1/\n models.py # app a, version 1 schema\n app_a_2/\n models.py # app a, version 2 schema\n app_b_2/\n models.py\n app_c_1/\n models.py\n\n djangoProject443/\n\n test/\n tool/\n\nWe use a version number as part of the app name. This is the major version number, and is tied to the schema, since \"uses-the-same-schema\" is one definition of major release compatibility.\nYou have to migrated the data and prove that things work in the new version. Then you can delete the old version and remove the schema from the database. Migrating the data is challenging because you can't run both apps side-by-side.\nMost applications have just one current version installed.\n" ]
[ 5, 0, 0 ]
[]
[]
[ "django", "python", "version_control" ]
stackoverflow_0000967855_django_python_version_control.txt
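If you do keep several versions of one app in site-packages, setuptools can pin a version at import time; a hedged sketch (the app name and version are made up, and the package must have been installed with easy_install --multi-version):

    import pkg_resources
    # activate one specific installed version before the import
    pkg_resources.require('django-tagging==0.2.1')  # hypothetical app/version
    import tagging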
Q: Calculate the center of a contour/Area I'm working on an image-processing chain that separates a single object by color and contour and then calculates the y-position of this object. How do I calculate the center of a contour or area with OpenCV? OpenCV links: http://opencv.willowgarage.com/wiki/ http://en.wikipedia.org/wiki/OpenCV A: You can get the center of mass in the y direction by first calculating the Moments. Then the center of mass is given by yc = M01 / M00, where M01 and M00 are fields in the structure returned by the Moments call. If you just want the center of the bounding rectangle, that is also easy to do with BoundingRect. This returns you a CvRect and you can just take half of the height. Let me know if this isn't precise enough, I have sample code somewhere I can dig up for you.
Calculate the center of a contour/Area
I'm working on an image-processing chain that separates a single object by color and contour and then calculates the y-position of this object. How do I calculate the center of a contour or area with OpenCV? OpenCV links: http://opencv.willowgarage.com/wiki/ http://en.wikipedia.org/wiki/OpenCV
[ "You can get the center of mass in the y direction by first calculating the Moments. Then the center of mass is given by yc = M01 / M00, where M01 and M00 are fields in the structure returned by the Moments call.\nIf you just want the center of the bounding rectangle, that is also easy to do with BoundingRect. This returns you a CvRect and you can just take half of the height.\nLet me know if this isn't precise enough, I have sample code somewhere I can dig up for you.\n" ]
[ 10 ]
[ "I don't exactly know what OpenCV is, but I would suggest this:\nThe Selected cluster of pixels has a maximum width at one point - w - so lets say the area has w vertical columns of pixels. Now I would weight the columns according to how many pixels the column contains, and use these column-wights to determine the Horizontal Center Point.\nThe Same Algorithm could also work for the X Center.\n" ]
[ -2 ]
[ "contour", "image_processing", "opencv", "python" ]
stackoverflow_0000968332_contour_image_processing_opencv_python.txt
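With the newer cv2 bindings (which postdate this question), the moments computation looks like this; the thresholding step is just one assumed way of segmenting the object:

    import cv2

    img = cv2.imread('object.png', 0)  # grayscale placeholder image
    _, mask = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
    # note: some OpenCV versions return (image, contours, hierarchy) here
    contours, hierarchy = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)

    m = cv2.moments(contours[0])
    yc = m['m01'] / m['m00']  # center of mass in y, i.e. M01 / M00
    xc = m['m10'] / m['m00']
    print xc, yc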
Q: breakpoint in eclipse for appengine I have pydev on eclipse and would like to debug handlers. I put a breakpoint on a handler and start the project in debug mode. When I click on the hyperlink corresponding to the handler, the control does not come back to the breakpoint. Am I missing something here? Also, the launch is for a Google App Engine application in Python. A: I'm using eclipse with PyDev with appengine and I debug all the time, it's completely possible! What you have to do is start the program in debug, but you have to start the dev_appserver in debug, not the handler directly. The main module you have to debug is: <path_to_gae>/dev_appserver.py With program arguments: --datastore_path=/tmp/myapp_datastore <your_app> I hope it helps. A: The simplest way to debug is to use the builtin python module pdb and debug from the shell. Just set the trace in the handler you want to debug. import pdb pdb.set_trace() How do you run the server, from within eclipse or from the shell? If it is from the shell, then how does eclipse know you are even running the application? You could use a user-friendly version of pdb, ipdb, that also includes user-friendly options like auto complete.
breakpoint in eclipse for appengine
I have pydev on eclipse and would like to debug handlers. I put a breakpoint on a handler and start the project in debug mode. When I click on the hyperlink corresponding to the handler, the control does not come back to the breakpoint. Am I missing something here? Also, the launch is for a Google App Engine application in Python.
[ "I'm using eclipse with PyDev with appengine and I debug all the time, it's completely possible !\nWhat you have to do is start the program in debug, but you have to start the dev_appserver in debug, not the handler directly. The main module you have to debug is:\n<path_to_gae>/dev_appserver.py\n\nWith program arguments:\n--datastore_path=/tmp/myapp_datastore <your_app>\n\nI hope it help\n", "The simplest way to debug is to use the builtin python module pdb and debug from the shell.\nJust set the trace in the handler you want to debug.\nimport pdb\npdb.set_trace()\n\nHow do U run the server, from within the eclipse or from the shell. If it is from the shell, then how does eclipse know you are even running the application;\nYou could use an user friendly version of pdb, ipdb that also includes user friendly options like auto complete.\n" ]
[ 4, 0 ]
[]
[]
[ "debugging", "eclipse", "google_app_engine", "pydev", "python" ]
stackoverflow_0000968701_debugging_eclipse_google_app_engine_pydev_python.txt
Q: Can I automatically change my PYTHONPATH when activating/deactivating a virtualenv? I would like to have a different PYTHONPATH from my usual in a particular virtualenv. How do I set this up automatically? I realize that it's possible to hack the bin/activate file, is there a better/more standard way? A: This django-users post is probably going to help you a lot. It suggests using virtualenvwrapper to wrap virtualenv, to use the add2virtualenv command. Using this, when the environment is active, you can just call: add2virtualenv directory1 directory2 ... to add the directories to your pythonpath for the current environment. It handles autonomously the PATH changes on environment switches. No black magic required. Et voila! A: Here is the hacked version of bin/activate for reference. It sets the PYTHONPATH correctly, but unsetting does not work. # This file must be used with "source bin/activate" *from bash* # you cannot run it directly deactivate () { if [ -n "$_OLD_VIRTUAL_PATH" ] ; then PATH="$_OLD_VIRTUAL_PATH" export PATH unset _OLD_VIRTUAL_PATH fi # This should detect bash and zsh, which have a hash command that must # be called to get it to forget past commands. Without forgetting # past commands the $PATH changes we made may not be respected if [ -n "$BASH" -o -n "$ZSH_VERSION" ] ; then hash -r fi if [ -n "$_OLD_VIRTUAL_PS1" ] ; then PS1="$_OLD_VIRTUAL_PS1" export PS1 unset _OLD_VIRTUAL_PS1 fi if [ -n "$_OLD_PYTHONPATH" ] ; then PYTHONPATH="$_OLD_PYTHONPATH" export PYTHONPATH unset _OLD_PYTHONPATH fi unset VIRTUAL_ENV if [ ! "$1" = "nondestructive" ] ; then # Self destruct! unset deactivate fi } # unset irrelavent variables deactivate nondestructive VIRTUAL_ENV="env_location" # Anonymized export VIRTUAL_ENV _OLD_VIRTUAL_PATH="$PATH" PATH="$VIRTUAL_ENV/bin:$PATH" export PATH _OLD_VIRTUAL_PS1="$PS1" if [ "`basename \"$VIRTUAL_ENV\"`" = "__" ] ; then # special case for Aspen magic directories # see http://www.zetadev.com/software/aspen/ PS1="[`basename \`dirname \"$VIRTUAL_ENV\"\``] $PS1" else PS1="(`basename \"$VIRTUAL_ENV\"`)$PS1" fi export PS1 # This should detect bash and zsh, which have a hash command that must # be called to get it to forget past commands. Without forgetting # past commands the $PATH changes we made may not be respected if [ -n "$BASH" -o -n "$ZSH_VERSION" ] ; then hash -r fi _OLD_PYTHONPATH="$PYTHONPATH" PYTHONPATH="new_pythonpath" #Anonymized export PYTHONPATH
Can I automatically change my PYTHONPATH when activating/deactivating a virtualenv?
I would like to have a different PYTHONPATH from my usual in a particular virtualenv. How do I set this up automatically? I realize that it's possible to hack the bin/activate file, is there a better/more standard way?
[ "This django-users post is probably going to help you a lot. It suggests using virtualenvwrapper to wrap virtualenv, to use the add2virtualenv command. Using this, when the environment is active, you can just call:\nadd2virtualenv directory1 directory2 ...\n\nto add the directories to your pythonpath for the current environment. \nIt handles autonomously the PATH changes on environment switches. No black magic required. Et voila!\n", "Here is the hacked version of bin/activate for reference. It sets the PYTHONPATH correctly, but unsetting does not work. \n\n# This file must be used with \"source bin/activate\" *from bash*\n# you cannot run it directly\n\ndeactivate () {\n if [ -n \"$_OLD_VIRTUAL_PATH\" ] ; then\n PATH=\"$_OLD_VIRTUAL_PATH\"\n export PATH\n unset _OLD_VIRTUAL_PATH\n fi \n\n # This should detect bash and zsh, which have a hash command that must\n # be called to get it to forget past commands. Without forgetting\n # past commands the $PATH changes we made may not be respected\n if [ -n \"$BASH\" -o -n \"$ZSH_VERSION\" ] ; then\n hash -r\n fi \n\n if [ -n \"$_OLD_VIRTUAL_PS1\" ] ; then\n PS1=\"$_OLD_VIRTUAL_PS1\"\n export PS1 \n unset _OLD_VIRTUAL_PS1\n fi \n\n if [ -n \"$_OLD_PYTHONPATH\" ] ; then\n PYTHONPATH=\"$_OLD_PYTHONPATH\"\n export PYTHONPATH \n unset _OLD_PYTHONPATH\n fi \n\n unset VIRTUAL_ENV\n if [ ! \"$1\" = \"nondestructive\" ] ; then\n # Self destruct!\n unset deactivate\n fi \n}\n\n# unset irrelavent variables\ndeactivate nondestructive\nVIRTUAL_ENV=\"env_location\" # Anonymized\nexport VIRTUAL_ENV\n\n_OLD_VIRTUAL_PATH=\"$PATH\"\nPATH=\"$VIRTUAL_ENV/bin:$PATH\"\nexport PATH\n\n_OLD_VIRTUAL_PS1=\"$PS1\"\nif [ \"`basename \\\"$VIRTUAL_ENV\\\"`\" = \"__\" ] ; then\n # special case for Aspen magic directories\n # see http://www.zetadev.com/software/aspen/\n PS1=\"[`basename \\`dirname \\\"$VIRTUAL_ENV\\\"\\``] $PS1\"\nelse\n PS1=\"(`basename \\\"$VIRTUAL_ENV\\\"`)$PS1\"\nfi\nexport PS1\n\n# This should detect bash and zsh, which have a hash command that must\n# be called to get it to forget past commands. Without forgetting\n# past commands the $PATH changes we made may not be respected\nif [ -n \"$BASH\" -o -n \"$ZSH_VERSION\" ] ; then\n hash -r\nfi\n\n_OLD_PYTHONPATH=\"$PYTHONPATH\"\nPYTHONPATH=\"new_pythonpath\" #Anonymized\nexport PYTHONPATH\n\n" ]
[ 19, 2 ]
[]
[]
[ "python", "virtualenv" ]
stackoverflow_0000969553_python_virtualenv.txt
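add2virtualenv works by dropping a .pth file into the env's site-packages, and the same trick is easy to do by hand without touching bin/activate (the .pth file name and the python2.5 path segment are assumptions):

    import os

    # run inside an activated virtualenv
    site_packages = os.path.join(
        os.environ['VIRTUAL_ENV'], 'lib', 'python2.5', 'site-packages')
    pth_file = os.path.join(site_packages, 'extra_paths.pth')

    # site.py appends every line of a .pth file to sys.path at startup
    open(pth_file, 'w').write('/home/me/mylibs\n')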
Q: Binary file IO in python, where to start? As a self-taught python hobbyist, how would I go about learning to import and export binary files using standard formats? I'd like to implement a script that takes ePub ebooks (XHTML + CSS in a zip) and converts it to a mobipocket (Palmdoc) format in order to allow the Amazon Kindle to read it (as part of a larger project that I'm working on). There is already an awesome open-source project for managing ebook libraries : Calibre. I wanted to try implementing this on my own as a learning/self-teaching exercise. I started looking at their python source code and realized that I have no idea what is going on. Of course, the big danger in being self-taught at anything is not knowing what you don't know. In this case, I know that I don't know much about these binary files and how to work with them in python code (struct?). But I think I'm probably missing a lot of knowledge about binary files in general and I'd like some help understanding how to work with them. Here is a detailed overview of the mobi/palmdoc headers. Thanks! Edit: No question, good point! Do you have any tips on how to gain a basic knowledge of working with binary files? Python-specific would be helpful but other approaches could also be useful. TOM:Edited as question, added intro / better title A: You should probably start with the struct module, as you pointed to in your question, and of course, open the file as a binary. Basically you just start at the beginning of the file and pick it apart piece by piece. It's a hassle, but not a huge problem. If the files are compressed or encrypted, things can get more difficult. It's helpful if you start with a file that you know the contents of so you're not guessing all the time. Try it a bit, and maybe you'll evolve more specific questions. A: If you want to construct and analyse binary files the struct module will give you the basic tools, but it isn't very friendly, especially if you want to look at things that aren't a whole number of bytes. There are a few modules that can help, such as BitVector, bitarray and bitstring. (I favour bitstring, but I wrote it and so may be biased). For parsing binary formats the hachoir module is very good, but I suspect it's too high-level for your current needs. A: For teaching yourself python tools that work with binary files, this will get you going. Fun too. Exercises with binaries, zips, images... lots more.
Binary file IO in python, where to start?
As a self-taught python hobbyist, how would I go about learning to import and export binary files using standard formats? I'd like to implement a script that takes ePub ebooks (XHTML + CSS in a zip) and converts it to a mobipocket (Palmdoc) format in order to allow the Amazon Kindle to read it (as part of a larger project that I'm working on). There is already an awesome open-source project for managing ebook libraries : Calibre. I wanted to try implementing this on my own as a learning/self-teaching exercise. I started looking at their python source code and realized that I have no idea what is going on. Of course, the big danger in being self-taught at anything is not knowing what you don't know. In this case, I know that I don't know much about these binary files and how to work with them in python code (struct?). But I think I'm probably missing a lot of knowledge about binary files in general and I'd like some help understanding how to work with them. Here is a detailed overview of the mobi/palmdoc headers. Thanks! Edit: No question, good point! Do you have any tips on how to gain a basic knowledge of working with binary files? Python-specific would be helpful but other approaches could also be useful. TOM:Edited as question, added intro / better title
[ "You should probably start with the struct module, as you pointed to in your question, and of course, open the file as a binary.\nBasically you just start at the beginning of the file and pick it apart piece by piece. It's a hassle, but not a huge problem. If the files are compressed or encrypted, things can get more difficult. It's helpful if you start with a file that you know the contents of so you're not guessing all the time.\nTry it a bit, and maybe you'll evolve more specific questions. \n", "If you want to construct and analyse binary files the struct module will give you the basic tools, but it isn't very friendly, especially if you want to look at things that aren't a whole number of bytes.\nThere are a few modules that can help, such as BitVector, bitarray and bitstring. (I favour bitstring, but I wrote it and so may be biased). \nFor parsing binary formats the hachoir module is very good, but I suspect it's too high-level for your current needs.\n", "For teaching yourself python tools that work with binary files, \nthis will get you going. Fun too. Exercises with binaries, zips, images... lots more.\n" ]
[ 10, 2, 0 ]
[]
[]
[ "binary", "epub", "io", "mobipocket", "python" ]
stackoverflow_0000967652_binary_epub_io_mobipocket_python.txt
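A worked example for the record above: a minimal sketch of pulling apart the fixed-size PDB header that prefixes a mobi/PalmDoc file with the struct module. The 78-byte layout (name, flags, timestamps, type/creator codes, record count) follows the standard PDB spec; double-check the offsets against the mobi/palmdoc header overview linked in the question before relying on them.

    import struct

    def read_pdb_header(path):
        # 78-byte big-endian header: 32-byte name, two 16-bit words,
        # six 32-bit words, two 4-char codes, two 32-bit words, and
        # a 16-bit record count.
        f = open(path, 'rb')
        try:
            raw = f.read(78)
        finally:
            f.close()
        fields = struct.unpack('>32s2H6I4s4s2IH', raw)
        return {'name': fields[0].rstrip('\0'),
                'type': fields[9],        # e.g. 'BOOK' for PalmDoc/mobi
                'creator': fields[10],    # e.g. 'MOBI'
                'num_records': fields[13]}

Each record offset then follows as num_records 8-byte entries (a 32-bit offset plus attribute bytes), which is the same unpack-in-a-loop pattern.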
Q: Subclassing list I want to create a DataSet class which is basically a list of samples. But I need to override each insertion operation to the DataSet. Is there any simple way to do this without writing my own append, extend, iadd etc.? UPDATE: I want to add a backpointer to each sample, holding the index of the sample in the DataSet. This is needed for the processing algorithm I use. I have a solution, but it seems inelegant - a renumber() function -- it ensures that the backpointers are valid. A: I don't know of a way of doing what you're asking -- overriding mutators without overriding them. With a class decorator, however, you can "automate" the overriding versions (assuming each can be achieved by wrapping the corresponding method in the base class), so it's not too bad... Suppose for example that what you want to do is add a "modified" flag, true if the data may have been changed since the last call to .save (a method of yours which persists the data and sets self.modified to False). Then...: def wrapMethod(cls, n): f = getattr(cls, n) def wrap(self, *a): self.dirty = True return f(self, *a) return wrap def wrapListMutators(cls): for n in '''__setitem__ __delitem__ __iadd__ __imul__ append extend insert pop remove reverse sort'''.split(): f = wrapMethod(cls, n) setattr(cls, n, f) return cls @wrapListMutators class DataSet(list): dirty = False def save(self): self.dirty = False This syntax requires Python 2.6 or better, but, in earlier Python versions (ones which only support decorators on def statements, not on class statements; or even very old ones that don't support decorators at all), you just need to change the very last part (the class statement), to: class DataSet(list): dirty = False def save(self): self.dirty = False DataSet = wrapListMutators(DataSet) IOW, the neat decorator syntax is just a small amount of syntax sugar on top of a normal function call which takes the class as the argument and reassigns it. Edit: now that you have edited your question to clarify your exact requirements -- maintain on each item a field bp such that, for all i, theset[i].bp == i -- it's easier to weigh the pro and con of various approaches. You could adapt the approach I sketched, but instead of self.dirty assignment before the call to the wrapped method, have a self.renumber() call after it, i.e.: def wrapMethod(cls, n): f = getattr(cls, n) def wrap(self, *a): temp = f(self, *a) self.renumber() return temp return wrap this meets your stated requirements, but in many cases it will do far more work than necessary: for example, when you append an item, this needlessly "renumbers" all existing ones (to the same values they already had). But how could any fully automated approach "know" which items, if any, it must recompute the .bp of, without O(N) effort? At least it must look at each and every one of them (since you don't want to separately code, e.g., append vs insert &c), and that's already O(N). So this will be acceptable only if it's OK for every single change to the list to be O(N) (basically only if the list always stays small and/or doesn't change often). A more fruitful idea might be to not maintain .bp values all the time, but only "just in time" when needed. Make bp a (read-only) property, calling a method which checks if the container is "dirty" (where the "dirty" flag in the container is maintained using the automated code I've already given) and only then renumbers the container (and sets its "dirty" attribute to False). 
This will work well when the list typically is subject to a burst of changes and only then do you need to access the items' bp for a while, then another bunch of changes, etc. Such bursty alternation between changing and reading is not rare in real-world containers, but only you can know whether it applies in your specific case! To get performance beyond this I think you need to do some manual coding on top of this general approach to take advantage of frequent special cases. For example, append may be called very often, and the amount of work to do in a special-cased append is really small, so it may well be worth your while to write those two or three lines of code (not setting the dirty bit for that case). One caveat: no approach will work (indeed your requirement becomes self-contradictory) if any item is present twice in the list -- which of course is perfectly possible unless you take precautions to avoid it (you could easily diagnose it in renumber -- by keeping a set of elements already seen and raising an exception on any duplication -- if that's not too late for you; it's harder to diagnose "on the fly", i.e. at the time of a mutation that causes a duplicate, if that's what you require). Maybe you can relax your requirement so that, if an item is present twice, that's OK and the bp can just indicate one of the indices; or make bp into a set of indices where the element is present (that would also offer a smooth approach to the case of getting bp from an element that's not in the list). Etc, etc; I recommend you consider (and document!) all of these corner cases in depth -- correctness before performance!
Subclassing list
I want to create a DataSet class which is basically a list of samples. But I need to override each insertion operation to the DataSet. Is there any simple way to do this without writing my own append, extend, iadd etc.? UPDATE: I want to add a backpointer to each sample, holding the index of the sample in the DataSet. This is needed for the processing algorithm I use. I have a solution, but it seems inelegant - a renumber() function -- it ensures that the backpointers are valid.
[ "I don't know of a way of doing what you're asking -- overriding mutators without overriding them. With a class decorator, however, you can \"automate\" the overriding versions (assuming each can be achieved by wrapping the corresponding method in the base class), so it's not too bad...\nSuppose for example that what you want to do is add a \"modified\" flag, true if the data may have been changed since the last call to .save (a method of yours which persists the data and sets self.modified to False).\nThen...:\ndef wrapMethod(cls, n):\n f = getattr(cls, n)\n def wrap(self, *a):\n self.dirty = True\n return f(self, *a)\n return wrap\n\ndef wrapListMutators(cls):\n for n in '''__setitem__ __delitem__ __iadd__ __imul__\n append extend insert pop remove reverse sort'''.split():\n f = wrapMethod(cls, n)\n setattr(cls, n, f)\n return cls\n\n@wrapListMutators\nclass DataSet(list):\n dirty = False\n def save(self): self.dirty = False\n\nThis syntax requires Python 2.6 or better, but, in earlier Python versions (ones which only support decorators on def statements, not on class statements; or even very old ones that don't support decorators at all), you just need to change the very last part (the class statement), to:\nclass DataSet(list):\n dirty = False\n def save(self): self.dirty = False\nDataSet = wrapListMutators(DataSet)\n\nIOW, the neat decorator syntax is just a small amount of syntax sugar on top of a normal function call which takes the class as the argument and reassigns it.\nEdit: now that you have edited your question to clarify your exact requirements -- maintain on each item a field bp such that, for all i, theset[i].bp == i -- it's easier to weigh the pro and con of various approaches.\nYou could adapt the approach I sketched, but instead of self.dirty assignment before the call to the wrapped method, have a self.renumber() call after it, i.e.:\ndef wrapMethod(cls, n):\n f = getattr(cls, n)\n def wrap(self, *a):\n temp = f(self, *a)\n self.renumber()\n return temp\n return wrap\n\nthis meets your stated requirements, but in many cases it will do far more work than necessary: for example, when you append an item, this needlessly \"renumbers\" all existing ones (to the same values they already had). But how could any fully automated approach \"know\" which items, if any, it must recompute the .bp of, without O(N) effort? At least it must look at each and every one of them (since you don't want to separately code, e.g., append vs insert &c), and that's already O(N).\nSo this will be acceptable only if it's OK for every single change to the list to be O(N) (basically only if the list always stays small and/or doesn't change often).\nA more fruitful idea might be to not maintain .bp values all the time, but only \"just in time\" when needed. Make bp a (read-only) property, calling a method which checks if the container is \"dirty\" (where the \"dirty\" flag in the container is maintained using the automated code I've already given) and only then renumbers the container (and sets its \"dirty\" attribute to False).\nThis will work well when the list typically is subject to a burst of changes and only then do you need to access the items' bp for a while, then another bunch of changes, etc. 
Such bursty alternation between changing and reading is not rare in real-world containers, but only you can know whether it applies in your specific case!\nTo get performance beyond this I think you need to do some manual coding on top of this general approach to take advantage of frequent special cases. For example, append may be called very often, and the amount of work to do in a special-cased append is really small, so it may well be worth your while to write those two or three lines of code (not setting the dirty bit for that case).\nOne caveat: no approach will work (indeed your requirement becomes self-contradictory) if any item is present twice in the list -- which of course is perfectly possible unless you take precautions to avoid it (you could easily diagnose it in renumber -- by keeping a set of elements already seen and raising an exception on any duplication -- if that's not too late for you; it's harder to diagnose \"on the fly\", i.e. at the time of a mutation that causes a duplicate, if that's what you require). Maybe you can relax your requirement so that, if an item is present twice, that's OK and the bp can just indicate one of the indices; or make bp into a set of indices where the element is present (that would also offer a smooth approach to the case of getting bp from an element that's not in the list). Etc, etc; I recommend you consider (and document!) all of these corner cases in depth -- correctness before performance!\n" ]
[ 5 ]
[]
[]
[ "python" ]
stackoverflow_0000970425_python.txt
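To make the accepted answer's "renumber just in time" idea concrete, here is a minimal sketch that combines it with the wrapListMutators decorator defined above; Sample and its bp property are hypothetical names, not part of any library.

    class Sample(object):
        def __init__(self, value):
            self.value = value
            self._bp = None
            self._dataset = None

        @property
        def bp(self):
            # renumber lazily, only if the container has changed
            if self._dataset is not None and self._dataset.dirty:
                self._dataset.renumber()
            return self._bp

    @wrapListMutators        # sets self.dirty = True on every mutation
    class DataSet(list):
        dirty = False
        def renumber(self):
            for i, item in enumerate(self):
                item._dataset = self
                item._bp = i
            self.dirty = False

A burst of appends then costs O(1) each, and the single O(N) renumber is paid on the first bp read afterwards.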
Q: What is the most state-of-the-art, pure python, XML parser available? Considering that I want to write python code that would run on Google App Engine and also inside jython, C-extensions are not an option. Amara was a nice library, but due to its C-extensions, I can't use it for either of these platforms. A: ElementTree is very nice. It's also part of the standard library as of Python 2.5. A: There's also Beautiful Soup (which may be geared more toward HTML, but it also does XML). A: xml.sax is a builtin SAX parser. A: I would normally recommend lxml, but since that uses a C-library (libxml) the alternative would have to be, as Aaron has already suggested, ElementTree (as far as I know there is both a pure python and a c implementation of it available). Found this via google search Good luck!
What is the most state-of-the-art, pure python, XML parser available?
Considering that I want to write python code that would run on Google App Engine and also inside jython, C-extensions are not an option. Amara was a nice library, but due to its C-extensions, I can't use it for either of these platforms.
[ "ElementTree is very nice. It's also part of the standard library as of Python 2.5.\n", "There's also Beautiful Soup (which may be geared more toward HTML, but it also does XML).\n", "xml.sax is a builtin SAX parser.\n", "I would normally recommend lxml, but since that uses a C-library (libxml) the alternative would have to be, as Aaron has already suggested, ElementTree (as far as I know there is both a pure python and a c implementation of it available).\nFound this via google search\nGood luck!\n" ]
[ 8, 4, 1, 1 ]
[]
[]
[ "google_app_engine", "jython", "python", "xml" ]
stackoverflow_0000970531_google_app_engine_jython_python_xml.txt
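For the record above, the pure-python usage really is just a few lines; xml.etree ships with Python 2.5+, and cElementTree is a drop-in speedup wherever C extensions are allowed:

    from xml.etree import ElementTree

    doc = ElementTree.fromstring(
        '<feed><entry id="1">hello</entry><entry id="2">world</entry></feed>')
    for entry in doc.findall('entry'):
        print entry.get('id'), entry.text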
Q: Is a file on the same filesystem as another file in python? Is there a simple way of finding out if a file is on the same filesystem as another file? The following command: import shutil shutil.move('filepatha', 'filepathb') will try and rename the file (if it's on the same filesystem), otherwise it will copy it, then unlink. I want to find out before calling this command whether it will perform the quick or slow option, how do I do this? A: Use os.stat (on a filename) or os.fstat (on a file descriptor). The st_dev of the result will be the device number. If they are on the same file system, it will be the same in both. import os def same_fs(file1, file2): dev1 = os.stat(file1).st_dev dev2 = os.stat(file2).st_dev return dev1 == dev2
Is a file on the same filesystem as another file in python?
Is there a simple way of finding out if a file is on the same filesystem as another file? The following command: import shutil shutil.move('filepatha', 'filepathb') will try and rename the file (if it's on the same filesystem), otherwise it will copy it, then unlink. I want to find out before calling this command whether it will perform the quick or slow option, how do I do this?
[ "Use os.stat (on a filename) or os.fstat (on a file descriptor). The st_dev of the result will be the device number. If they are on the same file system, it will be the same in both.\nimport os\n\ndef same_fs(file1, file2):\n dev1 = os.stat(file1).st_dev\n dev2 = os.stat(file2).st_dev\n return dev1 == dev2\n\n" ]
[ 11 ]
[]
[]
[ "filesystems", "python" ]
stackoverflow_0000970742_filesystems_python.txt
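A usage sketch for the same_fs helper above: since the destination usually doesn't exist yet, stat its parent directory instead, then decide whether shutil.move will be a cheap rename or a full copy.

    import os, shutil

    def move_with_warning(src, dst):
        dst_dir = os.path.dirname(os.path.abspath(dst)) or '.'
        if not same_fs(src, dst_dir):
            print 'crossing filesystems: %s will be copied, then unlinked' % src
        shutil.move(src, dst)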
Q: Testing for mysterious load errors in python/django This is related to this Configure Apache to recover from mod_python errors, although I've since stopped assuming that this has anything to do with mod_python. Essentially, I have a problem that I wasn't able to reproduce consistently and I wanted some feedback on whether the proposed solution seems likely and some potential ways to try and reproduce this problem. The setup: a django-powered site would begin throwing errors after a few days of use. They were always ImportErrors or ImproperlyConfigured errors, which amount to the same thing, since the message always specified trouble loading some module referenced in the settings.py file. It was not generally the same class. I am using preforked apache with 8 forked children, and whenever this problem would come up, one process would be broken and seven would be fine. Once broken, every request (with Debug On in the apache conf) would display the same trace every time it served a request, even if the failed load is not relevant to the particular request. An httpd restart always made the problem go away in the short run. Noted problems: installation and updates are performed via svn with some post-update scripts. A few .pyc files accidentally were checked into the repository. Additionally, the project itself was owned by one user (not apache, although apache had permissions on the project) and there was a persistent plugin that ended up getting backgrounded as root. I call these noted problems because they would be wrong whether or not I noticed this error, and hence I have fixed them. The project is owned by apache and the plugin is backgrounded as apache. All .pyc files are out of the repository, and they are all force-recompiled after each checkout while the server and plugin have been stopped. What I want to know is Do these configuration disasters seem like a likely explanation for sporadic ImportErrors? If there is still a problem somewhere else in my code, how would I best reproduce it? As for 2, my approach thus far has been to write some stress tests that repeatedly request the same page so as to execute common code paths. Incidentally, this has been running without incident for about 2 days since the fix, but the problem was observed with 1 to 10 day intervals between. A: "Do these configuration disasters seem like a likely explanation for sporadic ImportErrors" Yes. An old .pyc file is a disaster of the first magnitude. We develop on Windows, but run production on Red Hat Linux. An accidentally moved .pyc file is an absolute mystery to debug because (1) it usually runs and (2) it has a Windows filename for the original source, making the traceback error absolutely senseless. I spent hours staring at logs -- on linux -- wondering why the file was "C:\This\N\That". "If there is still a problem somewhere else in my code, how would I best reproduce it?" Before reproducing errors, you should try to prevent them. First, create unit tests to exercise everything. Start with Django's tests.py testing. Then expand to unittest for all non-Django components. Then write yourself a "run_tests" script that runs every test you own. Run this periodically. Daily isn't often enough. Second, be sure you're using logging. Heavily. Third, wrap anything that uses external resources in generic exception-logging blocks like this. try: some_external_resource_processing() except Exception, e: logger.exception( e ) raise This will help you pinpoint problems with external resources. 
Files and databases are often the source of bad behavior due to permission or access problems. At this point, you have prevented a large number of errors. If you want to run cyclic load testing, that's not a bad idea either. Use unittest for this. class SomeLoadtest( unittest.TestCase ): def test_something( self ): self.connection = urllib2.urlopen( "localhost:8000/some/path" ) results = self.connection.read() This isn't the best way to do things, but it shows one approach. You might want to start using Selenium to test the web site "from the outside" as a complement to your unittests.
Testing for mysterious load errors in python/django
This is related to this Configure Apache to recover from mod_python errors, although I've since stopped assuming that this has anything to do with mod_python. Essentially, I have a problem that I wasn't able to reproduce consistently and I wanted some feedback on whether the proposed solution seems likely and some potential ways to try and reproduce this problem. The setup: a django-powered site would begin throwing errors after a few days of use. They were always ImportErrors or ImproperlyConfigured errors, which amount to the same thing, since the message always specified trouble loading some module referenced in the settings.py file. It was not generally the same class. I am using preforked apache with 8 forked children, and whenever this problem would come up, one process would be broken and seven would be fine. Once broken, every request (with Debug On in the apache conf) would display the same trace every time it served a request, even if the failed load is not relevant to the particular request. An httpd restart always made the problem go away in the short run. Noted problems: installation and updates are performed via svn with some post-update scripts. A few .pyc files accidentally were checked into the repository. Additionally, the project itself was owned by one user (not apache, although apache had permissions on the project) and there was a persistent plugin that ended up getting backgrounded as root. I call these noted problems because they would be wrong whether or not I noticed this error, and hence I have fixed them. The project is owned by apache and the plugin is backgrounded as apache. All .pyc files are out of the repository, and they are all force-recompiled after each checkout while the server and plugin have been stopped. What I want to know is Do these configuration disasters seem like a likely explanation for sporadic ImportErrors? If there is still a problem somewhere else in my code, how would I best reproduce it? As for 2, my approach thus far has been to write some stress tests that repeatedly request the same page so as to execute common code paths. Incidentally, this has been running without incident for about 2 days since the fix, but the problem was observed with 1 to 10 day intervals between.
[ "\"Do these configuration disasters seem like a likely explanation for sporadic ImportErrors\"\nYes. An old .pyc file is a disaster of the first magnitude.\nWe develop on Windows, but run production on Red Hat Linux. An accidentally moved .pyc file is an absolute mystery to debug because (1) it usually runs and (2) it has a Windows filename for the original source, making the traceback error absolutely senseless. I spent hours staring at logs -- on linux -- wondering why the file was \"C:\\This\\N\\That\".\n\"If there is still a problem somewhere else in my code, how would I best reproduce it?\"\nBefore reproducing errors, you should try to prevent them.\nFirst, create unit tests to exercise everything. \nStart with Django's tests.py testing. Then expand to unittest for all non-Django components. Then write yourself a \"run_tests\" script that runs every test you own. Run this periodically. Daily isn't often enough.\nSecond, be sure you're using logging. Heavily. \nThird, wrap anything that uses external resources in generic exception-logging blocks like this.\ntry:\n some_external_resource_processing()\nexcept Exception, e:\n logger.exception( e )\n raise\n\nThis will help you pinpoint problems with external resources. Files and databases are often the source of bad behavior due to permission or access problems.\nAt this point, you have prevented a large number of errors. If you want to run cyclic load testing, that's not a bad idea either. Use unittest for this. \nclass SomeLoadtest( unittest.TestCase ):\n def test_something( self ):\n self.connection = urllib2.urlopen( \"localhost:8000/some/path\" )\n results = self.connection.read()\n\nThis isn't the best way to do things, but it shows one approach. You might want to start using Selenium to test the web site \"from the outside\" as a complement to your unittests.\n" ]
[ 2 ]
[]
[]
[ "apache", "configuration", "django", "python" ]
stackoverflow_0000970953_apache_configuration_django_python.txt
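A sketch of the post-checkout cleanup step the question describes -- delete every stale .pyc under the project tree and force a recompile with the stdlib compileall module -- suitable for dropping into the svn post-update script:

    import os, compileall

    def rebuild_bytecode(root):
        for dirpath, dirnames, filenames in os.walk(root):
            for name in filenames:
                if name.endswith('.pyc'):
                    os.remove(os.path.join(dirpath, name))
        # force=True recompiles even when timestamps look current
        compileall.compile_dir(root, force=True, quiet=True)

Run it while Apache and any background plugins are stopped, as noted above.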
Q: Opencv sort sequences in python I'm using the Python OpenCV bindings to find the contours in an image. I'm now looking for the possibility to sort this sequence. It seems the usual python ways for list sorting don't apply here because of the linked list structure generated from OpenCV. Do you know a good way to sort the Contours by Size (Area/BoundingRectangle) in python? Is it possible to give some example code? A: You have to be able to look at an entire sequence in order to sort it (easily). Thus you should copy it to sort it. I would do something like contourList = list(<your linked list>) def sizeKey(contour): <get size from contour> contourList.sort(key = sizeKey) If everything is not being stored in memory already you can also look at external sorting algorithms.
Opencv sort sequences in python
I'm using the Python OpenCV bindings to find the contours in an image. I'm now looking for the possibility to sort this sequence. It seems the usual python ways for list sorting don't apply here because of the linked list structure generated from OpenCV. Do you know a good way to sort the Contours by Size (Area/BoundingRectangle) in python? Is it possible to give some example code?
[ "You have to be able to look at an entire sequence in order to sort it (easily). Thus you should copy it to sort it.\nI would do something like\n contourList = list(<your linked list>)\n def sizeKey(contour):\n <get size from contour>\n contourList.sort(key = sizeKey)\n\nIf everything is not being stored in memory already you can also look at external sorting algorithms.\n" ]
[ 2 ]
[]
[]
[ "contour", "image_processing", "opencv", "python", "sorting" ]
stackoverflow_0000971629_contour_image_processing_opencv_python_sorting.txt
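A sketch of the copy-then-sort approach, assuming the pre-cv2 cv bindings the question refers to (where cv.FindContours returns a linked sequence walked via h_next(), and cv.ContourArea gives the size); adjust the names if your binding version differs:

    import cv

    def contours_to_list(first_contour):
        result, contour = [], first_contour
        while contour:
            result.append(contour)
            contour = contour.h_next()
        return result

    contour_list = contours_to_list(contours)    # contours from cv.FindContours
    contour_list.sort(key=cv.ContourArea, reverse=True)   # largest first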
Q: Interfacing web crawler with Django front end I'm trying to do three things. One: crawl and archive, at least daily, a predefined set of sites. Two: run overnight batch python scripts on this data (text classification). Three: expose a Django based front end to users to let them search the crawled data. I've been playing with Apache Nutch/Lucene but getting it to play nice with Django just seems too difficult when I could just use another crawler engine. Question 950790 suggests I could just write the crawler in Django itself, but I'm not sure how to go about this. Basically - any pointers to writing a crawler in Django or an existing python crawler that I could adapt? Or should I incorporate 'turning into Django-friendly stuff' in step two and write some glue code? Or, finally, should I abandon Django altogether? I really need something that can search quickly from the front end, though. A: If you insert your django project's app directories into sys.path, you can write standard Python scripts that utilize the Django ORM functionality. We have an /admin/ directory that contains scripts to perform various tasks-- at the top of each script is a block that looks like: sys.path.insert(0,os.path.abspath('../my_django_project')) sys.path.insert(0,os.path.abspath('../')) sys.path.insert(0,os.path.abspath('../../')) os.environ['DJANGO_SETTINGS_MODULE'] = 'settings' Then it's just a matter of using your tool of choice to crawl the web and using the Django database API to store the data. A: You write your own crawler using urllib2 to get the pages and Beautiful Soup to parse the HTML looking for the content. Here's an example of reading a page: http://docs.python.org/library/urllib2.html#examples Here's an example of parsing the page: http://www.crummy.com/software/BeautifulSoup/documentation.html#Parsing HTML A: If you don't want to write the crawler using the Django ORM (or already have a working crawler) you could share the database between the crawler and the Django-powered front-end. To be able to search (and edit) the existing database using Django admin you should create Django models. The easy way for that is described here: http://docs.djangoproject.com/en/dev/howto/legacy-databases/
Interfacing web crawler with Django front end
I'm trying to do three things. One: crawl and archive, at least daily, a predefined set of sites. Two: run overnight batch python scripts on this data (text classification). Three: expose a Django based front end to users to let them search the crawled data. I've been playing with Apache Nutch/Lucene but getting it to play nice with Django just seems too difficult when I could just use another crawler engine. Question 950790 suggests I could just write the crawler in Django itself, but I'm not sure how to go about this. Basically - any pointers to writing a crawler in Django or an existing python crawler that I could adapt? Or should I incorporate 'turning into Django-friendly stuff' in step two and write some glue code? Or, finally, should I abandon Django altogether? I really need something that can search quickly from the front end, though.
[ "If you insert your django project's app directories into sys.path, you can write standard Python scripts that utilize the Django ORM functionality. We have an /admin/ directory that contains scripts to perform various tasks-- at the top of each script is a block that looks like:\nsys.path.insert(0,os.path.abspath('../my_django_project'))\nsys.path.insert(0,os.path.abspath('../'))\nsys.path.insert(0,os.path.abspath('../../'))\nos.environ['DJANGO_SETTINGS_MODULE'] = 'settings'\n\nThen it's just a matter of using your tool of choice to crawl the web and using the Django database API to store the data.\n", "You write your own crawler using urllib2 to get the pages and Beautiful Soup to parse the HTML looking for the content.\nHere's an example of reading a page:\nhttp://docs.python.org/library/urllib2.html#examples\nHere's an example of parsing the page:\nhttp://www.crummy.com/software/BeautifulSoup/documentation.html#Parsing HTML\n", "If you don't want to write the crawler using the Django ORM (or already have a working crawler) you could share the database between the crawler and the Django-powered front-end.\nTo be able to search (and edit) the existing database using Django admin you should create Django models.\nThe easy way for that is described here:\nhttp://docs.djangoproject.com/en/dev/howto/legacy-databases/ \n" ]
[ 3, 2, 1 ]
[]
[]
[ "django", "python", "web_crawler" ]
stackoverflow_0000971660_django_python_web_crawler.txt
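Putting the first answer's sys.path trick together with the urllib2/BeautifulSoup suggestion, a standalone crawl script might look like this sketch; the Page model and its fields are hypothetical stand-ins for whatever your Django app defines.

    import os, sys
    sys.path.insert(0, os.path.abspath('../my_django_project'))
    os.environ['DJANGO_SETTINGS_MODULE'] = 'settings'

    import urllib2
    from BeautifulSoup import BeautifulSoup
    from crawler.models import Page      # hypothetical app and model

    def crawl(url):
        html = urllib2.urlopen(url).read()
        soup = BeautifulSoup(html)
        title = soup.title.string if soup.title else url
        Page.objects.create(url=url, title=title, body=html)

Schedule it nightly with cron, and the Django front end can query Page directly.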
Q: automatic keystroke to stay logged in I have a web based email application that logs me out after 10 minutes of inactivity ("For security reasons"). I would like to write something that either a) imitates a keystroke b) pings an ip or c) some other option every 9 minutes so that I stay logged in. I am on my personal laptop in an office with a door, so I'm not too worried about needing to be logged out. I have written a small python script to ping an ip and then sleep for 9 minutes, and this works dandy, but I would like something that I could include in my startup applications. I don't know if this means I need to compile something into an exe, or can I add this python script to startup apps? A: Assuming you use Windows, you can add a bat file containing the python run command in the Startup folder. Example keeploggedin.bat C:\Steve\Projects\Python> python pytest.py A: You can also use the Scheduled Tasks feature (on the Control Panel) to run it at startup, or you can change your script to ping the IP and exit, and schedule it to run every 9 minutes. You have nice settings there, for example, you can stop running it at night, so you'll still log out. You might still need the bat file though, I don't know about Python. In fact, if you need just a simple ping you can schedule ping.exe. A: Pinging an IP will not likely keep your session from timing out. You will likely need to do an HTTP GET and include the session cookie supplied by the server to your browser when you login. Your script may be able to read the cookie from your browser's cookies folder after you have logged in via the browser. Also, the web page may have javascript that calls the logout page when it times out. You may be able to use codemonkey to disable this behavior. A: wrap the calling of your python app in a .bat file and put a shortcut to that .bat file in startup.
automatic keystroke to stay logged in
I have a web based email application that logs me out after 10 minutes of inactivity ("For security reasons"). I would like to write something that either a) imitates a keystroke b) pings an ip or c) some other option every 9 minutes so that I stay logged in. I am on my personal laptop in an office with a door, so I'm not too worried about needing to be logged out. I have written a small python script to ping an ip and then sleep for 9 minutes, and this works dandy, but I would like something that I could include in my startup applications. I don't know if this means I need to compile something into an exe, or can I add this python script to startup apps?
[ "Assuming you use Windows, you can add a bat file containing the python run command in the Startup folder.\nExample keeploggedin.bat\nC:\\Steve\\Projects\\Python> python pytest.py\n\n", "You can also use the Scheduled Tasks feature (on the Control Panel) to run it at startup, or you can change your script to ping the IP and exit, and schedule it to run every 9 minutes. You have nice settings there, for example, you can stop running it at night, so you'll still log out.\nYou might still need the bat file though, I don't know about Python. \nIn fact, if you need just a simple ping you can schedule ping.exe.\n", "Pinging an IP will not likely keep your session from timing out.\nYou will likely need to do an HTTP GET and include the session cookie supplied by the server to your browser when you login. Your script may be able to read the cookie from your browser's cookies folder after you have logged in via the browser.\nAlso, the web page may have javascript that calls the logout page when it times out. You may be able to use codemonkey to disable this behavior.\n", "wrap the calling of your python app in a .bat file and put a shortcut to that .bat file in startup. \n" ]
[ 3, 3, 2, 1 ]
[]
[]
[ "authentication", "python" ]
stackoverflow_0000969849_authentication_python.txt
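Expanding on the third answer: what actually resets the server-side timeout is an HTTP GET that carries your session cookie, so a sketch with urllib2 plus cookielib -- log in once, then re-fetch a lightweight page every nine minutes. The URLs and form field names are placeholders for your webmail's real ones.

    import time, urllib, urllib2, cookielib

    jar = cookielib.CookieJar()
    opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))

    # hypothetical login form -- adjust for your application
    opener.open('https://mail.example.com/login',
                urllib.urlencode({'user': 'me', 'pass': 'secret'}))

    while True:
        opener.open('https://mail.example.com/inbox')   # keeps the session warm
        time.sleep(9 * 60)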
Q: Django conditional aggregation Does anyone know how I would, through the django ORM, produce a query that conditionally aggregates related models? Let's say, for example, that you run a site that sells stuff, and you want to know how much each employee has sold in the last seven days. It's simple enough to do this over all sales: q = Employee.objects.filter(type='salesman').annotate(total_sales = models.Sum('sale__total')) assuming Employee and Sale models with a many-to-many relationship between them. OK, but now how would I go about constraining this to all sales for the last seven days (or any arbitrary time frame)? Does anyone know? A: Alright, I guess I didn't think this through very far. I didn't realize that filter handled things with a left join (though thinking on it, how else would it map to the db?), so the obvious answer is: Employee.objects.filter(type='salesman').filter(sale__timestamp__gte = start_date)\ .exclude(sale__timestamp__gte = end_date).annotate(...
Django conditional aggregation
Does anyone know how I would, through the django ORM, produce a query that conditionally aggregates related models? Let's say, for example, that you run a site that sells stuff, and you want to know how much each employee has sold in the last seven days. It's simple enough to do this over all sales: q = Employee.objects.filter(type='salesman').annotate(total_sales = models.Sum('sale__total')) assuming Employee and Sale models with a many-to-many relationship between them. OK, but now how would I go about constraining this to all sales for the last seven days (or any arbitrary time frame)? Does anyone know?
[ "Alright, I guess I didn't think this through very far. I didn't realize that filter handled things with a left join (though thinking on it, how else would it map to the db?), so the obvious answer is:\nEmployee.objects.filter(type='salesman').filter(sale__timestamp__gte = start_date)\\\n .exclude(sale__timestamp__gte = end_date).annotate(...\n\n" ]
[ 2 ]
[]
[]
[ "aggregation", "conditional", "database", "django", "python" ]
stackoverflow_0000971695_aggregation_conditional_database_django_python.txt
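For completeness, a sketch of the finished query with the seven-day window computed via datetime; the filter constrains which Sale rows feed the Sum, which is exactly the conditional aggregation asked about.

    from datetime import datetime, timedelta
    from django.db import models

    start_date = datetime.now() - timedelta(days=7)
    q = (Employee.objects.filter(type='salesman')
         .filter(sale__timestamp__gte=start_date)
         .annotate(total_sales=models.Sum('sale__total')))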
Q: How can I create a variable that is scoped to a single request in app engine? I'm creating a python app for google app engine and I've got a performance problem with some expensive operations that are repetitive within a single request. To help deal with this I'd like to create a sort of mini-cache that's scoped to a single request. This is as opposed to a session-wide or application-wide cache, neither of which would make sense for my particular problem. I thought I could just use a python global or module-level variable for this, but it turns out that those maintain their state between requests in non-obvious ways. I also don't think memcache makes sense because it's application wide. I haven't been able to find a good answer for this in google's docs. Maybe that's because it's either a dumb idea or totally obvious, but it seems like it'd be useful and I'm stumped. Anybody have any ideas? A: What I usually do is just create a new attribute on the request object. However, I use django with AppEngine, so I'm not sure if there is anything different about the appengine webapp framework. def view_handler(request): if hasattr(request, 'mycache'): request.mycache['counter'] += 1 else: request.mycache = {'counter':1,} return HttpResponse("counter="+str(request.mycache["counter"])) A: If you're using the 'webapp' framework included with App Engine (or, actually, most other WSGI-baesd frameworks), a new RequestHandler is instantiated for each request. Thus, you can use class variables on your handler class to store per-request data. A: Module variables may (or may not) persist between requests (the same app instance may or may not stay alive between requests), but you can explicitly clear them (del, or set to None, say) at the start of your handling a request, or when you know you're done with one. At worst (if your code is peculiarly organized) you need to set some function to always execute at every request start, or at every request end. A: use local list to store data and do a model.put at end of your request processing. save multiple db trips
How can I create a variable that is scoped to a single request in app engine?
I'm creating a python app for google app engine and I've got a performance problem with some expensive operations that are repetitive within a single request. To help deal with this I'd like to create a sort of mini-cache that's scoped to a single request. This is as opposed to a session-wide or application-wide cache, neither of which would make sense for my particular problem. I thought I could just use a python global or module-level variable for this, but it turns out that those maintain their state between requests in non-obvious ways. I also don't think memcache makes sense because it's application wide. I haven't been able to find a good answer for this in google's docs. Maybe that's because it's either a dumb idea or totally obvious, but it seems like it'd be useful and I'm stumped. Anybody have any ideas?
[ "What I usually do is just create a new attribute on the request object. However, I use django with AppEngine, so I'm not sure if there is anything different about the appengine webapp framework.\ndef view_handler(request):\n if hasattr(request, 'mycache'):\n request.mycache['counter'] += 1\n else:\n request.mycache = {'counter':1,}\n\n return HttpResponse(\"counter=\"+str(request.mycache[\"counter\"]))\n\n", "If you're using the 'webapp' framework included with App Engine (or, actually, most other WSGI-baesd frameworks), a new RequestHandler is instantiated for each request. Thus, you can use class variables on your handler class to store per-request data.\n", "Module variables may (or may not) persist between requests (the same app instance may or may not stay alive between requests), but you can explicitly clear them (del, or set to None, say) at the start of your handling a request, or when you know you're done with one. At worst (if your code is peculiarly organized) you need to set some function to always execute at every request start, or at every request end.\n", "use local list to store data and do a model.put at end of your request processing. save multiple db trips\n" ]
[ 2, 2, 1, 0 ]
[]
[]
[ "google_app_engine", "python" ]
stackoverflow_0000963080_google_app_engine_python.txt
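Since webapp builds a fresh RequestHandler instance per request (per the second answer), a request-scoped cache can be a lazily created instance attribute; expensive_lookup here is a placeholder for whatever repeated work you're avoiding.

    from google.appengine.ext import webapp

    class MyHandler(webapp.RequestHandler):
        def request_cache(self):
            if not hasattr(self, '_cache'):
                self._cache = {}          # dies with this request
            return self._cache

        def get(self):
            cache = self.request_cache()
            if 'prefs' not in cache:
                cache['prefs'] = expensive_lookup()   # placeholder
            self.response.out.write(cache['prefs'])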
Q: GQL does not work for GET parameters for keys I am trying to compare the key to filter results in GQL in Python but neither direct comparison nor typecasting to int works. Therefore, I am forced to make a workaround as mentioned in the uncommented lines below. Any clues? row = self.request.get("selectedrow") #mydbobject = DbModel.gql("WHERE key=:1", row).fetch(1) #mydbobject = DbModel.gql("WHERE key=:1", int(row)).fetch(1)#invalid literal for int() with base 10 #print mydbobject,row que = db.Query(DbModel) results = que.fetch(100) mydbobject = None for item in results: if item.key().__str__() in row: mydbobject = item EDIT1- one more attempt that does not retrieve the record, the key exists in the Datastore along with the record mydbobject = DbModel.gql("WHERE key = KEY('%s')"%row).fetch(1) A: Am I correct in my assumption that you basically just want to retrieve an object with a particular key? If so, the get and get_by_id methods may be of help: mydbobject = DbModel.get_by_id(int(self.request.get("selectedrow"))) A: The error "invalid literal for int()" indicates that the parameter passed to int was not a string representing an integer. Try to print the value of "row" for debugging, I bet it is an empty string. The correct way to retrieve an element from the key is simply by using the method "get" or "get_by_id". In your case: row = self.request.get("selectedrow") mydbobject = DbModel.get(row)
GQL does not work for GET parameters for keys
I am trying to compare the key to filter results in GQL in Python but neither direct comparison nor typecasting to int works. Therefore, I am forced to make a workaround as mentioned in the uncommented lines below. Any clues? row = self.request.get("selectedrow") #mydbobject = DbModel.gql("WHERE key=:1", row).fetch(1) #mydbobject = DbModel.gql("WHERE key=:1", int(row)).fetch(1)#invalid literal for int() with base 10 #print mydbobject,row que = db.Query(DbModel) results = que.fetch(100) mydbobject = None for item in results: if item.key().__str__() in row: mydbobject = item EDIT1- one more attempt that does not retrieve the record, the key exists in the Datastore along with the record mydbobject = DbModel.gql("WHERE key = KEY('%s')"%row).fetch(1)
[ "Am I correct in my assumption that you basically just want to retrieve an object with a particular key? If so, the get and get_by_id methods may be of help:\nmydbobject = DbModel.get_by_id(int(self.request.get(\"selectedrow\")))\n\n", "The error \"invalid literal for int()\" indicates that the parameter passed to int was not a string representing an integer. Try to print the value of \"row\" for debugging, I bet it is an empty string.\nThe correct way to retrieve an element from the key is simply by using the method \"get\" or \"get_by_id\".\nIn your case:\nrow = self.request.get(\"selectedrow\")\nmydbobject = DbModel.get(row)\n\n" ]
[ 1, 0 ]
[]
[]
[ "google_app_engine", "gqlquery", "python" ]
stackoverflow_0000971153_google_app_engine_gqlquery_python.txt
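Spelling out the second answer: the encoded key string from the request can be handed straight to db.get (or to the model's get classmethod), no GQL required.

    from google.appengine.ext import db

    def get(self):   # inside your webapp.RequestHandler
        row = self.request.get('selectedrow')    # encoded key string
        mydbobject = db.get(db.Key(row))         # or DbModel.get(row)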
Q: About GUI editor that would be compatible with Python 3.0 I would like to start learning Python (zero past experience). I am a bit inclined to start with Python 3.0. However, I am not sure if at this time there exists a GUI editor that would be compatible with Python 3.0. I've tried installing Glade, but the one I've got works only with Python 2.5. What could I possibly use with Python 3.0? Any suggestions are welcome. Thanks! A: There are many useful libraries (not to mention educational material, cookbook snippets, etc.) that have yet to be ported to Python 3.0, so I recommend using Python 2.x for now (where, currently, 5 <= x <= 6). Doubly so if you're a beginner to Python. Triply so if you're actually planning on releasing some software--many systems do not ship with Python 3.0. Python 3.0 is not radically different from the Python 2.x series; what you learn in Python 2 will very much still apply to Python 3. Searching Python 3.0 here on SO reveals many threads in which the majority declare that they're not moving to Python 3.0 anytime soon. A: If you're looking for a GUI editor, have a look at these: wxFormBuilder can generate .XRC files for wxpython. XRCed ships with wxpython and could do this as well. .XRC files are xml files which describe your GUI and are not language specific. You could load these files from a Python 2.6 and a Python 3.0 without any change. WxPython is currently available only for Python 2.6 though. I would not worry too much about converting a python 2.6 to 3.0. This is a matter of a few lines to change. The problem with the .XRC approach is that you still need to glue these xml files with Python code. If you're a beginner this might be as tricky as writing the GUI by hand. That's the problem with WxPython anyway: I don't know any real good editor for it, have a look at this discussion A: Just use whichever editor you are most comfortable with. The "leading space as logic" and duck typing mean that there is a limited amount of syntax checking and re-factoring an editor can reasonably do (or is required!) with python source. If you don't have a favorite editor then just use the "idle" editor which comes with most python distributions. If you were talking about a GUI design tool then you need to choose among the several supported GUIs (Tk, WxWindows etc. etc.) before you choose your design tool. A: The only GUI toolkit currently available in Python3.0 is Tkinter, and I don't think there are any Python3.0 GUI-builders available yet. A: WING 3.2 beta works in Python 3. ricnar A: PyQt 4.5 (released a couple of days ago) added support for Python 3 That doesn't of course devalue gotgenes' answer: most of the 3rd party libraries are just not ready yet for Python 3.
About GUI editor that would be compatible with Python 3.0
I would like to start learning Python (zero past experience). I am a bit inclined to start with Python 3.0. However, I am not sure if at this time there exists a GUI editor that would be compatible with Python 3.0. I've tried installing Glade, but the one I've got works only with Python 2.5. What could I possibly use with Python 3.0? Any suggestions are welcome. Thanks!
[ "There are many useful libraries (not to mention educational material, cookbook snippets, etc.) that have yet to be ported to Python 3.0, so I recommend using Python 2.x for now (where, currently, 5 <= x <= 6). Doubly so if you're a beginner to Python. Triply so if you're actually planning on releasing some software--many systems do not ship with Python 3.0.\nPython 3.0 is not radically different from the Python 2.x series; what you learn in Python 2 will very much still apply to Python 3. Searching Python 3.0 here on SO reveals many threads in which the majority declare that they're not moving to Python 3.0 anytime soon.\n", "If you're looking for a GUI editor, have a look at these:\n\nwxFormBuilder can generate .XRC files for wxpython.\nXRCed ships with wxpython and could do this as well.\n\n.XRC files are xml files which describe your GUI and are not language specific. You could load these files from a Python 2.6 and a Python 3.0 without any change.\nWxPython is currently available only for Python 2.6 though. I would not worry too much about converting a python 2.6 to 3.0. This is a matter of a few lines to change.\nThe problem with the .XRC approach is that you still need to glue these xml files with Python code. If you're a beginner this might be as tricky as writing the GUI by hand. That's the problem with WxPython anyway: I don't know any real good editor for it, have a look at this discussion\n", "Just use whichever editor you are most comfortable with.\nThe \"leading space as logic\" and duck typing mean that there is a limited amount of syntax checking and re-factoring an editor can reasonably do (or is required!) with python source.\nIf you don't have a favorite editor then just use the \"idle\" editor which comes with most python distributions.\nIf you were talking about a GUI design tool then you need to choose among the several supported GUIs (Tk, WxWindows etc. etc.) before you choose your design tool.\n", "The only GUI toolkit currently available in Python3.0 is Tkinter, and I don't think there are any Python3.0 GUI-builders available yet.\n", "WING 3.2 beta works in Python 3.\nricnar\n", "PyQt 4.5 (released a couple of days ago) added support for Python 3\nThat doesn't of course devalue gotgenes' answer: most of the 3rd party libraries are just not ready yet for Python 3.\n" ]
[ 1, 1, 0, 0, 0, 0 ]
[]
[]
[ "python", "python_3.x", "user_interface" ]
stackoverflow_0000800769_python_python_3.x_user_interface.txt
Q: spawning process from python I'm spawning a script that runs for a long time from a web app like this: os.spawnle(os.P_NOWAIT, "../bin/producenotify.py", "producenotify.py", "xx",os.environ) The script is spawned successfully and it runs, but till it gets over I am not able to free the port that is used by the web app, or in other words I am not able to restart the web app. How do I spawn off a process and make it completely independent of the web app? This is on linux os. A: As @mark clarified it's a Linux system, the script could easily make itself fully independent, i.e., a daemon, by following this recipe. (You could also do it in the parent after an os.fork and only then os.exec... the child process). Edit: to clarify some details wrt @mark's comment on my answer: super-user privileges are not needed to "daemonize" a process as per the cookbook recipes, nor is there any need to change the current working directory (though the code in the recipe does do that and more, that's not the crucial part -- rather it's the proper logic sequence of fork, _exit and setsid calls). The various os.exec... variants that do not end in e use the parent process's environment, so that part is easy too -- see Python online docs. To address suggestions made in others' comments and answers: I believe subprocess and multiprocessing per se don't daemonize the child process, which seems to be what @mark needs; the script could do it for itself, but since some code has to be doing forks and setsid, it seems neater to me to keep all of the spawning on that low-level plane rather than mix some high-level and some low-level code in the course of the operation. Here's a vastly reduced and simplified version of the recipe at the above URL, tailored to be called in the parent to spawn a daemon child -- this way, the code can be used to execute non-Python executables just as well. As given, the code should meet the needs @mark explained, of course it can be tailored in many ways -- I strongly recommend reading the original recipe and its comments and discussions, as well as the books it recommends, for more information. import os import sys def spawnDaemon(path_to_executable, *args): """Spawn a completely detached subprocess (i.e., a daemon). E.g. for mark: spawnDaemon("../bin/producenotify.py", "producenotify.py", "xx") """ # fork the first time (to make a non-session-leader child process) try: pid = os.fork() except OSError, e: raise RuntimeError("1st fork failed: %s [%d]" % (e.strerror, e.errno)) if pid != 0: # parent (calling) process is all done return # detach from controlling terminal (to make child a session-leader) os.setsid() try: pid = os.fork() except OSError, e: raise RuntimeError("2nd fork failed: %s [%d]" % (e.strerror, e.errno)) if pid != 0: # child process is all done os._exit(0) # grandchild process now non-session-leader, detached from parent # grandchild process must now close all open files try: maxfd = os.sysconf("SC_OPEN_MAX") except (AttributeError, ValueError): maxfd = 1024 for fd in range(maxfd): try: os.close(fd) except OSError: # ERROR, fd wasn't open to begin with (ignored) pass # redirect stdin, stdout and stderr to /dev/null os.open(os.devnull, os.O_RDWR) # standard input (0) os.dup2(0, 1) os.dup2(0, 2) # and finally let's execute the executable for the daemon! 
try: os.execv(path_to_executable, args) except Exception, e: # oops, we're cut off from the world, let's just give up os._exit(255) A: You can use the multiprocessing library to spawn processes. A basic example is shown here: from multiprocessing import Process def f(name): print 'hello', name if __name__ == '__main__': p = Process(target=f, args=('bob',)) p.start() p.join()
spawning process from python
I'm spawning a script that runs for a long time from a web app like this: os.spawnle(os.P_NOWAIT, "../bin/producenotify.py", "producenotify.py", "xx",os.environ) The script is spawned successfully and it runs, but till it gets over I am not able to free the port that is used by the web app, or in other words I am not able to restart the web app. How do I spawn off a process and make it completely independent of the web app? This is on linux os.
[ "As @mark clarified it's a Linux system, the script could easily make itself fully independent, i.e., a daemon, by following this recipe. (You could also do it in the parent after an os.fork and only then os.exec... the child process).\nEdit: to clarify some details wrt @mark's comment on my answer: super-user privileges are not needed to \"daemonize\" a process as per the cookbook recipes, nor is there any need to change the current working directory (though the code in the recipe does do that and more, that's not the crucial part -- rather it's the proper logic sequence of fork, _exit and setsid calls). The various os.exec... variants that do not end in e use the parent process's environment, so that part is easy too -- see Python online docs.\nTo address suggestions made in others' comments and answers: I believe subprocess and multiprocessing per se don't daemonize the child process, which seems to be what @mark needs; the script could do it for itself, but since some code has to be doing forks and setsid, it seems neater to me to keep all of the spawning on that low-level plane rather than mix some high-level and some low-level code in the course of the operation.\nHere's a vastly reduced and simplified version of the recipe at the above URL, tailored to be called in the parent to spawn a daemon child -- this way, the code can be used to execute non-Python executables just as well. As given, the code should meet the needs @mark explained, of course it can be tailored in many ways -- I strongly recommend reading the original recipe and its comments and discussions, as well as the books it recommends, for more information.\nimport os\nimport sys\n\ndef spawnDaemon(path_to_executable, *args):\n \"\"\"Spawn a completely detached subprocess (i.e., a daemon).\n\n E.g. for mark:\n spawnDaemon(\"../bin/producenotify.py\", \"producenotify.py\", \"xx\")\n \"\"\"\n # fork the first time (to make a non-session-leader child process)\n try:\n pid = os.fork()\n except OSError, e:\n raise RuntimeError(\"1st fork failed: %s [%d]\" % (e.strerror, e.errno))\n if pid != 0:\n # parent (calling) process is all done\n return\n\n # detach from controlling terminal (to make child a session-leader)\n os.setsid()\n try:\n pid = os.fork()\n except OSError, e:\n raise RuntimeError(\"2nd fork failed: %s [%d]\" % (e.strerror, e.errno))\n if pid != 0:\n # child process is all done\n os._exit(0)\n\n # grandchild process now non-session-leader, detached from parent\n # grandchild process must now close all open files\n try:\n maxfd = os.sysconf(\"SC_OPEN_MAX\")\n except (AttributeError, ValueError):\n maxfd = 1024\n\n for fd in range(maxfd):\n try:\n os.close(fd)\n except OSError: # ERROR, fd wasn't open to begin with (ignored)\n pass\n\n # redirect stdin, stdout and stderr to /dev/null\n os.open(os.devnull, os.O_RDWR) # standard input (0)\n os.dup2(0, 1)\n os.dup2(0, 2)\n\n # and finally let's execute the executable for the daemon!\n try:\n os.execv(path_to_executable, args)\n except Exception, e:\n # oops, we're cut off from the world, let's just give up\n os._exit(255)\n\n", "You can use the multiprocessing library to spawn processes. A basic example is shown here:\nfrom multiprocessing import Process\n\ndef f(name):\n print 'hello', name\n\nif __name__ == '__main__':\n p = Process(target=f, args=('bob',))\n p.start()\n p.join()\n\n" ]
[ 26, 12 ]
[]
[]
[ "process", "python", "spawn" ]
stackoverflow_0000972362_process_python_spawn.txt
Q: Running a set of Python scripts in a list I am working on a Python project that includes a lot of simple example scripts to help new users get used to the system. As well as the source code for each example, I include the output I get on my test machine so users know what to expect when all goes well. It occurred to me that I could use this as a crude form of unit testing. Automatically run all the example scripts and do a load of diffs against the expected output. All of my example scripts end with extension .py so I can get their filenames easily enough with something like pythonfiles=[filename for filename in os.listdir(source_directory) if filename[-3:]=='.py'] So, pythonfiles contains something like ['example1.py', 'cool_example.py'] and so on. What syntax can I use to actually run the scripts referenced in this list? A: You could leverage doctest to help you get this done. Write a method that executes each script, and in the docstring for each method you paste the expected output: def run_example1(): """ This is example number 1. Running it should give you the following output: >>> run_example1() "This is the output from example1.py" """ os.system('python example1.py') # or you could use subprocess here if __name__ == "__main__": import doctest doctest.testmod() Note I haven't tested this. Alternatively, as Shane mentioned, you could use subprocess. Something like this will work: import subprocess cmd = ('example1.py', 'any', 'more', 'arguments') expected_out = """Your expected output of the script""" exampleP = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE) out, err = exampleP.communicate() # out and err are stdout and stderr, respectively if out != expected_out: print "Output does not match" A: You want to use the subprocess module. A: If they are similarly structured (All are executed with a run function for example), you can import them as python scripts, and call their run function. import sys import os import imp pythonfiles = [filename for filename in os.listdir(source_directory) if filename[-3:]=='.py'] for py_file in pythonfiles: mod_name = os.path.splitext(py_file)[0] py_filepath = os.path.join(source_directory, py_file) py_mod = imp.load_source(mod_name, py_filepath) if hasattr(py_mod, "run"): py_mod.run() else: print '%s has no "run"' % (py_filepath)
Running a set of Python scripts in a list
I am working on a Python project that includes a lot of simple example scripts to help new users get used to the system. As well as the source code for each example, I include the output I get on my test machine so users know what to expect when all goes well. It occurred to me that I could use this as a crude form of unit testing. Automatically run all the example scripts and do a load of diffs against the expected output. All of my example scripts end with extension .py so I can get their filenames easily enough with something like pythonfiles=[filename for filename in os.listdir(source_directory) if filename[-3:]=='.py'] So, pythonfiles contains something like ['example1.py', 'cool_example.py'] and so on. What syntax can I use to actually run the scripts referenced in this list?
[ "You could leverage doctest to help you get this done. Write a method that executes each script, and in the docstring for each method you paste the expected output:\ndef run_example1():\n \"\"\"\n This is example number 1. Running it should give you the following output:\n\n >>> run_example1()\n \"This is the output from example1.py\"\n \"\"\"\n\n os.system('python example1.py') # or you could use subprocess here\n\nif __name__ == \"__main__\":\n import doctest\n doctest.testmod()\n\nNote I haven't tested this.\nAlternatively, as Shane mentioned, you could use subprocess. Something like this will work:\nimport subprocess\n\ncmd = ('example1.py', 'any', 'more', 'arguments')\n\nexpected_out = \"\"\"Your expected output of the script\"\"\"\n\nexampleP = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)\nout, err = exampleP.communicate() # out and err are stdout and stderr, respectively\n\nif out != expected_out:\n print \"Output does not match\"\n\n", "You want to use the subprocess module.\n", "If they are similarly structured (All are executed with a run function for example), you can import the them as python scripts, and call thier run function.\nimport sys\nimport os\nimport imp\n\npythonfiles = [filename for filename in os.listdir(source_directory) if filename[-3:]=='.py']\nfor py_file in pythonfiles:\n mod_name = os.path.splitext(py_file)[0]\n py_filepath = os.path.join(source_directory, py_file)\n py_mod = imp.load_source(mod_name, py_filepath)\n if hasattr(py_mod, \"run\"):\n py_mod.run()\n else:\n print '%s has no \"run\"' % (py_filepath)\n\n" ]
[ 8, 4, 3 ]
[]
[]
[ "python" ]
stackoverflow_0000973231_python.txt
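A rough sketch of the diff-style harness the question describes, built on the subprocess answer; it assumes, hypothetically, that each example1.py has its expected output saved beside it as example1.out, and the function name is invented:

import os
import subprocess

def check_examples(source_directory):
    for filename in sorted(os.listdir(source_directory)):
        if not filename.endswith('.py'):
            continue
        path = os.path.join(source_directory, filename)
        # Run the example and capture what it prints.
        proc = subprocess.Popen(['python', path],
                                stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        out, err = proc.communicate()
        # Compare against the stored expected output (example1.out).
        expected = open(path[:-3] + '.out').read()
        print filename, ('OK' if out == expected else 'DIFFERS')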
Q: Python regular expression for multiple tags I would like to know how to retrieve all results from each <p> tag. import re htmlText = '<p data="5" size="4">item1</p><p size="4">item2</p><p size="4">item3</p>' print re.match('<p[^>]*size="[0-9]">(.*?)</p>', htmlText).groups() result: ('item1', ) what I need: ('item1', 'item2', 'item3') A: For this type of problem, it is recommended to use a DOM parser, not regex. I've seen Beautiful Soup frequently recommended for Python A: Beautiful soup is definitely the way to go with a problem like this. The code is cleaner and easier to read. Once you have it installed, getting all the tags looks something like this. from BeautifulSoup import BeautifulSoup import urllib2 def getTags(tag): f = urllib2.urlopen("http://cnn.com") soup = BeautifulSoup(f.read()) return soup.findAll(tag) if __name__ == '__main__': tags = getTags('p') for tag in tags: print(tag.contents) This will print out all the values of the p tags. A: The regex answer is extremely fragile. Here's proof (and a working BeautifulSoup example). from BeautifulSoup import BeautifulSoup # Here's your HTML html = '<p data="5" size="4">item1</p><p size="4">item2</p><p size="4">item3</p>' # Here's some simple HTML that breaks your accepted # answer, but doesn't break BeautifulSoup. # For each example, the regex will ignore the first <p> tag. html2 = '<p size="4" data="5">item1</p><p size="4">item2</p><p size="4">item3</p>' html3 = '<p data="5" size="4" >item1</p><p size="4">item2</p><p size="4">item3</p>' html4 = '<p data="5" size="12">item1</p><p size="4">item2</p><p size="4">item3</p>' # This BeautifulSoup code works for all the examples. paragraphs = BeautifulSoup(html).findAll('p') items = [''.join(p.findAll(text=True)) for p in paragraphs] Use BeautifulSoup. A: You can use re.findall like this: import re html = '<p data="5" size="4">item1</p><p size="4">item2</p><p size="4">item3</p>' print re.findall('<p[^>]*size="[0-9]">(.*?)</p>', html) # This prints: ['item1', 'item2', 'item3'] Edit: ...but as the many commenters have pointed out, using regular expressions to parse HTML is usually a bad idea. A: Alternatively, xml.dom.minidom will parse your HTML if, ...it is wellformed ...you embed it in a single root element. E.g., >>> import xml.dom.minidom >>> htmlText = '<p data="5" size="4">item1</p><p size="4">item2</p><p size="4">item3</p>' >>> d = xml.dom.minidom.parseString('<not_p>%s</not_p>' % htmlText) >>> tuple(map(lambda e: e.firstChild.wholeText, d.firstChild.childNodes)) ('item1', 'item2', 'item3')
Python regular expression for multiple tags
I would like to know how to retrieve all results from each <p> tag. import re htmlText = '<p data="5" size="4">item1</p><p size="4">item2</p><p size="4">item3</p>' print re.match('<p[^>]*size="[0-9]">(.*?)</p>', htmlText).groups() result: ('item1', ) what I need: ('item1', 'item2', 'item3')
[ "For this type of problem, it is recommended to use a DOM parser, not regex.\nI've seen Beautiful Soup frequently recommended for Python\n", "Beautiful soup is definitely the way to go with a problem like this. The code is cleaner and easier to read. Once you have it installed, getting all the tags looks something like this.\nfrom BeautifulSoup import BeautifulSoup\nimport urllib2\n\ndef getTags(tag):\n f = urllib2.urlopen(\"http://cnn.com\")\n soup = BeautifulSoup(f.read())\n return soup.findAll(tag)\n\n\nif __name__ == '__main__':\n tags = getTags('p')\n for tag in tags: print(tag.contents)\n\nThis will print out all the values of the p tags.\n", "The regex answer is extremely fragile. Here's proof (and a working BeautifulSoup example).\nfrom BeautifulSoup import BeautifulSoup\n\n# Here's your HTML\nhtml = '<p data=\"5\" size=\"4\">item1</p><p size=\"4\">item2</p><p size=\"4\">item3</p>'\n\n# Here's some simple HTML that breaks your accepted \n# answer, but doesn't break BeautifulSoup.\n# For each example, the regex will ignore the first <p> tag.\nhtml2 = '<p size=\"4\" data=\"5\">item1</p><p size=\"4\">item2</p><p size=\"4\">item3</p>'\nhtml3 = '<p data=\"5\" size=\"4\" >item1</p><p size=\"4\">item2</p><p size=\"4\">item3</p>'\nhtml4 = '<p data=\"5\" size=\"12\">item1</p><p size=\"4\">item2</p><p size=\"4\">item3</p>'\n\n# This BeautifulSoup code works for all the examples.\nparagraphs = BeautifulSoup(html).findAll('p')\nitems = [''.join(p.findAll(text=True)) for p in paragraphs]\n\nUse BeautifulSoup.\n", "You can use re.findall like this:\nimport re\nhtml = '<p data=\"5\" size=\"4\">item1</p><p size=\"4\">item2</p><p size=\"4\">item3</p>'\nprint re.findall('<p[^>]*size=\"[0-9]\">(.*?)</p>', html)\n# This prints: ['item1', 'item2', 'item3']\n\nEdit: ...but as the many commenters have pointed out, using regular expressions to parse HTML is usually a bad idea.\n", "Alternatively, xml.dom.minidom will parse your HTML if,\n\n...it is wellformed\n...you embed it in a single root element.\n\nE.g.,\n>>> import xml.dom.minidom\n>>> htmlText = '<p data=\"5\" size=\"4\">item1</p><p size=\"4\">item2</p><p size=\"4\">item3</p>'\n>>> d = xml.dom.minidom.parseString('<not_p>%s</not_p>' % htmlText)\n>>> tuple(map(lambda e: e.firstChild.wholeText, d.firstChild.childNodes))\n('item1', 'item2', 'item3')\n\n" ]
[ 11, 5, 5, 2, 2 ]
[]
[]
[ "html", "python", "regex" ]
stackoverflow_0000972749_html_python_regex.txt
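For completeness, the standard library's HTMLParser module offers a middle ground between the regex and Beautiful Soup answers; this is an untested sketch and the class name is made up:

from HTMLParser import HTMLParser

class PTextCollector(HTMLParser):
    # Accumulates the text found inside each <p>...</p> pair.
    def __init__(self):
        HTMLParser.__init__(self)
        self.in_p = False
        self.items = []
    def handle_starttag(self, tag, attrs):
        if tag == 'p':
            self.in_p = True
            self.items.append('')
    def handle_endtag(self, tag):
        if tag == 'p':
            self.in_p = False
    def handle_data(self, data):
        if self.in_p:
            self.items[-1] += data

parser = PTextCollector()
parser.feed('<p data="5" size="4">item1</p><p size="4">item2</p><p size="4">item3</p>')
print tuple(parser.items)  # ('item1', 'item2', 'item3')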
Q: writing to a file via FTP in python So i've followed the docs on this page: http://docs.python.org/library/ftplib.html#ftplib.FTP.retrbinary And maybe i'm confused just as to what 'retrbinary' does...i'm thinking it retrieves a binary file and from there i can open it and write out to that file. here's the line that is giving me problems... ftp.retrbinary('RETR temp.txt',open('temp.txt','wb').write) what i don't understand is i'd like to write out to temp.txt, so i was trying ftp.retrbinary('RETR temp.txt',open('temp.txt','wb').write('some new txt')) but i was getting errors, i'm able to make a FTP connection, do pwd(), cwd(), rename(), etc. p.s. i'm trying to google this as much as possible, thanks! A: It looks like the original code should have worked, if you were trying to download a file from the server. The retrbinary command accepts a function object you specify (that is, the name of the function with no () after it); it is called whenever a piece of data (a binary file) arrives. In this case, it will call the write method of the file you opened. This is slightly different than retrlines, because retrlines will assume the data is a text file, and will convert newline characters appropriately (but corrupt, say, images). With further reading it looks like you're trying to write to a file on the server. In that case, you'll need to pass a file object (or some other object with a read method that behaves like a file) to be called by the store function: ftp.storbinary("STOR test.txt", open("file_on_my_computer.txt", "rb")) A: ftp.retrbinary takes its second argument as a callback function. It can be the write method of a file object directly, i.e. open('temp.txt','wb').write, but instead you are calling write yourself. You may supply your own callback and do whatever you want with the data: def mywriter(data): print data ftp.retrbinary('RETR temp.txt', mywriter)
writing to a file via FTP in python
So i've followed the docs on this page: http://docs.python.org/library/ftplib.html#ftplib.FTP.retrbinary And maybe i'm confused just as to what 'retrbinary' does...i'm thinking it retrieves a binary file and from there i can open it and write out to that file. here's the line that is giving me problems... ftp.retrbinary('RETR temp.txt',open('temp.txt','wb').write) what i don't understand is i'd like to write out to temp.txt, so i was trying ftp.retrbinary('RETR temp.txt',open('temp.txt','wb').write('some new txt')) but i was getting errors, i'm able to make a FTP connection, do pwd(), cwd(), rename(), etc. p.s. i'm trying to google this as much as possible, thanks!
[ "It looks like the original code should have worked, if you were trying to download a file from the server. The retrbinary command accepts a function object you specify (that is, the name of the function with no () after it); it is called whenever a piece of data (a binary file) arrives. In this case, it will call the write method of the file you opened. This is slightly different than retrlines, because retrlines will assume the data is a text file, and will convert newline characters appropriately (but corrupt, say, images).\nWith further reading it looks like you're trying to write to a file on the server. In that case, you'll need to pass a file object (or some other object with a read method that behaves like a file) to be called by the store function:\nftp.storbinary(\"STOR test.txt\", open(\"file_on_my_computer.txt\", \"rb\"))\n\n", "ftp.retrbinary takes second argument as callback function\nit can be directly write method of file object i.e.open('temp.txt','wb').write\nbut instead you are calling write directly\nyou may supply your own callback and do whatever you want to do with data\ndef mywriter(data):\n print data\nftp.retrbinary('RETR temp.txt', mywriter)\n\n" ]
[ 3, 0 ]
[]
[]
[ "ftp", "ftplib", "python" ]
stackoverflow_0000973551_ftp_ftplib_python.txt
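Putting both answers together, a hedged end-to-end sketch (the host and credentials are placeholders): retrbinary pulls bytes down through a callback, while storbinary pushes any file-like object up, so new text can be uploaded straight from a StringIO without touching the local disk.

from ftplib import FTP
from StringIO import StringIO

ftp = FTP('ftp.example.com')           # hypothetical server
ftp.login('user', 'password')

# Download: the callback (here, f.write) is invoked once per received block.
f = open('temp.txt', 'wb')
try:
    ftp.retrbinary('RETR temp.txt', f.write)
finally:
    f.close()

# Upload: storbinary reads from any object with a read() method.
ftp.storbinary('STOR temp.txt', StringIO('some new txt'))
ftp.quit()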
Q: concurrently iterating through even and odd items of list I have a list of items (which are HTML table rows, extracted with Beautiful Soup) and I need to iterate over the list and get even and odd elements (I mean index) for each loop run. My code looks like this: for top, bottom in izip(table[::2], table[1::2]): #do something with top #do something else with bottom How to make this code less ugly? Or maybe it is a good way to do this? EDIT: table[1::2], table[::2] => table[::2], table[1::2] A: izip is a pretty good option, but here's a few alternatives since you're unhappy with it: >>> def chunker(seq, size): ... return (tuple(seq[pos:pos+size]) for pos in xrange(0, len(seq), size)) ... >>> x = range(11) >>> x [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10] >>> chunker(x, 2) <generator object <genexpr> at 0x00B44328> >>> list(chunker(x, 2)) [(0, 1), (2, 3), (4, 5), (6, 7), (8, 9), (10,)] >>> list(izip(x[1::2], x[::2])) [(1, 0), (3, 2), (5, 4), (7, 6), (9, 8)] As you can see, this has the advantage of properly handling an uneven amount of elements, which may or may not be important to you. There's also this recipe from the itertools documentation itself: >>> def grouper(n, iterable, fillvalue=None): ... "grouper(3, 'ABCDEFG', 'x') --> ABC DEF Gxx" ... args = [iter(iterable)] * n ... return izip_longest(fillvalue=fillvalue, *args) ... >>> >>> from itertools import izip_longest >>> list(grouper(2, x)) [(0, 1), (2, 3), (4, 5), (6, 7), (8, 9), (10, None)] A: Try: def alternate(i): i = iter(i) while True: yield(i.next(), i.next()) >>> list(alternate(range(10))) [(0, 1), (2, 3), (4, 5), (6, 7), (8, 9)] This solution works on any sequence, not just lists, and doesn't copy the sequence (it will be far more efficient if you only want the first few elements of a long sequence). A: Looks good. My only suggestion would be to wrap this in a function or method. That way, you can give it a name (evenOddIter()) which makes it much more readable.
concurrently iterating through even and odd items of list
I have a list of items (which are HTML table rows, extracted with Beautiful Soup) and I need to iterate over the list and get even and odd elements (I mean index) for each loop run. My code looks like this: for top, bottom in izip(table[::2], table[1::2]): #do something with top #do something else with bottom How to make this code less ugly? Or maybe it is a good way to do this? EDIT: table[1::2], table[::2] => table[::2], table[1::2]
[ "izip is a pretty good option, but here's a few alternatives since you're unhappy with it:\n>>> def chunker(seq, size):\n... return (tuple(seq[pos:pos+size]) for pos in xrange(0, len(seq), size))\n...\n>>> x = range(11)\n>>> x\n[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]\n>>> chunker(x, 2)\n<generator object <genexpr> at 0x00B44328>\n>>> list(chunker(x, 2))\n[(0, 1), (2, 3), (4, 5), (6, 7), (8, 9), (10,)]\n>>> list(izip(x[1::2], x[::2]))\n[(1, 0), (3, 2), (5, 4), (7, 6), (9, 8)]\n\nAs you can see, this has the advantage of properly handling an uneven amount of elements, which may or not be important to you. There's also this recipe from the itertools documentation itself:\n>>> def grouper(n, iterable, fillvalue=None):\n... \"grouper(3, 'ABCDEFG', 'x') --> ABC DEF Gxx\"\n... args = [iter(iterable)] * n\n... return izip_longest(fillvalue=fillvalue, *args)\n...\n>>>\n>>> from itertools import izip_longest\n>>> list(grouper(2, x))\n[(0, 1), (2, 3), (4, 5), (6, 7), (8, 9), (10, None)]\n\n", "Try:\ndef alternate(i):\n i = iter(i)\n while True:\n yield(i.next(), i.next())\n\n>>> list(alternate(range(10)))\n[(0, 1), (2, 3), (4, 5), (6, 7), (8, 9)]\n\nThis solution works on any sequence, not just lists, and doesn't copy the sequence (it will be far more efficient if you only want the first few elements of a long sequence).\n", "Looks good. My only suggestion would be to wrap this in a function or method. That way, you can give it a name (evenOddIter()) which makes it much more readable.\n" ]
[ 5, 4, 0 ]
[]
[]
[ "for_loop", "python", "python_itertools" ]
stackoverflow_0000974219_for_loop_python_python_itertools.txt
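One more variant along the same lines, untested and with an invented name: itertools.islice builds the even and odd streams lazily, so nothing is copied, which may matter for a long list of table rows. Note that rows must be a sequence such as a list, since each islice walks it independently.

from itertools import islice, izip

def pairwise_rows(rows):
    # Pair element 0 with 1, 2 with 3, ... without materialising slices.
    return izip(islice(rows, 0, None, 2), islice(rows, 1, None, 2))

for top, bottom in pairwise_rows(range(10)):
    print top, bottom   # 0 1, then 2 3, and so on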
Q: Syntax Highlighting in Cocoa TextView? Experiences? Suggestions? Ideas? Possible Duplicate: Syntax coloring for Cocoa app I'm interested in syntax highlighting in a Cocoa TextView. I found several resources: approach with flex, via a flex pattern matched against textStorageDidProcessEditing in a TextView delegate. In this approach the whole string gets parsed on each input event, hence performance degrades. CocoaDev has its own page on the topic of syntax highlighting: Use NSTextStorageDidProcessEditingNotification, then get the edited range, and just apply the coloring there. The range might be word boundaries or anything; this definitely improves performance. Mentioned there: Xcode, for example, only colorizes text that's currently on-screen, and defers colorizing the rest of the document until you scroll through it. How would one implement this? Use NSLayoutManager – via Temporary attributes [which] are used only for on-screen drawing and are not persistent in any way... as the docs say, but that doesn't color the last edited range, until a whitespace character is entered. Custom Helper like UKSyntaxColoredDocument – nice, but language definition is done via property list; how to use additional/existing language definitions? None of the approaches seem really extensible or robust to me (except the 4. maybe ..). I am aware of robust existing libraries for SH like pygments; and of PyObjC. Question: How would it be possible to use some existing library e.g. like pygments to have extensible and performant syntax highlighting in a Cocoa TextView? Note: I know this question is very broad (and much too long). Experiences and suggestions as well as solutions are welcome. Thanks. Found another similar thread on that matter: Syntax coloring for Cocoa app A: I would suggest taking a look at the source code to Smultron. It has very nice syntax highlighting. It uses a subclass of NSTextView to do most of the heavy lifting. The code uses the layout manager to add attributes to the text and uses some other clever tricks to only highlight as much of the document as necessary.
Syntax Highlighting in Cocoa TextView? Experiences? Suggestions? Ideas?
Possible Duplicate: Syntax coloring for Cocoa app I'm interested in syntax highlighting in a Cocoa TextView. I found several resources: approach with flex, via a flex pattern matched against textStorageDidProcessEditing in a TextView delegate. In this approach the whole string gets parsed on each input event, hence performance degrades. CocoaDev has its own page on the topic of syntax highlighting: Use NSTextStorageDidProcessEditingNotification, then get the edited range, and just apply the coloring there. The range might be word boundaries or anything; this definitely improves performance. Mentioned there: Xcode, for example, only colorizes text that's currently on-screen, and defers colorizing the rest of the document until you scroll through it. How would one implement this? Use NSLayoutManager – via Temporary attributes [which] are used only for on-screen drawing and are not persistent in any way... as the docs say, but that doesn't color the last edited range, until a whitespace character is entered. Custom Helper like UKSyntaxColoredDocument – nice, but language definition is done via property list; how to use additional/existing language definitions? None of the approaches seem really extensible or robust to me (except the 4. maybe ..). I am aware of robust existing libraries for SH like pygments; and of PyObjC. Question: How would it be possible to use some existing library e.g. like pygments to have extensible and performant syntax highlighting in a Cocoa TextView? Note: I know this question is very broad (and much too long). Experiences and suggestions as well as solutions are welcome. Thanks. Found another similar thread on that matter: Syntax coloring for Cocoa app
[ "I would suggest taking a look at the source code to Smultron. It has very nice syntax highlighting. It uses a subclass of NSTextView to do most of the heavy lifting. The code uses the layout manager to add attributes to the text and uses some other clever tricks to only highlight as much of the document as necessary.\n" ]
[ 7 ]
[]
[]
[ "cocoa", "objective_c", "python", "syntax_highlighting", "textview" ]
stackoverflow_0000972812_cocoa_objective_c_python_syntax_highlighting_textview.txt
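Since the question mentions PyObjC, here is a loose, untested sketch of approach 2 (recolor only the edited line); the keyword pattern is a toy stand-in for a real language definition, and the class name is invented.

import re
from Foundation import NSObject
from AppKit import NSColor, NSForegroundColorAttributeName

KEYWORDS = re.compile(r'\b(def|class|import|return)\b')   # toy rules only

class HighlighterDelegate(NSObject):
    def textStorageDidProcessEditing_(self, notification):
        storage = notification.object()
        # Restrict work to the line(s) touched by the edit.
        line = storage.string().lineRangeForRange_(storage.editedRange())
        storage.removeAttribute_range_(NSForegroundColorAttributeName, line)
        text = storage.string().substringWithRange_(line)
        for m in KEYWORDS.finditer(text):
            storage.addAttribute_value_range_(
                NSForegroundColorAttributeName, NSColor.blueColor(),
                (line.location + m.start(), m.end() - m.start()))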
Q: Cleaning up an internal pysqlite connection on object destruction I have an object with an internal database connection that's active throughout its lifetime. At the end of the program's run, the connection has to be committed and closed. So far I've used an explicit close method, but this is somewhat cumbersome, especially when exceptions can happen in the calling code. I'm considering using the __del__ method for closing, but after some reading online I have concerns. Is this a valid usage pattern? Can I be sure that the internal resources will be freed in __del__ correctly? This discussion raised a similar question but found no satisfactory answer. I don't want to have an explicit close method, and using with isn't an option, because my object isn't used as simply as open-play-close, but is kept as a member of another, larger object, that uses it while running in a GUI. C++ has perfectly working destructors where one can free resources safely, so I would imagine Python has something agreed-upon too. For some reason it seems not to be the case, and many in the community vow against __del__. What's the alternative, then? A: Read up on the with statement. You're describing its use case. You'll need to wrap your connection in a "Context Manager" class that handles the __enter__ and __exit__ methods used by the with statement. See PEP 343 for more information. Edit "my object isn't used as simply as open-play-close, but is kept as a member of another, larger object" class AnObjectWhichMustBeClosed( object ): def __enter__( self ): # acquire def __exit__( self, type, value, traceback ): # release def open( self, dbConnectionInfo ): # open the connection, updating the state for __exit__ to handle. class ALargerObject( object ): def __init__( self ): pass def injectTheObjectThatMustBeClosed( self, anObject ): self.useThis = anObject class MyGuiApp( object ): def run( self ): # build GUI objects large = ALargerObject() with AnObjectWhichMustBeClosed() as x: large.injectTheObjectThatMustBeClosed( x ) mainLoop() Some folks call this "Dependency Injection" and "Inversion of Control". Other folks call this the Strategy pattern. The "ObjectThatMustBeClosed" is a strategy, plugged into some larger object. The assembly is created at a top-level of the GUI app, since that's usually where resources like databases are acquired. A: You can make a connection module, since modules keep the same object in the whole application, and register a function to close it with the atexit module # db.py: import sqlite3 import atexit con = None def get_connection(): global con if not con: con = sqlite3.connect('somedb.sqlite') atexit.register(close_connection, con) return con def close_connection(some_con): some_con.commit() some_con.close() # your_program.py import db con = db.get_connection() cur = con.cursor() cur.execute("SELECT ...") This suggestion is based on the assumption that the connection in your application seems like a single instance (singleton) which a module global provides well. If that's not the case, then you can use a destructor. However destructors don't go well with garbage collectors and circular references (you must remove the circular reference yourself before the destructor is called) and if that's not the case (you need multiple connections) then you can go for a destructor. Just don't keep circular references around or you'll have to break them yourself. Also, what you said about C++ is wrong. If you use destructors in C++ they are called either when the block that defines the object finishes (like python's with) or when you use the delete keyword (that deallocates an object created with new). Outside that you must use an explicit close() that is not the destructor. So it is just like python - python is even "better" because it has a garbage collector.
Cleaning up an internal pysqlite connection on object destruction
I have an object with an internal database connection that's active throughout its lifetime. At the end of the program's run, the connection has to be committed and closed. So far I've used an explicit close method, but this is somewhat cumbersome, especially when exceptions can happen in the calling code. I'm considering using the __del__ method for closing, but after some reading online I have concerns. Is this a valid usage pattern? Can I be sure that the internal resources will be freed in __del__ correctly? This discussion raised a similar question but found no satisfactory answer. I don't want to have an explicit close method, and using with isn't an option, because my object isn't used as simply as open-play-close, but is kept as a member of another, larger object, that uses it while running in a GUI. C++ has perfectly working destructors where one can free resources safely, so I would imagine Python has something agreed-upon too. For some reason it seems not to be the case, and many in the community vow against __del__. What's the alternative, then?
[ "Read up on the with statement. You're describing its use case.\nYou'll need to wrap your connection in a \"Context Manager\" class that handles the __enter__ and __exit__ methods used by the with statement.\nSee PEP 343 for more information.\n\nEdit\n\"my object isn't used as simply as open-play-close, but is kept as a member of another, larger object\"\nclass AnObjectWhichMustBeClosed( object ):\n def __enter__( self ):\n # acquire\n def __exit__( self, type, value, traceback ):\n # release\n def open( self, dbConnectionInfo ):\n # open the connection, updating the state for __exit__ to handle.\n\nclass ALargerObject( object ):\n def __init__( self ):\n pass\n def injectTheObjectThatMustBeClosed( self, anObject ):\n self.useThis = anObject\n\nclass MyGuiApp( self ):\n def run( self ):\n # build GUI objects\n large = ALargeObject()\n with AnObjectWhichMustBeClosed() as x:\n large.injectTheObjectThatMustBeClosed( x )\n mainLoop()\n\nSome folks call this \"Dependency Injection\" and \"Inversion of Control\". Other folks call this the Strategy pattern. The \"ObjectThatMustBeClosed\" is a strategy, plugged into some larger object. The assembly is created at a top-level of the GUI app, since that's usually where resources like databases are acquired.\n", "You can make a connection module, since modules keep the same object in the whole application, and register a function to close it with the atexit module\n# db.py:\nimport sqlite3\nimport atexit\n\ncon = None\n\ndef get_connection():\n global con\n if not con:\n con = sqlite3.connect('somedb.sqlite')\n atexit.register(close_connection, con)\n return con\n\ndef close_connection(some_con):\n some_con.commit()\n some_con.close()\n\n# your_program.py\nimport db\ncon = db.get_connection()\ncur = con.cursor()\ncur.execute(\"SELECT ...\")\n\nThis sugestion is based on the assumption that the connection in your application seems like a single instance (singleton) which a module global provides well.\nIf that's not the case, then you can use a destructor.\nHowever destructors don't go well with garbage collectors and circular references (you must remove the circular reference yourself before the destructor is called) and if that's not the case (you need multiple connections) then you can go for a destructor. Just don't keep circular references around or you'll have to break them yourself.\nAlso, what you said about C++ is wrong. If you use destructors in C++ they are called either when the block that defines the object finishes (like python's with) or when you use the delete keyword (that deallocates an object created with new). Outside that you must use an explicit close() that is not the destructor. So it is just like python - python is even \"better\" because it has a garbage collector.\n" ]
[ 8, 6 ]
[]
[]
[ "destructor", "pysqlite", "python" ]
stackoverflow_0000974813_destructor_pysqlite_python.txt
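If the owner object cannot use with directly, the standard library still gives deterministic cleanup without __del__: contextlib.closing (Python 2.5+) wraps anything that has a close() method. A small sketch with a placeholder query:

import sqlite3
from contextlib import closing

with closing(sqlite3.connect('somedb.sqlite')) as con:
    cur = con.cursor()
    cur.execute("SELECT 1")   # placeholder for real work
    con.commit()
# con.close() has run here, even if the block raised an exception.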
Q: Multiply operator applied to list(data structure) I'm reading How to think like a computer scientist which is an introductory text for "Python Programming". I want to clarify the behaviour of multiply operator (*) when applied to lists. Consider the function make_matrix def make_matrix(rows, columns): """ >>> make_matrix(4, 2) [[0, 0], [0, 0], [0, 0], [0, 0]] >>> m = make_matrix(4, 2) >>> m[1][1] = 7 >>> m [[0, 0], [0, 7], [0, 0], [0, 0]] """ return [[0] * columns] * rows The actual output is [[0, 7], [0, 7], [0, 7], [0, 7]] The correct version of make_matrix is : def make_matrix(rows, columns): """ >>> make_matrix(3, 5) [[0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0]] >>> make_matrix(4, 2) [[0, 0], [0, 0], [0, 0], [0, 0]] >>> m = make_matrix(4, 2) >>> m[1][1] = 7 >>> m [[0, 0], [0, 7], [0, 0], [0, 0]] """ matrix = [] for row in range(rows): matrix += [[0] * columns] return matrix The reason why first version of make_matrix fails ( as explained in the book at 9.8 ) is that ...each row is an alias of the other rows... I wonder why [[0] * columns] * rows causes ...each row is an alias of the other rows... but not [[0] * columns] i.e. why each [0] in a row is not an alias of other row element. A: EVERYTHING in python is an object, and python never makes copies unless explicitly asked to do so. When you do innerList = [0] * 10 you create a list with 10 elements, all of them referring to the same int object 0. Since integer objects are immutable, when you do innerList[1] = 15 You are changing the second element of the list so that it refers to another integer 15. That always works because of int objects immutability. That's why outerList = innerList * 5 Will create a list object with 5 elements, each one is a reference to the same innerList just as above. But since list objects are mutable: outerList[2].append('something') Is the same as: innerList.append('something') Because they are two references to the same list object. So the element ends up in that single list. It appears to be duplicated, but the fact is that there is only one list object, and many references to it. By contrast if you do outerList[1] = outerList[1] + ['something'] Here you are creating another list object (using + with lists is an explicit copy), and assigning a reference to it into the second position of outerList. If you "append" the element this way (not really appending, but creating another list), innerList will be unaffected.
Multiply operator applied to list(data structure)
I'm reading How to think like a computer scientist which is an introductory text for "Python Programming". I want to clarify the behaviour of multiply operator (*) when applied to lists. Consider the function make_matrix def make_matrix(rows, columns): """ >>> make_matrix(4, 2) [[0, 0], [0, 0], [0, 0], [0, 0]] >>> m = make_matrix(4, 2) >>> m[1][1] = 7 >>> m [[0, 0], [0, 7], [0, 0], [0, 0]] """ return [[0] * columns] * rows The actual output is [[0, 7], [0, 7], [0, 7], [0, 7]] The correct version of make_matrix is : def make_matrix(rows, columns): """ >>> make_matrix(3, 5) [[0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0]] >>> make_matrix(4, 2) [[0, 0], [0, 0], [0, 0], [0, 0]] >>> m = make_matrix(4, 2) >>> m[1][1] = 7 >>> m [[0, 0], [0, 7], [0, 0], [0, 0]] """ matrix = [] for row in range(rows): matrix += [[0] * columns] return matrix The reason why first version of make_matrix fails ( as explained in the book at 9.8 ) is that ...each row is an alias of the other rows... I wonder why [[0] * columns] * rows causes ...each row is an alias of the other rows... but not [[0] * columns] i.e. why each [0] in a row is not an alias of other row element.
[ "EVERYTHING in python are objects, and python never makes copies unless explicity asked to do so.\nWhen you do \ninnerList = [0] * 10\n\nyou create a list with 10 elements, all of them refering to the same int object 0.\nSince integer objects are immutable, when you do\ninnerList[1] = 15\n\nYou are changing the second element of the list so that it refers to another integer 15. That always works because of int objects immutability.\nThat's why\nouterList = innerList * 5\n\nWill create a list object with 5 elements, each one is a reference to the same innerList just as above. But since list objects are mutable:\nouterList[2].append('something')\n\nIs the same as:\ninnerList.append('something')\n\nBecause they are two references to the same list object. So the element ends up in that single list. It appears to be duplicated, but the fact is that there is only one list object, and many references to it.\nBy contrast if you do\nouterList[1] = outerList[1] + ['something']\n\nHere you are creating another list object (using + with lists is an explicit copy), and assigning a reference to it into the second position of outerList. If you \"append\" the element this way (not really appending, but creating another list), innerList will be unaffected.\n" ]
[ 20 ]
[ "lists are not primitives, they are passed by reference. A copy of a list is a pointer to a list (in C jargon). Anything you do to the list happens to all copies of the list and the copies of its contents unless you do a shallow copy.\n[[0] * columns] * rows\n\nOops, we've just made a big list of pointers to [0]. Change one and you change them all.\nIntegers are not passed by reference, they are really copied, therefore [0] * contents is really making lots of NEW 0's and appending them to the list.\n" ]
[ -4 ]
[ "list", "multiplication", "python", "python_datamodel", "shallow_copy" ]
stackoverflow_0000974931_list_multiplication_python_python_datamodel_shallow_copy.txt
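A short demonstration of the point above, using is to make the aliasing visible; the list-comprehension form evaluates [0] * columns once per row and so avoids the problem:

rows, columns = 4, 2

matrix = [[0] * columns for _ in range(rows)]   # a distinct row object each time
matrix[1][1] = 7
print matrix                    # [[0, 0], [0, 7], [0, 0], [0, 0]]

aliased = [[0] * columns] * rows                # one row object, many references
print aliased[0] is aliased[1]  # True
print matrix[0] is matrix[1]    # False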
Q: python not starting properly I have installed python and django in my system that uses win vista. Now when I go to command prompt and type python or django-admin.py both are not working. Every time I need to set the path to the python folder manually. But i have seen these commands running even without setting path. So how do i make it to run properly? A: You probably need to add Python to your DOS path. Here's a video that may help you out: http://showmedo.com/videotutorials/video?name=960000&fromSeriesID=96 A: you can't run a command that isn't in your path. it should be set globally when you installed python. type 'set' at a dos prompt and look at the PATH variable. c:\python25 (or wherever you installed python) has to be in that variable ie PATH=c:\windows;c:\python25;... etc if it isn't in PATH then you or the installer missed the part where you needed to set it. It can be done from the 'Environment Variables' button in the 'System' control panel. A: Either use the system control panel to set the PATH environment variable that applies permanently or Reinstall Python as a system administrator so that the installer can set the registry and environment variables for you. If you install the "just for me" option, then you have to set the PATH variable in the control panel. A: In your path, I think you need to have both the location of the Python install and the Python\Scripts folder. For example, on XP, I have C:\Python25;C:\Python25\Scripts. Can you verify that you have both?
python not starting properly
I have installed python and django in my system that uses win vista. Now when I go to command prompt and type python or django-admin.py both are not working. Every time I need to set the path to the python folder manually. But i have seen these commands running even without setting path. So how do i make it to run properly?
[ "You probably need to add Python to you dos path. Here's a video that may help you out:\nhttp://showmedo.com/videotutorials/video?name=960000&fromSeriesID=96\n", "you can't run a command that isn't in your path. it should be set globally when you installed python.\ntype 'set' at a dos prompt and look at the PATH variable. c:\\python25 (or whever you installed python) has to be in that variable ie PATH=c:\\windows;c:\\python25;... etc \nif it isn't in PATH then you or the installer missed the part where you needed to set it. It can be done from the 'Environment Variables' button in the 'System' control panel.\n", "Either use the system control panel to set the PATH environment variable that applies permanently or\nReinstall Python as a system administrator so that the installer can set the registry and environment variables for you.\nIf you install the \"just for me\" option, then you have to set the PATH variable in the control panel.\n", "In your path, I think you need to have both the location of the Python install and the Python\\Scripts folder. For example, on XP, I have C:\\Python25;C:\\Python25\\Scripts. Can you verify that you have both?\n" ]
[ 2, 1, 0, 0 ]
[]
[]
[ "django_admin", "python", "windows" ]
stackoverflow_0000974821_django_admin_python_windows.txt
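Once the interpreter can be started at all (for instance via its full path), a quick way to confirm what the answers describe is to inspect PATH from inside Python; the install locations shown in the comments are only examples and depend on where Python was installed.

import os
import sys

print sys.executable                 # e.g. C:\Python25\python.exe
for entry in os.environ.get('PATH', '').split(os.pathsep):
    print entry                      # C:\Python25 and C:\Python25\Scripts should be listed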
Q: python - match on array return value I want to do a functional like pattern match to get the first two elements, and then the rest of an array return value. For example, assume that perms(x) returns a list of values, and I want to do this: seq=perms(x) a = seq[0] b = seq[1] rest = seq[2:] Of course I can shorten to: [a,b] = seq[0:2] rest = seq[2:] Can I use some notation to do this? [a,b,more] = perms(x) or conceptually: [a,b,more..] = perms(x) PROLOG & functional languages do list decomposition so nicely like this! A: You can do it in Python 3 like this: (a, b, *rest) = seq See the extended iterable unpacking PEP for more details. A: In python 2, your question is very close to an answer already: a, b, more = (seq[0], seq[1], seq[2:]) or: (a, b), more = (seq[0:2], seq[2:]) A: For Python 2, I know you can do it with a function: >>> def getValues(a, b, *more): return a, b, more >>> seq = [1,2,3,4,5] >>> a, b, more = getValues(*seq) >>> a 1 >>> b 2 >>> more (3, 4, 5) But not sure if there's any way of doing it like Ayman's Python 3 suggestion A: Very nice, thanks. The suggestions where one dissects the array on the right-hand side don't work so well for me, as I actually wanted to pattern match on the returns from a generator expression. for (a, b, more) in perms(seq): ... I like the P3 solution, but have to wait for Komodo to support it!
python - match on array return value
I want to do a functional like pattern match to get the first two elements, and then the rest of an array return value. For example, assume that perms(x) returns a list of values, and I want to do this: seq=perms(x) a = seq[0] b = seq[1] rest = seq[2:] Of course I can shorten to: [a,b] = seq[0:2] rest = seq[2:] Can I use some notation to do this? [a,b,more] = perms(x) or conceptually: [a,b,more..] = perms(x) PROLOG & functional languages do list decomposition so nicely like this!
[ "You can do it in Python 3 like this:\n(a, b, *rest) = seq\n\nSee the extended iterable unpacking PEP for more details.\n", "In python 2, your question is very close to an answer already:\na, b, more = (seq[0], seq[1], seq[2:])\n\nor:\n(a, b), more = (seq[0:2], seq[2:])\n\n", "For Python 2, I know you can do it with a function:\n>>> def getValues(a, b, *more):\n return a, b, more\n\n>>> seq = [1,2,3,4,5]\n>>> a, b, more = getValues(*seq)\n>>> a\n1\n>>> b\n2\n>>> more\n(3, 4, 5)\n\nBut not sure if there's any way of doing it like Ayman's Python 3 suggestion\n", "Very nice, thanks.\nThe suggestions where one dissects the array on the fight-hand side don't work so well for me, as I actually wanted to pattern match on the returns from a generator expression.\nfor (a, b, more) in perms(seq): ...\nI like the P3 solution, but have to wait for Komodo to support it!\n" ]
[ 6, 3, 2, 0 ]
[]
[]
[ "list", "python" ]
stackoverflow_0000923553_list_python.txt
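For Python 2, a tiny helper in the same spirit as the answers, untested and with an invented name; it works on any iterable, including the generator case mentioned at the end:

def first_two_and_rest(iterable):
    it = iter(iterable)
    return it.next(), it.next(), list(it)

a, b, more = first_two_and_rest([1, 2, 3, 4, 5])
print a, b, more        # 1 2 [3, 4, 5]

# In a loop over a generator of sequences, the slicing version reads:
for seq in ([1, 2, 3], [4, 5, 6]):
    (a, b), rest = seq[:2], seq[2:]
    print a, b, rest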