API, a single connection is opened between your app and the API, with new
results being sent through that connection whenever new matches occur. This
results in a low-latency delivery mechanism that can support very high
-throughput. For futher information, see
+throughput. For further information, see
https://developer.twitter.com/en/docs/tutorials/consuming-streaming-data
Using :class:`Stream`
encountered and ``max_retries``, which defaults to infinite, hasn't been
exceeded yet, the :class:`Stream` instance will attempt to reconnect the stream
after an appropriate amount of time. By default, all three of these methods log
-an error. To customize that handling, they can be overriden in a subclass::
+an error. To customize that handling, they can be overridden in a subclass::
class ConnectionTester(tweepy.Stream):

    def on_connection_error(self):
        self.disconnect()
# Tweet / Update Status
-# The app and the corresponding credentials must have the Write perission
+# The app and the corresponding credentials must have the Write permission
# Check the App permissions section of the Settings tab of your app, under the
# Twitter Developer Portal Projects & Apps page at
#@tape.use_cassette('testfailure.json')
#def testapierror(self):
- # from tweepy.error import TweepError
+ # from tweepy.errors import TweepyException
#
- # with self.assertRaises(TweepError) as cm:
+ # with self.assertRaises(TweepyException) as cm:
# self.api.direct_messages()
#
# reason, = literal_eval(cm.exception.reason)
try:
self.api.user_timeline(user_id=user_id, count=1, include_rts=True)
except HTTPException as e:
- # continue if we're not autherized to access the user's timeline or user doesn't exist anymore
+ # continue if we're not authorized to access the user's timeline or user doesn't exist anymore
if e.response.status_code in (401, 404):
continue
raise e
def count(self):
"""Note: This is not very efficient,
- since it retreives all the keys from the redis
+ since it retrieves all the keys from the redis
server to know how many keys we have"""
return len(self.client.smembers(self.keys_container))
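Given that caveat, a cheaper count is possible: Redis's ``SCARD`` command returns a set's cardinality server-side, so no members need to be transferred at all. A sketch under that assumption (``count_via_scard`` is a hypothetical helper; ``client`` stands in for a redis-py connection like the one the cache holds):

```python
def count_via_scard(client, keys_container):
    # SCARD asks the server for the set's size directly, instead of
    # pulling every member back with SMEMBERS and counting locally.
    return client.scard(keys_container)
```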
class MongodbCache(Cache):
- """A simple pickle-based MongoDB cache sytem."""
+ """A simple pickle-based MongoDB cache system."""
def __init__(self, db, timeout=3600, collection='tweepy_cache'):
"""Should receive a "database" cursor from pymongo."""
default to [now - 30 seconds].
granularity : str
This is the granularity that you want the timeseries count data to
- be grouped by. You can requeset ``minute``, ``hour``, or ``day``
+ be grouped by. You can request ``minute``, ``hour``, or ``day``
granularity. The default granularity, if not specified, is ``hour``.
next_token : str
This parameter is used to get the next 'page' of results. The value
parameter.
granularity : str
This is the granularity that you want the timeseries count data to
- be grouped by. You can requeset ``minute``, ``hour``, or ``day``
+ be grouped by. You can request ``minute``, ``hour``, or ``day``
granularity. The default granularity, if not specified, is ``hour``.
since_id : Union[int, str]
Returns results with a Tweet ID greater than (that is, more recent