How Jane Jacobs-y is Your Neighborhood?
In *The Death and Life of Great American Cities*, the great Jane Jacobs lays out four essential characteristics of a great neighborhood:
- Density
- A mix of uses
- A mix of building ages, types and conditions
- A street network of short, connected blocks
Of course, she goes into much greater detail on all of these, but I’m not going to get into all the eyes-on-the-street-level stuff. Instead, I’m going to find neighborhoods with the right “bones” to build great urbanism onto. The caveat to this, as with most geospatial planning tools, is that it is not to be blindly trusted. There are a lot of details that need on-the-ground attention.
On to the data.
工具類 (Tools)
For this project, I’m going to use the following import statement:
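The import statement itself didn’t survive the page conversion. Judging from the names used throughout the code below, it was presumably something along these lines (the aliases are inferred, not confirmed):

```python
import random

import pandas as pd
import geopandas as gpd
import osmnx as ox
import folium
from census import Census
from us import states
from OSMPythonTools.nominatim import Nominatim
from OSMPythonTools.overpass import Overpass, overpassQueryBuilder
from shapely.geometry import Point, Polygon
from IPython.display import clear_output
```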
To start a session with the Census API, you need to give it a key (get one here). I’m also going to start up my OSM tools, define a couple projections, and create some dictionaries for the locations I’m interested in, for convenience:
```python
census_api_key = 'YOUR_CENSUS_API_KEY'  # substitute your own key here
nominatim = Nominatim()
overpass = Overpass()
c = Census(census_api_key)

wgs = 'EPSG:4326'
merc = 'EPSG:3857'

ada = {'state': 'ID', 'county': '001', 'name': 'Ada County, ID'}
king = {'state': 'WA', 'county': '033', 'name': 'King County, WA'}
```

These two counties are interesting because both have seen significant post-war growth and have a broad spectrum of development patterns. As a former Boise resident, I know Ada County well and can provide on-the-ground insights. King County has a robust data platform that will allow for a different set of insights in Part II of this analysis.
Density
We’ll start off easy. The U.S. Census Bureau publishes population estimates regularly, so we just need to put those to some geometry and see how many people live in different areas. The smallest geography available for all the data that I’m going to use is the tract, so that’s what we’ll get.
```python
def get_county_tracts(state, county_code):
    state_shapefile = gpd.read_file(states.lookup(state).shapefile_urls('tract'))
    county_shapefile = state_shapefile.loc[state_shapefile['COUNTYFP10'] == county_code]
    return county_shapefile
```

Now that I have the geography, I just need to get the population to calculate the density. The Census table for that is ‘B01003_001E,’ obviously. Here’s the function for querying that table by county:
```python
def get_tract_population(state, county_code):
    population = pd.DataFrame(c.acs5.state_county_tract(
        'B01003_001E',
        states.lookup(state).fips,
        '{}'.format(county_code),
        Census.ALL))
    population.rename(columns={'B01003_001E': 'Total Population'}, inplace=True)
    population = population.loc[population['Total Population'] != 0]
    return population
```

Now that we have a dataframe with population, and a geodataframe with tracts, we just need to merge them together:
```python
def geometrize_census_table_tracts(state, county_code, table, densityColumn=None,
                                   left_on='TRACTCE10', right_on='tract'):
    tracts = get_county_tracts(state, county_code)
    geometrized_tracts = tracts.merge(table, left_on=left_on, right_on=right_on)
    if densityColumn:
        # ALAND10 is land area in square meters; 2,589,988.1103 m^2 = 1 square mile
        geometrized_tracts['Density'] = geometrized_tracts[densityColumn] / (
            geometrized_tracts['ALAND10'] / 2589988.1103)
    return geometrized_tracts
```

This function is a little more generalized so that we can add geometries to other data besides population, as we’ll see later.
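As a quick sanity check of that unit conversion (the constant 2,589,988.1103 is the number of square meters in a square mile; the tract values here are made up):

```python
SQ_METERS_PER_SQ_MILE = 2589988.1103

population = 5000                        # hypothetical tract population
aland_sq_m = 2 * SQ_METERS_PER_SQ_MILE   # hypothetical tract: exactly 2 sq mi of land

# Same arithmetic as the Density column above
density = population / (aland_sq_m / SQ_METERS_PER_SQ_MILE)
print(density)  # 2500.0 people per square mile
```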
Now we can simply call our function and plot the results:
```python
ada_pop_tracts = geometrize_census_table_tracts(
    ada['state'], ada['county'],
    get_tract_population(ada['state'], ada['county']),
    'Total Population')
ada_density_plot = ada_pop_tracts.plot(column='Density', legend=True, figsize=(17, 11))

king_pop_tracts = geometrize_census_table_tracts(
    king['state'], king['county'],
    get_tract_population(king['state'], king['county']),
    'Total Population')
king_pop_tracts.plot(column='Density', legend=True, figsize=(17, 11))
```

Mix of Building Ages
The next most complicated search is to find a variety of building ages within each tract. Luckily, the Census has some data that’s close enough. They track the age of housing within tracts by decade of construction. To start, we’ll make a dictionary out of these table names:
```python
housing_tables = {
    'pre_39':    'B25034_011E',
    '1940-1949': 'B25034_010E',
    '1950-1959': 'B25034_009E',
    '1960-1969': 'B25034_008E',
    '1970-1979': 'B25034_007E',
    '1980-1989': 'B25034_006E',
    '1990-1999': 'B25034_005E',
    '2000-2009': 'B25034_004E',
}
```

Next, create a function to combine all of these into a single dataframe. Since the Jane-Jacobsy-est tracts will be closest to equal across each decade, the easy metric for this is going to be the standard deviation, with the lowest being best:
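To see why a low standard deviation flags an even mix, here’s a toy example before the real function. The column names follow housing_tables, but the unit counts are entirely made up: tract A is evenly mixed across decades, tract C was built almost entirely in one decade.

```python
import pandas as pd

# Hypothetical housing-unit counts by decade built, for three tracts
housing = pd.DataFrame({
    'pre_39':    [50, 10,   0],
    '1950-1959': [55, 90,   5],
    '1980-1989': [45, 20, 300],
    '2000-2009': [50, 15,   5],
}, index=['A', 'B', 'C'])

# Lower standard deviation across the decade columns = more even mix
housing['Standard Deviation'] = housing.std(axis=1)
print(housing['Standard Deviation'].idxmin())  # prints: A
```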
```python
def get_housing_age_diversity(state, county):
    cols = list(housing_tables.keys())
    cols.insert(0, 'TRACTCE10')
    cols.insert(1, 'geometry')
    out = get_county_tracts(state, county)
    for key, value in housing_tables.items():
        out = out.merge(
            pd.DataFrame(c.acs5.state_county_tract(
                value, states.lookup(state).fips, county, Census.ALL)),
            left_on='TRACTCE10', right_on='tract')
        out.rename(columns={value: key}, inplace=True)
    out = out[cols]
    # Standard deviation across the decade columns only
    out['Standard Deviation'] = out[list(housing_tables.keys())].std(axis=1)
    return out
```

Again, we simply call our function and plot the results:
```python
ada_housing = get_housing_age_diversity(ada['state'], ada['county'])
ada_housing.plot(column='Standard Deviation', legend=True, figsize=(17, 11))

king_housing = get_housing_age_diversity(king['state'], king['county'])
king_housing.plot(column='Standard Deviation', legend=True, figsize=(17, 11))
```

A Network of Short, Interconnected Blocks
Now we start getting complicated. Luckily, we can get a head start thanks to the osmnx Python package. We’ll use the graph_from_polygon function to get the street network within each Census tract, then the basic_stats function to get the average street length and the average number of streets per intersection (or “node,” in network-analysis terms). However, before we do that, we need to fix one problem with our networks: OpenStreetMap counts parking-lot drive aisles as part of the street network, which is going to skew our results, as these tend to be relatively short and, at the very least, interconnect in the interior of surface parking lots. To fix this, we’ll query all the parking lots in the county, then exclude them from our tracts to get some Swiss-cheesy tracts. First, the function to query OSM for stuff, generalized since we’ll be using it heavily in the next section:
```python
def osm_query(area, elementType, feature_type, feature_name=None, poly_to_point=True):
    if feature_name:
        q = overpassQueryBuilder(
            area=nominatim.query(area).areaId(), elementType=elementType,
            selector='"{ft}"="{fn}"'.format(ft=feature_type, fn=feature_name),
            out='body', includeGeometry=True)
    else:
        q = overpassQueryBuilder(
            area=nominatim.query(area).areaId(), elementType=elementType,
            selector='"{ft}"'.format(ft=feature_type),
            out='body', includeGeometry=True)
    if len(overpass.query(q).toJSON()['elements']) > 0:
        out = pd.DataFrame(overpass.query(q).toJSON()['elements'])
        if elementType == 'node':
            out = gpd.GeoDataFrame(
                out, geometry=gpd.points_from_xy(out['lon'], out['lat']), crs=wgs)
            out = out.to_crs(merc)
        if elementType == 'way':
            geometry = []
            for i in out.geometry:
                geo = osm_way_to_polygon(i)
                geometry.append(geo)
            out.geometry = geometry
            out = gpd.GeoDataFrame(out, crs=wgs)
            out = out.to_crs(merc)
            if poly_to_point:
                out.geometry = out.geometry.centroid
        # Expand the OSM tags dictionary into columns (e.g. 'name')
        out = pd.concat(
            [out.drop(['tags'], axis=1), out['tags'].apply(pd.Series)], axis=1)
        if elementType == 'relation':
            out = pd.concat(
                [out.drop(['members'], axis=1),
                 out['members'].apply(pd.Series)[0].apply(pd.Series)], axis=1)
            geometry = []
            for index, row in out.iterrows():
                row['geometry'] = osm_way_to_polygon(row['geometry'])
                geometry.append(row['geometry'])
            out.geometry = geometry
            out = gpd.GeoDataFrame(out, crs=wgs)
            out = out.to_crs(merc)
            if poly_to_point:
                out.geometry = out.geometry.centroid
        out = out[['name', 'id', 'geometry']]
        if feature_name:
            out['type'] = feature_name
        else:
            out['type'] = feature_type
    else:
        out = pd.DataFrame(columns=['name', 'id', 'geometry', 'type'])
    return out
```

To get parking-less tracts:
```python
ada_tracts = get_county_tracts(ada['state'], ada['county']).to_crs(merc)
ada_parking = osm_query('Ada County, ID', 'way', 'amenity', 'parking', poly_to_point=False)
ada_tracts_parking = gpd.overlay(ada_tracts, ada_parking, how='symmetric_difference')

king_tracts = get_county_tracts(king['state'], king['county']).to_crs(merc)
king_parking = osm_query('King County, WA', 'way', 'amenity', 'parking', poly_to_point=False)
king_tracts_parking = gpd.overlay(king_tracts, king_parking, how='symmetric_difference')
```

This isn’t going to be a perfect solution, as a lot of parking lots aren’t tagged as such, but it will at least exclude a lot of them. Now we can create a function to iterate over each tract and get a “street score,” which I’m defining as the average length of streets within the tract divided by the average number of streets per intersection:
```python
def score_streets(gdf):
    out = gpd.GeoDataFrame()
    i = 1
    for index, row in gdf.iterrows():
        try:
            clear_output(wait=True)
            g = ox.graph_from_polygon(row['geometry'], network_type='walk')
            stats = ox.stats.basic_stats(g)
            row['street_score'] = stats['street_length_avg'] / stats['streets_per_node_avg']
            print('{}% complete'.format(round(((i / len(gdf)) * 100), 2)))
            ox.plot_graph(g, node_size=0)
            out = out.append(row)
            i += 1
        except:
            continue
    return out
```

This one takes a while, so I included a progress readout and map output to keep me entertained while I wait. There are also some tracts with no streets (the Puget Sound, I would assume), hence the try/except. Now we call the function:
```python
ada_street_scores = score_streets(ada_tracts_parking.to_crs(wgs))
ada_street_scores.plot(column='street_score', legend=True, figsize=(17, 11))

king_street_scores = score_streets(king_tracts_parking.to_crs(wgs))
king_street_scores.plot(column='street_score', legend=True, figsize=(17, 11))
```

A Mix of Uses
Now for the most complicated portion of the analysis. Here’s my general plan:
1. Define a list of neighborhood essentials:
- Office
- Park
- Bar
- Restaurant
- Coffee shop
- Library
- School
- Bank
- Doctor’s office
- Pharmacy
- Post office
- Grocery store
- Hardware store
2. Get a sample of points within each Census Tract
3. Count all the neighborhood essentials within walking distance of each sample point
4. Get an average of the number of essentials within walking distance for all the points in the tract.
We’ll start with the osm_query function that I used to find parking lots above to get all the neighborhood essentials in a given geography. Since OSM is open-source and editable, there are a few quirks in the data to work out. First, some people map things as point geometries, while others map the building footprints as areas. That’s why the function has the poly_to_point option to standardize all of these to points if we want. The raw output of the Overpass API geometry is a dictionary of coordinates, so we need to convert those to shapely geometries in order to feed them into GeoPandas:
```python
def osm_way_to_polygon(way):
    points = list()
    for p in range(len(way)):
        point = Point(way[p]['lon'], way[p]['lat'])
        points.append(point)
    poly = Polygon([[p.x, p.y] for p in points])
    return poly
```

We want these to come out in a single column, so we combine the outputs:
```python
def combine_osm_features(name, feature_type, feature_name=None):
    df = pd.concat([
        osm_query(name, 'node', feature_type, feature_name),
        osm_query(name, 'way', feature_type, feature_name),
    ])
    return df
```

Now we’re finally ready to get our neighborhood essentials:
```python
def get_key_features(name):
    df = pd.concat([
        combine_osm_features(name, 'office'),
        combine_osm_features(name, 'leisure', 'park'),
    ])
    amenities = ['bar', 'restaurant', 'cafe', 'library', 'school', 'bank',
                 'clinic', 'hospital', 'pharmacy', 'post_office']
    shops = ['supermarket', 'hardware', 'doityourself']
    for a in amenities:
        df = pd.concat([df, combine_osm_features(name, 'amenity', a)])
    for s in shops:
        df = pd.concat([df, combine_osm_features(name, 'shop', s)])
    df = df.replace('doityourself', 'hardware')
    return gpd.GeoDataFrame(df, crs=merc)
```

Next, we need to get a bunch of random points to search from:
```python
def random_sample_points(poly, npoints=10, tract_col='TRACTCE10'):
    min_x, min_y, max_x, max_y = poly.geometry.total_bounds
    points = []
    tracts = []
    i = 0
    while i < npoints:
        point = Point(random.uniform(min_x, max_x), random.uniform(min_y, max_y))
        if poly.geometry.contains(point).iloc[0]:
            points.append(point)
            tracts.append(poly[tract_col].iloc[0])
            i += 1
    out = gpd.GeoDataFrame({tract_col: tracts, 'geometry': points}, crs=poly.crs)
    return out
```

Next, we’ll buffer our points by our walkable distance, which I set at 1 km. If we wanted to get really fancy, we’d use walksheds instead, but this analysis is processor-heavy enough as it is, so I’m going to opt to stick with Euclidean distances. We’ll then grab all the neighborhood essentials within the buffer area, and calculate the percentage of the essentials that are within walking distance:
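Before the function itself, one aside: buffering by 1000 only means “1 km” because everything has been reprojected to EPSG:3857, whose units are meters. A minimal illustration with shapely:

```python
from shapely.geometry import Point

# In a meter-based CRS, buffer(1000) approximates a 1 km walking radius
walkshed = Point(0, 0).buffer(1000)

# The buffer is a polygon approximating a circle, so its area is
# close to (though slightly below) pi * 1000**2 square meters
print(walkshed.area)
```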
```python
def calculate_nearbyness_tract(tract, features, npoints=10, buffer_dist=1000):
    points = random_sample_points(tract, npoints).to_crs(merc)
    points.geometry = points.geometry.buffer(buffer_dist)
    cols = features['type'].unique().tolist()
    out = gpd.GeoDataFrame()
    i = 1
    for index, row in points.iterrows():
        row['point_id'] = i
        r = gpd.GeoDataFrame(pd.DataFrame(row).T, crs=points.crs,
                             geometry='geometry').to_crs(merc)
        gdf = gpd.overlay(features, r, how='intersection')
        out = out.append(gdf)
        i += 1
    out = out.groupby(['point_id', 'type', 'TRACTCE10'], as_index=False).count()
    out = out.pivot(['point_id', 'TRACTCE10'], 'type', 'name')
    out['nearby'] = (out.notnull().sum(axis=1)) / len(cols)
    out = pd.DataFrame(out.mean(axis=0, numeric_only=True)).T
    out.insert(0, 'tract', tract['TRACTCE10'].iloc[0], True)
    return out
```

That gets us the “nearbyness” of one tract. We now need to iterate over all the tracts in the county:
```python
def calculate_nearbyness(gdf, features, npoints=10, buffer_dist=1000):
    out = pd.DataFrame()
    cols = features['type'].unique().tolist()
    for index, row in gdf.iterrows():
        r = gpd.GeoDataFrame(pd.DataFrame(row).T, crs=gdf.crs, geometry='geometry')
        near = calculate_nearbyness_tract(r, features, npoints, buffer_dist)
        out = out.append(near)
    cols.insert(0, 'tract')
    cols.append('nearby')
    out.drop(out.columns.difference(cols), 1, inplace=True)
    return out
```

Now we can call our functions to get our analysis:
```python
ada_features = get_key_features(ada['name']).to_crs(merc)
ada_nearby = calculate_nearbyness(ada_tracts, ada_features)
geometrize_census_table_tracts(ada['state'], ada['county'], ada_nearby).plot(
    column='nearby', legend=True, figsize=(11, 17))

king_features = get_key_features(king['name']).to_crs(merc)
king_nearby = calculate_nearbyness(king_tracts, king_features)
geometrize_census_table_tracts(king['state'], king['county'], king_nearby).plot(
    column='nearby', legend=True, figsize=(11, 17))
```

Putting It All Together
We now have a score for each of Jane Jacobs’ factors for a quality neighborhood. I’m more interested in comparing tracts within counties than comparing the counties themselves, so I’m going to simply rank each tract on their scores and take an average to get to the “Jane Jacobs Index” (JJI):
```python
def jane_jacobs_index(density, housing_age, mix, streets, merge_col='TRACTCE10'):
    df = density.merge(housing_age, on=merge_col) \
                .merge(mix, on='tract') \
                .merge(streets, on=merge_col)
    # Rank each metric so that 1 is best
    df['street_rank'] = df['street_score'].rank(ascending=True, na_option='bottom')
    df['nearby_rank'] = df['nearby'].rank(ascending=False, na_option='top')
    df['housing_rank'] = df['Standard Deviation'].rank(ascending=True, na_option='bottom')
    df['density_rank'] = df['Density'].rank(ascending=False, na_option='top')
    df = df[['TRACTCE10', 'street_rank', 'nearby_rank', 'housing_rank', 'density_rank']]
    # Average the four rank columns (excluding the tract ID) into the index
    df['JJI'] = df[['street_rank', 'nearby_rank',
                    'housing_rank', 'density_rank']].mean(axis=1)
    return df
```

To see what we’ve made, we’ll call the function using the four dataframes we made earlier:
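As a toy illustration of the ranking scheme (three hypothetical tracts and two of the four metrics): each metric is ranked so that 1 is best, the composite is just the mean of the ranks, and so a lower JJI is better:

```python
import pandas as pd

# Hypothetical per-tract scores: higher density is better (rank descending),
# a lower street score is better (rank ascending)
scores = pd.DataFrame({
    'Density': [5000, 1200, 300],
    'street_score': [40.0, 90.0, 150.0],
}, index=['A', 'B', 'C'])

ranks = pd.DataFrame({
    'density_rank': scores['Density'].rank(ascending=False),
    'street_rank': scores['street_score'].rank(ascending=True),
})
ranks['JJI'] = ranks.mean(axis=1)
print(ranks['JJI'].idxmin())  # prints the best-ranked tract: A
```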
```python
ada_jji = jane_jacobs_index(ada_pop_tracts, ada_housing, ada_nearby, ada_street_scores)
ada_jji = geometrize_census_table_tracts(ada['state'], ada['county'], ada_jji,
                                         right_on='TRACTCE10')
ada_jji.plot(column='JJI', legend=True, figsize=(17, 11))

king_jji = jane_jacobs_index(king_pop_tracts, king_housing, king_nearby, king_street_scores)
king_jji = geometrize_census_table_tracts(king['state'], king['county'], king_jji,
                                          right_on='TRACTCE10')
king_jji.plot(column='JJI', legend=True, figsize=(17, 11))
```

Finally, for cool points, we’ll use Folium to create an interactive map:
```python
ada_map = folium.Map(location=[43.4595119, -116.524329], zoom_start=10)
folium.Choropleth(geo_data=ada_jji, data=ada_jji, columns=['TRACTCE10', 'JJI'],
                  fill_color='YlGnBu', key_on='feature.properties.TRACTCE10',
                  highlight=True, name='Jane Jacobs Index',
                  legend_name='Jane Jacobs Index', line_weight=.2).add_to(ada_map)
folium.Choropleth(geo_data=ada_jji, data=ada_jji, columns=['TRACTCE10', 'street_rank'],
                  fill_color='YlGnBu', key_on='feature.properties.TRACTCE10',
                  highlight=True, show=False, name='Street Rank',
                  legend_name='Street Rank', line_weight=.2).add_to(ada_map)
folium.Choropleth(geo_data=ada_jji, data=ada_jji, columns=['TRACTCE10', 'nearby_rank'],
                  fill_color='YlGnBu', key_on='feature.properties.TRACTCE10',
                  highlight=True, show=False, name='Nearby Rank',
                  legend_name='Nearby Rank', line_weight=.2).add_to(ada_map)
folium.Choropleth(geo_data=ada_jji, data=ada_jji, columns=['TRACTCE10', 'housing_rank'],
                  fill_color='YlGnBu', key_on='feature.properties.TRACTCE10',
                  highlight=True, show=False, name='Housing Age Rank',
                  legend_name='Housing Age Rank', line_weight=.2).add_to(ada_map)
folium.Choropleth(geo_data=ada_jji, data=ada_jji, columns=['TRACTCE10', 'density_rank'],
                  fill_color='YlGnBu', key_on='feature.properties.TRACTCE10',
                  highlight=True, show=False, name='Density Rank',
                  legend_name='Density Rank', line_weight=.2).add_to(ada_map)
folium.LayerControl(collapsed=False).add_to(ada_map)
ada_map.save('ada_map.html')

king_map = folium.Map(location=[47.4310271, -122.3638018], zoom_start=9)
folium.Choropleth(geo_data=king_jji, data=king_jji, columns=['TRACTCE10', 'JJI'],
                  fill_color='YlGnBu', key_on='feature.properties.TRACTCE10',
                  highlight=True, name='Jane Jacobs Index',
                  legend_name='Jane Jacobs Index', line_weight=.2).add_to(king_map)
folium.Choropleth(geo_data=king_jji, data=king_jji, columns=['TRACTCE10', 'street_rank'],
                  fill_color='YlGnBu', key_on='feature.properties.TRACTCE10',
                  highlight=True, show=False, name='Street Rank',
                  legend_name='Street Rank', line_weight=.2).add_to(king_map)
folium.Choropleth(geo_data=king_jji, data=king_jji, columns=['TRACTCE10', 'nearby_rank'],
                  fill_color='YlGnBu', key_on='feature.properties.TRACTCE10',
                  highlight=True, show=False, name='Nearby Rank',
                  legend_name='Nearby Rank', line_weight=.2).add_to(king_map)
folium.Choropleth(geo_data=king_jji, data=king_jji, columns=['TRACTCE10', 'housing_rank'],
                  fill_color='YlGnBu', key_on='feature.properties.TRACTCE10',
                  highlight=True, show=False, name='Housing Age Rank',
                  legend_name='Housing Age Rank', line_weight=.2).add_to(king_map)
folium.Choropleth(geo_data=king_jji, data=king_jji, columns=['TRACTCE10', 'density_rank'],
                  fill_color='YlGnBu', key_on='feature.properties.TRACTCE10',
                  highlight=True, show=False, name='Density Rank',
                  legend_name='Density Rank', line_weight=.2).add_to(king_map)
folium.LayerControl(collapsed=False).add_to(king_map)
king_map.save('king_map.html')
```

Here are links to the two newly created maps:
Ada County
King County
What’s it all mean? We’ll dive into that in Part II…
Translated from: https://towardsdatascience.com/how-jane-jacobs-y-is-your-neighborhood-65d678001c0d