

How Jane-Jacobs-y Is Your Neighborhood

In The Death and Life of Great American Cities, the great Jane Jacobs lays out four essential characteristics of a great neighborhood:

  • Density
  • A mix of uses
  • A mix of building ages, types and conditions
  • A street network of short, connected blocks

Of course, she goes into much greater detail on all of these, but I’m not going to get into all the eyes-on-the-street level stuff. Instead, I’m going to find neighborhoods with the right “bones” to build great urbanism onto. The caveat to this, as with most geospatial planning tools, is that it is not to be blindly trusted. There are a lot of details that need on-the-ground attention.

On to the data.

Tools

For this project, I’m going to use the following import statement:

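The import block itself didn’t survive in this copy of the post. Judging from the calls used throughout, a plausible reconstruction looks like the following — the library names are inferred from usage, not confirmed by the original, so treat this as a sketch:

```python
import random

import pandas as pd
import geopandas as gpd
import osmnx as ox
import folium
from census import Census
from us import states
from shapely.geometry import Point, Polygon
from OSMPythonTools.nominatim import Nominatim
from OSMPythonTools.overpass import Overpass, overpassQueryBuilder
from IPython.display import clear_output
```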

To start a session with the Census API, you need to give it a key (get one here). I’m also going to start up my OSM tools, define a couple projections, and create some dictionaries for the locations I’m interested in, for convenience:

census_api_key='50eb4e527e6c123fc8230117b3b526e1055ee8da'
nominatim=Nominatim()
overpass=Overpass()
c=Census(census_api_key)
wgs='EPSG:4326'
merc='EPSG:3857'

ada={'state':'ID','county':'001','name':'Ada County, ID'}
king={'state':'WA','county':'033','name':'King County, WA'}

These two are interesting because they have both seen significant post-war growth and have a broad spectrum of development patterns. As a former Boise resident, I know Ada County well, and can provide on-the-ground insights. King County has a robust data platform that will allow for a different set of insights in part II of this analysis.

Density

We’ll start off easy. The U.S. Census Bureau publishes population estimates regularly, so we just need to join those to some geometry and see how many people live in different areas. The smallest geography available for all the data that I’m going to use is the tract, so that’s what we’ll get.

def get_county_tracts(state, county_code):
    state_shapefile=gpd.read_file(states.lookup(state).shapefile_urls('tract'))
    county_shapefile=state_shapefile.loc[state_shapefile['COUNTYFP10']==county_code]
    return county_shapefile

Now that I have the geography, I just need to get the population to calculate the density. The Census table for that is ‘B01003_001E,’ obviously. Here’s the function for querying that table by county:

def get_tract_population(state, county_code):
    population=pd.DataFrame(c.acs5.state_county_tract('B01003_001E',
                                                      states.lookup(state).fips,
                                                      '{}'.format(county_code),
                                                      Census.ALL))
    population.rename(columns={'B01003_001E':'Total Population'}, inplace=True)
    population=population.loc[population['Total Population']!=0]
    return population

Now that we have a dataframe with population, and a geodataframe with tracts, we just need to merge them together:

def geometrize_census_table_tracts(state,county_code,table,densityColumn=None,left_on='TRACTCE10',right_on='tract'):
    tracts=get_county_tracts(state, county_code)
    geometrized_tracts=tracts.merge(table,left_on=left_on,right_on=right_on)
    if densityColumn:
        # ALAND10 is in square meters; 2589988.1103 m^2 = 1 square mile
        geometrized_tracts['Density']=geometrized_tracts[densityColumn]/(geometrized_tracts['ALAND10']/2589988.1103)
    return geometrized_tracts
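The magic number in the density calculation, 2589988.1103, is square meters per square mile: ALAND10 is reported in square meters, so the division yields people per square mile. A quick stand-alone check with made-up numbers:

```python
SQ_METERS_PER_SQ_MILE = 2589988.1103

# Hypothetical tract: 4,800 residents on exactly 20 square miles of land
aland_m2 = 20 * SQ_METERS_PER_SQ_MILE
population = 4800

# Same formula as in geometrize_census_table_tracts
density = population / (aland_m2 / SQ_METERS_PER_SQ_MILE)
print(density)  # 240.0 people per square mile
```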

This function is a little more generalized so that we can add geometries to other data besides population, as we’ll see later.

Now we can simply call our function and plot the results:

ada_pop_tracts=geometrize_census_table_tracts(ada['state'],ada['county'],get_tract_population(ada['state'],ada['county']),'Total Population')
ada_density_plot=ada_pop_tracts.plot(column='Density',legend=True,figsize=(17,11))

king_pop_tracts=geometrize_census_table_tracts(king['state'],king['county'],get_tract_population(king['state'],king['county']),'Total Population')
king_pop_tracts.plot(column='Density',legend=True,figsize=(17,11))

Mix of Building Ages

The next most complicated search is to find a variety of building ages within each tract. Luckily, the Census has some data that’s close enough. They track the age of housing within tracts by decade of construction. To start, we’ll make a dictionary out of these table names:

housing_tables={'pre_39':'B25034_011E',
                '1940-1949':'B25034_010E',
                '1950-1959':'B25034_009E',
                '1960-1969':'B25034_008E',
                '1970-1979':'B25034_007E',
                '1980-1989':'B25034_006E',
                '1990-1999':'B25034_005E',
                '2000-2009':'B25034_004E'}

Next, create a function to combine all of these into a single dataframe. Since the Jane-Jacobsy-est tracts will be closest to equal across each decade, the easy metric for this is going to be the standard deviation, with the lowest being best:

def get_housing_age_diversity(state,county):
    cols=list(housing_tables.keys())
    cols.insert(0,'TRACTCE10')
    cols.insert(1,'geometry')
    out=get_county_tracts(state,county)
    for key, value in housing_tables.items():
        out=out.merge(pd.DataFrame(c.acs5.state_county_tract(value,
                                                             states.lookup(state).fips,
                                                             county,
                                                             Census.ALL)),
                      left_on='TRACTCE10',right_on='tract')
        out.rename(columns={value:key},inplace=True)
    out=out[cols]
    # Standard deviation across the decade columns only (TRACTCE10 and geometry are non-numeric)
    out['Standard Deviation']=out[list(housing_tables.keys())].std(axis=1)
    return out
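To see why standard deviation works as an evenness metric, compare two made-up tracts (the unit counts per decade bin are invented for illustration): one built steadily across all eight decades, one built in a single boom.

```python
from statistics import stdev

# Hypothetical unit counts for the eight decade bins, pre-1939 through 2000-2009
even_mix=[120, 120, 120, 120, 120, 120, 120, 120]   # steady construction over time
one_boom=[0, 0, 0, 0, 0, 0, 0, 960]                 # a single subdivision era

print(stdev(even_mix))  # 0.0 -- the most Jane-Jacobsy mix possible
print(stdev(one_boom))  # large -- a single-era monoculture
```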

Again, we simply call our function and plot the results:

ada_housing=get_housing_age_diversity(ada['state'],ada['county'])
ada_housing.plot(column='Standard Deviation',legend=True,figsize=(17,11))

king_housing=get_housing_age_diversity(king['state'],king['county'])
king_housing.plot(column='Standard Deviation',legend=True,figsize=(17,11))

A Network of Short, Interconnected Blocks

Now we start getting complicated. Luckily, we can get a head start thanks to the osmnx Python package. We’ll use the graph_from_polygon function to get the street network within each Census tract, then the basic_stats function to get the average street length and average number of streets per intersection, or “nodes” in network-analysis terms. However, before we do that, we need to fix one problem with our networks: OpenStreetMap counts parking-lot drive aisles as part of the street network, which is going to skew our results, as these tend to be relatively short and interconnect within the interior of surface parking lots. To fix this, we’ll query all the parking lots in the county, then exclude them from our tracts to get some Swiss-cheesy tracts. First, the function to query OSM for stuff, generalized since we’ll be using it heavily in the next section:

def osm_query(area,elementType,feature_type,feature_name=None,poly_to_point=True):
    if feature_name:
        q=overpassQueryBuilder(area=nominatim.query(area).areaId(),
                               elementType=elementType,
                               selector='"{ft}"="{fn}"'.format(ft=feature_type,fn=feature_name),
                               out='body',includeGeometry=True)
    else:
        q=overpassQueryBuilder(area=nominatim.query(area).areaId(),
                               elementType=elementType,
                               selector='"{ft}"'.format(ft=feature_type),
                               out='body',includeGeometry=True)
    if len(overpass.query(q).toJSON()['elements'])>0:
        out=pd.DataFrame(overpass.query(q).toJSON()['elements'])
        if elementType=='node':
            out=gpd.GeoDataFrame(out,geometry=gpd.points_from_xy(out['lon'],out['lat']),crs=wgs)
            out=out.to_crs(merc)
        if elementType=='way':
            geometry=[]
            for i in out.geometry:
                geo=osm_way_to_polygon(i)
                geometry.append(geo)
            out.geometry=geometry
            out=gpd.GeoDataFrame(out,crs=wgs)
            out=out.to_crs(merc)
            if poly_to_point:
                out.geometry=out.geometry.centroid
        out=pd.concat([out.drop(['tags'],axis=1),out['tags'].apply(pd.Series)],axis=1)
        if elementType=='relation':
            out=pd.concat([out.drop(['members'],axis=1),out['members'].apply(pd.Series)[0].apply(pd.Series)],axis=1)
            geometry=[]
            for index, row in out.iterrows():
                row['geometry']=osm_way_to_polygon(row['geometry'])
                geometry.append(row['geometry'])
            out.geometry=geometry
            out=gpd.GeoDataFrame(out,crs=wgs)
            out=out.to_crs(merc)
            if poly_to_point:
                out.geometry=out.geometry.centroid
        out=out[['name','id','geometry']]
        if feature_name:
            out['type']=feature_name
        else:
            out['type']=feature_type
    else:
        out=pd.DataFrame(columns=['name','id','geometry','type'])
    return out

To get parking-less tracts:

ada_tracts=get_county_tracts(ada['state'],ada['county']).to_crs(merc)
ada_parking=osm_query('Ada County, ID','way','amenity','parking',poly_to_point=False)
ada_tracts_parking=gpd.overlay(ada_tracts,ada_parking,how='symmetric_difference')

king_tracts=get_county_tracts(king['state'],king['county']).to_crs(merc)
king_parking=osm_query('King County, WA','way','amenity','parking',poly_to_point=False)
king_tracts_parking=gpd.overlay(king_tracts,king_parking,how='symmetric_difference')

This isn’t going to be a perfect solution as a lot of parking lots aren’t tagged as such, but it will at least exclude a lot of them. Now we can create a function to iterate over each tract and get a “street score” that I’m defining as the average length of streets within the tract divided by the number of streets per intersection:

def score_streets(gdf):
    out=gpd.GeoDataFrame()
    i=1
    for index, row in gdf.iterrows():
        try:
            clear_output(wait=True)
            g=ox.graph_from_polygon(row['geometry'],network_type='walk')
            stats=ox.stats.basic_stats(g)
            row['street_score']=stats['street_length_avg']/stats['streets_per_node_avg']
            print('{}% complete'.format(round(((i/len(gdf))*100),2)))
            ox.plot_graph(g,node_size=0)
            out=out.append(row)
            i+=1
        except:
            continue
    return out
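To make the “street score” concrete, here are two invented tracts with plausible basic_stats-style numbers (the values are illustrative, not real OSM results). Lower scores mean shorter, better-connected blocks:

```python
# Hypothetical basic_stats output for two tracts (values are made up)
fine_grain={'street_length_avg':90.0,'streets_per_node_avg':3.6}   # short, gridded blocks
sprawl={'street_length_avg':240.0,'streets_per_node_avg':2.4}      # long blocks, cul-de-sacs

def street_score(stats):
    # Lower is better: short segments divided by well-connected intersections
    return stats['street_length_avg']/stats['streets_per_node_avg']

print(street_score(fine_grain))  # 25.0
print(street_score(sprawl))      # 100.0
```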

This one takes a while, so I included a progress bar and map output to keep me entertained while I wait. There are also some tracts with no streets (I would assume the Puget Sound), hence the try/except. Now we call the function:

ada_street_scores=score_streets(ada_tracts_parking.to_crs(wgs))
ada_street_scores.plot(column='street_score',legend=True,figsize=(17,11))

king_street_scores=score_streets(king_tracts_parking.to_crs(wgs))
king_street_scores.plot(column='street_score',legend=True,figsize=(17,11))

A Mix of Uses

Now for the most complicated portion of the analysis. Here’s my general plan:

  1. Query OpenStreetMap for all the components necessary for a “15-minute neighborhood”:
     • Office
     • Park
     • Bar
     • Restaurant
     • Coffee shop
     • Library
     • School
     • Bank
     • Doctor’s office
     • Pharmacy
     • Post office
     • Grocery store
     • Hardware store
  2. Get a sample of points within each Census tract
  3. Count all the neighborhood essentials within walking distance of each sample point
  4. Get an average of the number of essentials within walking distance for all the points in the tract.

We’ll start with the osm_query function that I used above to find parking lots, to get all the neighborhood essentials in a given geography. Since OSM is open source and editable, there are a few quirks in the data to work out. First, some people map features as point geometries, while others map building footprints. That’s why the function has the poly_to_point option, to standardize all of these to points if we want. The raw output of the Overpass API geometry is a dictionary of coordinates, so we need to convert those to Shapely geometries in order to feed them into GeoPandas:

def osm_way_to_polygon(way):
    points=list()
    for p in range(len(way)):
        point=Point(way[p]['lon'],way[p]['lat'])
        points.append(point)
    poly=Polygon([[p.x, p.y] for p in points])
    return poly

We want these to come out in a single column, so we combine the outputs:

def combine_osm_features(name,feature_type,feature_name=None):
    df=pd.concat([osm_query(name,'node',feature_type,feature_name),
                  osm_query(name,'way',feature_type,feature_name)])
    return df

Now we’re finally ready to get our neighborhood essentials:

def get_key_features(name):
    df=pd.concat([combine_osm_features(name,'office'),
                  combine_osm_features(name,'leisure','park')])
    amenities=['bar','restaurant','cafe','library','school','bank','clinic','hospital','pharmacy','post_office']
    shops=['supermarket','hardware','doityourself']
    for a in amenities:
        df=pd.concat([df,combine_osm_features(name,'amenity',a)])
    for s in shops:
        df=pd.concat([df,combine_osm_features(name,'shop',s)])
    df=df.replace('doityourself','hardware')
    return gpd.GeoDataFrame(df,crs=merc)

Next, we need to get a bunch of random points to search from:

def random_sample_points(poly,npoints=10,tract_col='TRACTCE10'):
    min_x,min_y,max_x,max_y=poly.geometry.total_bounds
    points=[]
    tracts=[]
    i=0
    while i < npoints:
        point=Point(random.uniform(min_x,max_x),random.uniform(min_y,max_y))
        if poly.geometry.contains(point).iloc[0]:
            points.append(point)
            tracts.append(poly[tract_col].iloc[0])
            i+=1
    out=gpd.GeoDataFrame({tract_col:tracts,'geometry':points},crs=poly.crs)
    return out
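This is plain rejection sampling: draw uniformly from the bounding box and keep only points that land inside the polygon. The same idea stripped of the GeoPandas machinery, using a hypothetical triangular “tract” as the containment test:

```python
import random

def in_triangle(x, y):
    # Stand-in for poly.geometry.contains(): a unit right triangle
    return x >= 0 and y >= 0 and x + y <= 1

random.seed(0)
samples=[]
while len(samples) < 10:
    # Draw from the bounding box, keep only hits inside the shape
    x, y = random.uniform(0, 1), random.uniform(0, 1)
    if in_triangle(x, y):
        samples.append((x, y))

print(len(samples))  # 10 points, all guaranteed inside the triangle
```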

Next, we’ll buffer our points by our walkable distance, which I set at 1 km. If we wanted to get really fancy, we’d use walksheds instead, but this analysis is processor-heavy enough as it is, so I’m going to stick with Euclidean distances. We’ll then grab all the neighborhood essentials within the buffer area and calculate the percentage of the essentials that are within walking distance:

def calculate_nearbyness_tract(tract,features,npoints=10,buffer_dist=1000):
    points=random_sample_points(tract,npoints).to_crs(merc)
    points.geometry=points.geometry.buffer(buffer_dist)
    cols=features['type'].unique().tolist()
    out=gpd.GeoDataFrame()
    i=1
    for index, row in points.iterrows():
        row['point_id']=i
        r=gpd.GeoDataFrame(pd.DataFrame(row).T,crs=points.crs,geometry='geometry').to_crs(merc)
        gdf=gpd.overlay(features,r,how='intersection')
        out=out.append(gdf)
        i+=1
    out=out.groupby(['point_id','type','TRACTCE10'],as_index=False).count()
    out=out.pivot(['point_id','TRACTCE10'],'type','name')
    out['nearby']=(out.notnull().sum(axis=1))/len(cols)
    out=pd.DataFrame(out.mean(axis=0,numeric_only=True)).T
    out.insert(0,'tract',tract['TRACTCE10'].iloc[0],True)
    return out
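The “nearbyness” math reduces to: for each sample point, the fraction of essential types with at least one instance inside the buffer, then the average over points. A stripped-down version with invented counts (the points and counts are hypothetical):

```python
# Hypothetical essentials found within 1 km of each of three sample points
essential_types=['cafe','park','supermarket','school']
points=[
    {'cafe':2,'park':1},                     # 2 of 4 types nearby
    {'cafe':1,'park':1,'supermarket':1},     # 3 of 4
    {},                                      # 0 of 4
]

def nearbyness(point_counts, types):
    # Fraction of essential types present per point, averaged over all points
    per_point=[len(p)/len(types) for p in point_counts]
    return sum(per_point)/len(per_point)

print(round(nearbyness(points, essential_types), 4))  # (0.5 + 0.75 + 0.0) / 3 = 0.4167
```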

That gets us the “nearbyness” of one tract. We now need to iterate over all the tracts in the county:

def calculate_nearbyness(gdf,features,npoints=10,buffer_dist=1000):
    out=pd.DataFrame()
    cols=features['type'].unique().tolist()
    for index, row in gdf.iterrows():
        r=gpd.GeoDataFrame(pd.DataFrame(row).T,crs=gdf.crs,geometry='geometry')
        near=calculate_nearbyness_tract(r,features,npoints,buffer_dist)
        out=out.append(near)
    cols.insert(0,'tract')
    cols.append('nearby')
    out.drop(out.columns.difference(cols),axis=1,inplace=True)
    return out

Now we can call our functions to get our analysis:

ada_features=get_key_features(ada['name']).to_crs(merc)
ada_nearby=calculate_nearbyness(ada_tracts,ada_features)
geometrize_census_table_tracts(ada['state'],ada['county'],ada_nearby).plot(column='nearby',legend=True,figsize=(11,17))

king_features=get_key_features(king['name']).to_crs(merc)
king_nearby=calculate_nearbyness(king_tracts,king_features)
geometrize_census_table_tracts(king['state'],king['county'],king_nearby).plot(column='nearby',legend=True,figsize=(11,17))

Putting It All Together

We now have a score for each of Jane Jacobs’ factors for a quality neighborhood. I’m more interested in comparing tracts within counties than comparing the counties themselves, so I’m going to simply rank each tract on their scores and take an average to get to the “Jane Jacobs Index” (JJI):

def jane_jacobs_index(density,housing_age,mix,streets,merge_col='TRACTCE10'):
    df=density.merge(housing_age,on=merge_col).merge(mix,on='tract').merge(streets,on=merge_col)
    df['street_rank']=df['street_score'].rank(ascending=True,na_option='bottom')
    df['nearby_rank']=df['nearby'].rank(ascending=False,na_option='top')
    df['housing_rank']=df['Standard Deviation'].rank(ascending=True,na_option='bottom')
    df['density_rank']=df['Density'].rank(ascending=False,na_option='top')
    df=df[['TRACTCE10','street_rank','nearby_rank','housing_rank','density_rank']]
    df['JJI']=df.mean(axis=1)
    return(df)
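The index is just an average of the four per-factor ranks. With three invented tracts already ranked (1 = best on that factor), the arithmetic looks like:

```python
# Hypothetical per-factor ranks for three tracts (1 = best)
tracts={
    'A':{'density_rank':1,'nearby_rank':2,'housing_rank':1,'street_rank':1},
    'B':{'density_rank':2,'nearby_rank':1,'housing_rank':3,'street_rank':2},
    'C':{'density_rank':3,'nearby_rank':3,'housing_rank':2,'street_rank':3},
}

# Lower JJI = closer to Jane Jacobs' ideal on average across the four factors
jji={t:sum(ranks.values())/len(ranks) for t, ranks in tracts.items()}
print(jji)  # {'A': 1.25, 'B': 2.0, 'C': 2.75}
```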

To see what we’ve made, we’ll call the function using the four dataframes we made earlier:

ada_jji=jane_jacobs_index(ada_pop_tracts,ada_housing,ada_nearby,ada_street_scores)
ada_jji=geometrize_census_table_tracts(ada['state'],ada['county'],ada_jji,right_on='TRACTCE10')
ada_jji.plot(column='JJI',legend=True, figsize=(17,11))

king_jji=jane_jacobs_index(king_pop_tracts,king_housing,king_nearby,king_street_scores)
king_jji=geometrize_census_table_tracts(king['state'],king['county'],king_jji,right_on='TRACTCE10')
king_jji.plot(column='JJI',legend=True, figsize=(17,11))

Finally, for cool points, we’ll use Folium to create an interactive map:

ada_map=folium.Map(location=[43.4595119,-116.524329],zoom_start=10)
folium.Choropleth(geo_data=ada_jji,data=ada_jji,columns=['TRACTCE10','JJI'],fill_color='YlGnBu',key_on='feature.properties.TRACTCE10',highlight=True,name='Jane Jacobs Index',legend_name='Jane Jacobs Index',line_weight=.2).add_to(ada_map)
folium.Choropleth(geo_data=ada_jji,data=ada_jji,columns=['TRACTCE10','street_rank'],fill_color='YlGnBu',key_on='feature.properties.TRACTCE10',highlight=True,show=False,name='Street Rank',legend_name='Street Rank',line_weight=.2).add_to(ada_map)
folium.Choropleth(geo_data=ada_jji,data=ada_jji,columns=['TRACTCE10','nearby_rank'],fill_color='YlGnBu',key_on='feature.properties.TRACTCE10',highlight=True,show=False,name='Nearby Rank',legend_name='Nearby Rank',line_weight=.2).add_to(ada_map)
folium.Choropleth(geo_data=ada_jji,data=ada_jji,columns=['TRACTCE10','housing_rank'],fill_color='YlGnBu',key_on='feature.properties.TRACTCE10',highlight=True,show=False,name='Housing Age Rank',legend_name='Housing Age Rank',line_weight=.2).add_to(ada_map)
folium.Choropleth(geo_data=ada_jji,data=ada_jji,columns=['TRACTCE10','density_rank'],fill_color='YlGnBu',key_on='feature.properties.TRACTCE10',highlight=True,show=False,name='Density Rank',legend_name='Density Rank',line_weight=.2).add_to(ada_map)
folium.LayerControl(collapsed=False).add_to(ada_map)
ada_map.save('ada_map.html')

king_map=folium.Map(location=[47.4310271,-122.3638018],zoom_start=9)
folium.Choropleth(geo_data=king_jji,data=king_jji,columns=['TRACTCE10','JJI'],fill_color='YlGnBu',key_on='feature.properties.TRACTCE10',highlight=True,name='Jane Jacobs Index',legend_name='Jane Jacobs Index',line_weight=.2).add_to(king_map)
folium.Choropleth(geo_data=king_jji,data=king_jji,columns=['TRACTCE10','street_rank'],fill_color='YlGnBu',key_on='feature.properties.TRACTCE10',highlight=True,show=False,name='Street Rank',legend_name='Street Rank',line_weight=.2).add_to(king_map)
folium.Choropleth(geo_data=king_jji,data=king_jji,columns=['TRACTCE10','nearby_rank'],fill_color='YlGnBu',key_on='feature.properties.TRACTCE10',highlight=True,show=False,name='Nearby Rank',legend_name='Nearby Rank',line_weight=.2).add_to(king_map)
folium.Choropleth(geo_data=king_jji,data=king_jji,columns=['TRACTCE10','housing_rank'],fill_color='YlGnBu',key_on='feature.properties.TRACTCE10',highlight=True,show=False,name='Housing Age Rank',legend_name='Housing Age Rank',line_weight=.2).add_to(king_map)
folium.Choropleth(geo_data=king_jji,data=king_jji,columns=['TRACTCE10','density_rank'],fill_color='YlGnBu',key_on='feature.properties.TRACTCE10',highlight=True,show=False,name='Density Rank',legend_name='Density Rank',line_weight=.2).add_to(king_map)
folium.LayerControl(collapsed=False).add_to(king_map)
king_map.save('king_map.html')

Here are links to the two newly created maps:

Ada County

King County

What’s it all mean? We’ll dive into that in Part II…

Translated from: https://towardsdatascience.com/how-jane-jacobs-y-is-your-neighborhood-65d678001c0d
